• 0 Posts
  • 35 Comments
Joined 2 years ago
Cake day: March 22nd, 2024


  • I’ve used 5K some.

    IMO the only tangible benefit is for computer-type stuff. It gives the OS more headroom to scale content well, avoiding aliasing or blurry, fractional UI rendering. 4:1 rendering (to save power) would be quite viable too; quick math on that below.

    Another example would be editing workflows: 1:1 pixel mapping of (say, 4K) content while leaving plenty of room for the UI.

    But for native content? Like movies?

    Pointless, unless you’re ridiculously close to a huge display, even if your vision is 20/20. And it’s too expensive to be worth it: I’d much rather that money go into other aspects of the panel.
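
    To put numbers on the 4:1 point, here’s a quick back-of-the-envelope sketch (the resolutions are the standard ones; the arithmetic is mine):

    ```python
    # 5K is exactly 2x 1440p per axis, so both 2:1 HiDPI UI scaling and
    # 4:1 power-saving rendering land on clean integer ratios (no blur).
    native = (5120, 2880)                      # 5K panel
    render = (native[0] // 2, native[1] // 2)  # 4:1 render: half res per axis
    assert render == (2560, 1440)              # plain 1440p
    saved = 1 - (render[0] * render[1]) / (native[0] * native[1])
    print(f"render {render}, {saved:.0%} fewer pixels shaded")  # 75%
    ```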








  • Ashamed-but-not-ashamed to admit it: I have quite a fascination with this, and I’m working on (well, mostly thinking about) a fanfic that explores Avatar lore more scientifically. A major perspective is a Zaofu engineer-type character who examines spirits, his own bending, and other lore like the portals or vines through that lens.

    That’s where my brain goes. Shrug.

    I was hoping we’d get a “bender engineer” character in canon, but that doesn’t appear to be the direction they’re going.


  • And yet the article is basically all upvotes.

    Lately, Lemmy has been feeling way too much like Reddit to me: clickbait trends hard as long as it affirms the prevailing sentiment.

    I’ve even pointed this out once, and had OP basically respond with “I don’t care if it’s misinformation. I agree with the sentiment.” And mods did nothing.

    That’s called disinformation.

    Not that information hygiene is a priority here :(


  • Yeah, comments often “correct” the headline, but that doesn’t undo the order-of-magnitude more exposure the original post gets.

    As much as the Twitter format sucks, Lemmy could really use a similar “community note” blurb right below headlines.


  • Preface: I love The Guardian, and fuck Altman.

    But this is a bad headline.

    Correlation is not causation. It’s disturbing that OpenAI even possesses this data (and has mined it for statistics), and that millions of people somehow think their ChatGPT app has any semblance of privacy, but what I’m reading is that millions of people brought suicidal ideation to ChatGPT.

    Not that it’s the root cause.

    The headline is that the mental health of the world sucks, not that ChatGPT suddenly inflamed the crisis. The Guardian should be ashamed of shoehorning a “Fuck AI” angle into this for clicks when there are literally a million other malicious bits of OpenAI they could cover. This is a sad story, sourced from an app with an unprecedented (and disturbing) window into folks’ psyches en masse, and they’ve twisted it into clickbait.




    You don’t have to overclock, but I’d at least look at your mobo’s settings. Many mobos apply really, really bad non-stock settings by default, especially with XMP memory.

    An example: they might default to 1.3V VSOC, which is absolutely a “default overclock”; it makes idle power skyrocket and can destabilize the CPU, because the Infinity Fabric doesn’t like that. For reference, I personally wouldn’t go over 1.2V VSOC, and I’d shoot for around 1.1V (a quick way to check your current value is sketched below).

    I’d recommend Buildzoid’s videos:

    https://youtu.be/dlYxmRcdLVw

    https://youtu.be/Xcn_nvWGj7U

    And Igor’s Lab for general text info.

    Also, if you don’t at least turn on XMP (i.e., run the RAM at its rated speed), it’ll fall back to a slow JEDEC default and hurt your inference speed rather significantly.
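
    If you want to see what the board is actually feeding the SoC before touching anything, here’s a rough check from Linux. It assumes lm-sensors is installed and that your board/driver exposes an SoC voltage reading at all; the label names are guesses that vary by vendor:

    ```python
    # Dump lm-sensors output and grep for SoC-ish voltage labels.
    # Label names ("VSOC", "SoC", "VDDCR_SOC") vary by board and driver.
    import subprocess

    out = subprocess.run(["sensors"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        if any(k in line.lower() for k in ("vsoc", "soc", "vddcr")):
            print(line.strip())
    ```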






  • Not anymore.

    I can run GLM 4.6 on a Ryzen desktop with a single RTX 3090 at 7 tokens/s, and it blows lesser API models away. I can run 14-49Bs (or GLM Air) for more utilitarian cases, and they do just fine.

    And I can reach for free or dirt-cheap APIs, called from the same local tooling, when needed.

    But again, it’s all “special interest tinkerer” tier. You can’t do that with ollama run; you have to mess with exotic libraries, tweaked setups, and RAG chains to squeeze out that kind of performance (a rough sketch of the flavor is below). But all of that getting simplified is inevitable.
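
    For flavor, here’s roughly what the non-ollama route looks like with llama-cpp-python. The model file, quant, and layer split are placeholders, not my actual setup (which leans on more exotic forks):

    ```python
    # Minimal hybrid CPU/GPU inference sketch: offload what fits on the 3090,
    # run the remaining layers from system RAM. Model path is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="GLM-4.6-IQ4_XS.gguf",  # hypothetical quantized GGUF
        n_gpu_layers=20,                   # tune to your VRAM
        n_ctx=8192,
    )
    out = llm("Q: Why offload only some layers?\nA:", max_tokens=128)
    print(out["choices"][0]["text"])
    ```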



  • “Net might get to where you need AI”

    I hate to say it, but we’re basically there, and AI doesn’t help a ton. If the net is slop and trash, there’s not a lot it can do.

    “Hopefully by then they will have figured out a way to make it free.”

    Fortunately, self-hosting is 100% taking off. Getting a (free) local agent to sift through the net’s sludge will soon be about as easy as tweaking Firefox.

    You can already do it. I already do it (and am happy to ramble about how when asked), though for now it’s more of an enthusiast/tinkerer thing; a rough sketch follows.
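
    The shape of it, if you’re curious. The endpoint and model name here are assumptions (any local OpenAI-compatible server, like llama.cpp’s, looks the same), and real setups layer search, HTML stripping, and RAG on top:

    ```python
    # Fetch a page and have a local model triage it. Deliberately crude.
    import requests

    page = requests.get("https://example.com/article", timeout=10).text

    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # local llama.cpp server
        json={
            "model": "local",
            "messages": [
                {"role": "system",
                 "content": "Summarize the page and flag obvious SEO spam."},
                {"role": "user", "content": page[:8000]},  # crude truncation
            ],
        },
        timeout=120,
    )
    print(resp.json()["choices"][0]["message"]["content"])
    ```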