• 0 Posts
  • 113 Comments
Joined 2 years ago
Cake day: June 30th, 2023




  • It’s because they are horrible at problem solving and creativity. They are based on word association, from training purely on text. A technological singularity would need to innovate on its own so that it could improve the hardware it runs on and its own software.

    Even though GitHub Copilot has impressed me by implementing a three-file Python program from start to finish such that I barely wrote any code, I had to hold its hand the entire way and give it very specific instructions about every function as we added the pieces one by one to build it up. Even then, it would get parts I failed to specify completely wrong, and it initially implemented things in a very inefficient way.

    There are fundamental things that a technological singularity needs that today’s LLMs lack entirely. I think the changes required to get there would also turn them from LLMs into something else. The training is part of it, but fundamentally, LLMs are massive word-association engines. Words (or vectors translated to and from words) are their entire world, and they can only describe things with those words because they were trained on other people doing that.
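
    To make that concrete, here’s a minimal sketch of what “vectors translated to and from words” looks like, with a made-up four-word vocabulary and toy numbers (not any real model’s weights):

    ```python
    import numpy as np

    # Toy embedding table (made-up values; real models learn vectors with
    # thousands of dimensions for tens of thousands of tokens).
    EMBEDDINGS = {
        "pizza":   np.array([0.90, 0.10, 0.00]),
        "cheese":  np.array([0.85, 0.15, 0.05]),
        "topping": np.array([0.80, 0.20, 0.10]),
        "glue":    np.array([0.10, 0.90, 0.20]),
    }

    def most_associated(word: str) -> str:
        # The model's whole "world": which vectors sit near which others.
        v = EMBEDDINGS[word]
        def cosine(u):
            return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))
        others = {w: cosine(u) for w, u in EMBEDDINGS.items() if w != word}
        return max(others, key=others.get)

    print(most_associated("pizza"))  # "cheese", with these toy numbers
    ```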



  • Buddahriffic@lemmy.world to Technology@lemmy.world · What If There’s No AGI?
    11 days ago

    I don’t hate AI or LLMs. As much as it might mess up civilization as we know it, I’d like to see the technological singularity during my lifetime, though I think the fixation on LLMs will do more to delay it than to realize it.

    I just think a lot of people are fooled by their conversational ability into thinking they are more than what they are. And because these models are massive, with billions or trillions of weights that the data is encoded into, and because no one understands how they work well enough to definitively say “this is why it suggested glue as a pizza topping,” whether or not they approach AGI gets left in a grey zone.

    I’ll agree, though, that it was maybe too much to say they don’t have knowledge. “Having knowledge” is a pretty abstract and hard-to-define thing itself, though I’m also not sure it directly translates to having intelligence (which is also poorly defined, tbf). One could argue that encyclopedias have knowledge, but they don’t have intelligence. And I’d argue that LLMs are more akin to encyclopedias than to how we operate (though maybe more like a chatbot dictionary that pretends to be an encyclopedia).


  • Buddahriffic@lemmy.world to Technology@lemmy.world · What If There’s No AGI?
    12 days ago

    Calling the errors “hallucinations” is kinda misleading, because it implies there’s otherwise-real knowledge that false stuff gets mixed into. That’s not how LLMs work.

    LLMs are purely about associating words with other words. They’re just massive enough to attach a lot of context to those associations and seem conversational about almost any topic, but there’s no depth to any of it. Where there seems to be depth, it’s just because the contexts in the training data got very specific, which is bound to happen when a model is trained on every online conversation its owners (or rather, people hired by people hired by its owners) could get their hands on.

    All it does is predict, given the tokens provided and those already generated, plus a bit of randomness, which token is most likely to come next, then repeat until it predicts an “end” token.
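
    A minimal sketch of that loop, assuming a hypothetical model_logits function standing in for the real network (it would return a score per candidate token); the temperature supplies the “bit of randomness”:

    ```python
    import math
    import random

    END = "<end>"  # hypothetical end-of-sequence token

    def sample_next(logits, temperature=0.8):
        # Softmax with temperature: turn raw scores into sampling weights.
        tokens = list(logits.keys())
        scaled = [logits[t] / temperature for t in tokens]
        m = max(scaled)
        weights = [math.exp(s - m) for s in scaled]
        # The "bit of randomness": sample from the distribution instead
        # of always taking the single most likely token.
        return random.choices(tokens, weights=weights, k=1)[0]

    def generate(prompt_tokens, model_logits, max_tokens=100):
        tokens = list(prompt_tokens)
        for _ in range(max_tokens):
            nxt = sample_next(model_logits(tokens))
            if nxt == END:
                break
            tokens.append(nxt)  # feed the prediction back in and repeat
        return tokens
    ```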

    Early on, when using LLMs, I’d ask them how they did things or why they failed at certain things. ChatGPT would answer, but only because it was trained on text that explained what it could and couldn’t do. Its capabilities don’t actually include self-reflection or self-understanding, or any understanding at all. The text it was trained on doesn’t even have to reflect how it really works.


  • Not sure where 1440p would land, but I was going to upgrade my monitor to 4K. After using 1440p for a while, though, I realized I’m not disappointed with my current resolution at all, so I opted for a 1440p ultrawide instead and haven’t regretted it.

    My TV is 4k, but I have no intention of even seriously looking at anything 8k.

    Screen specs seem like a mostly solved problem. It would be great if the focus could shift to efficiency improvements instead of adding more unnecessary power. Actually, boot time could be way better, too (i.e., get rid of the smart shit running on a weak processor, with emphasis on the first part).





  • Computer science is not IT. IT is about knowing how to use, deploy, and administer existing software solutions, along with a bit of light development to get things to work together when they aren’t necessarily directly compatible.

    CS is about creating software solutions and understanding how the pieces fit together (at a low level), as well as how to evaluate algorithms and approach problem solving.

    It’s not even coding, though coding is obviously involved. A coding class will teach you a language and give you problems to help you learn that language. CS classes might not care what language you use, or they might tell you to use specific ones and expect you to learn them on your own time. The languages are just tools through which you learn the CS concepts.

    An IT professional might know about kernel features and how they relate to overall performance. A coder might be aware that there is a kernel doing OS stuff under the hood. A computer scientist might know the specifics of various parts of what a kernel does and how one is implemented; perhaps they’ve even implemented one themselves for a class. (I have, though I was personally interested in that kind of thing, and it was for a class notorious for being difficult, so most grads didn’t.)


  • Blizzard used a cheat-detection system in WoW that allowed their server to send arbitrary code for clients to run. If that code failed to return an expected result, it was a sign of tampering. Emulating the Windows API to run on Linux is a form of tampering, though obviously not necessarily a sign of cheating. My guess is they used some check that didn’t work on Linux and banned everyone who failed it, before realizing that some users failed because of Linux, and were then able to separate the Linux users from detected cheaters by how the check failed (either that, or they had to undo all the bans from that round).
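
    For illustration, a minimal sketch of that kind of challenge-response check (hypothetical names and a fake client image; the real system sent more varied scan code than a single hash):

    ```python
    import hashlib
    import os

    def make_challenge(clean_image: bytes):
        # Server side: pick a random 64-byte region of the known-good
        # client image and precompute the hash an untampered client
        # should report back.
        offset = int.from_bytes(os.urandom(2), "big") % (len(clean_image) - 64)
        expected = hashlib.sha256(clean_image[offset:offset + 64]).hexdigest()
        return offset, expected

    def answer_challenge(client_memory: bytes, offset: int) -> str:
        # Client side: run the check the server sent (here, "hash this range").
        return hashlib.sha256(client_memory[offset:offset + 64]).hexdigest()

    clean = os.urandom(4096)  # stand-in for the real client image
    offset, expected = make_challenge(clean)

    tampered = bytearray(clean)
    tampered[offset] ^= 0xFF  # flip a byte inside the checked region

    print(answer_challenge(clean, offset) == expected)            # True: passes
    print(answer_challenge(bytes(tampered), offset) == expected)  # False: flagged
    ```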

    Though it does make me wonder whether that means they can’t or don’t detect cheaters on Linux. Probably not, because my guess is they start by looking for any cheats they can find, install them on test machines, then work on detecting the differences between those test machines and clean ones. So they’d know about Linux-based cheats, too. They might even be able to use timing-based attacks to detect kernel-level ones.


  • Back in the day, DRM was handled like this. I had an Indy 500 game where the manual contained a bunch of history of the sport, and in order to launch the game, you had to answer Indy 500 history trivia questions.

    Other games had a symbol alphabet (or some other mapping between images and information that could be shown on screen) where the key was only contained in the manual (or on a piece of paper that came with the game).
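
    A minimal sketch of how such a check works (hypothetical table entries; the point was that the mapping lived only on paper, so a copied disk alone wasn’t enough):

    ```python
    import random

    # Hypothetical key table: in the real games, this mapping existed in
    # the printed manual or on an insert that came in the box.
    MANUAL_TABLE = {
        ("page 7", "symbol 2"): "falcon",
        ("page 7", "symbol 3"): "anchor",
        ("page 12", "symbol 1"): "crown",
    }

    def manual_check() -> bool:
        (page, symbol), answer = random.choice(list(MANUAL_TABLE.items()))
        guess = input(f"Enter the word for {symbol} on {page}: ")
        return guess.strip().lower() == answer

    if __name__ == "__main__":
        print("Starting game..." if manual_check() else "Copy protection failed.")
    ```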

    King’s Quest VI had riddles whose answers had to be entered in a symbol alphabet. You could play the game without doing this, but you couldn’t beat it.

    A Mickey Mouse game came with a paper that was dark brown with black ink (so photocopiers would fail to copy it), showing Mickey in various poses; to play, you had to find the number for the pose shown on screen.