Researchers used basic psychology to convince ChatGPT to do things it normally wouldn’t.

  • nymnympseudonym@lemmy.world · 10 days ago

    It’s still not reasoning. It’s running a simulation

    As Daniel Dennett once asked: “What is the difference between a simulated song, and a real song?”

    You say it’s not reasoning, but I’ve seen it debug and fix a core dump.

    • lakemalcom@sh.itjust.works · 10 days ago

      A couple of things:

      • we are talking about chat bots talking to people in this post, and how you can steer the simulated conversation towards whatever you want
      • it did not debug anything; a human debugged something and wrote about it. Then that human’s input and a ton of others were mapped into a huge probability map, and some computer simulated what people talking about this would most likely say. Is it useful? Sure, maybe. Why didn’t you debug it yourself? (Toy sketch of that probability-map idea below.)
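
      A toy sketch of that “probability map” idea, to make it concrete. This is just a bigram counter over a made-up corpus; a real LLM is a neural network learning far richer statistics, but the principle of “predict the likeliest next word given what came before” is the same:

      ```python
      # Build a tiny "probability map" of which word follows which,
      # then generate by always picking the likeliest continuation.
      # Purely illustrative; real models are not bigram counters.
      from collections import Counter, defaultdict

      corpus = "the bug was a null pointer . the bug was a race condition .".split()

      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def most_likely_next(word):
          # Highest-count continuation seen in the "training data".
          return follows[word].most_common(1)[0][0]

      out = ["the"]
      for _ in range(5):
          out.append(most_likely_next(out[-1]))
      print(" ".join(out))  # -> "the bug was a null pointer"
      ```

      It never debugged anything; it only replays the statistics of what people wrote about debugging.
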
      • nymnympseudonym@lemmy.world · 9 days ago

        chat bots

        Fair, we need to get terms straight; this is new and unstable territory. Let’s say LLMs, specifically.

        it did not debug anything; a human debugged something and wrote about it. Then that human’s input and a ton of others were mapped into a huge probability map, and some computer simulated what people talking about this would most likely say

        Can you explain how that is different from what a human does? I read a lot about debugging, went to classes, worked examples…

        Why didn’t you debug it yourself?

        In my case this is enterprise software: many products and millions of lines of code. My test and bug-fixing teams are begging for automation. Bug fixing at scale.

    • Kay Ohtie@pawb.social · 9 days ago

      Birds string together words they hear when they can repeat them, and end up with the short phrases they seem to make. It’s extremely rare for them to actually understand meaning; most often it’s simple association, which is why you often get nonsensical responses that still sort of make sense, or sounds that resemble words but just…aren’t. A simulacrum of language without containing any. Words can get linked that way, and our own brains want to find words in the noise, so sometimes we decipher some, much like seeing shapes in static; we just tend to realize it’s pattern recognition, not the real thing.

      What’s happening here is the equivalent of recording a bird and playing its recording back to itself to get a new response, as a chain. It’s predictive text feeding itself, which is a simplistic but not inaccurate description given how language models actually work at a technical level: input is tokenized into word fragments, mapped during training into huge matrices of vectors, and at generation time candidate continuations loop back on themselves or branch into yet more matrices of options. That’s what the “beam size” setting some models expose controls: how many candidate sequences the decoder explores in parallel, keeping the ones whose probability scores make sense (toy sketch below).
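
      To make the beam-size point concrete, here’s a toy beam search over a made-up next-token probability table. The vocabulary and the numbers are invented, purely illustrative; a real model produces these scores from a network over tens of thousands of token fragments:

      ```python
      # Toy beam search: keep the beam_size most probable partial
      # sequences at each step instead of only the single best one.
      import math

      # P(next token | current token), an invented toy distribution.
      probs = {
          "the": {"cat": 0.5, "dog": 0.4, "end": 0.1},
          "cat": {"sat": 0.6, "ran": 0.3, "end": 0.1},
          "dog": {"ran": 0.7, "sat": 0.2, "end": 0.1},
          "sat": {"end": 1.0},
          "ran": {"end": 1.0},
      }

      def beam_search(start, beam_size=2, steps=3):
          beams = [(0.0, [start])]  # (log probability, token sequence)
          for _ in range(steps):
              candidates = []
              for logp, seq in beams:
                  for tok, p in probs.get(seq[-1], {}).items():
                      candidates.append((logp + math.log(p), seq + [tok]))
              # Prune: keep only the beam_size best-scoring sequences.
              beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
          return beams

      for logp, seq in beam_search("the"):
          print(f"{math.exp(logp):.2f}  {' '.join(seq)}")
      ```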

      Our own reasoning is far more complicated. Sometimes we think it’s just words, but our brain will seamlessly weave inner monologue into concepts and imagery or ideas without text and back again, sometimes into sounds or other things. We stitch together everything so seamlessly because it all actually has meaning for us.

      Saying LLMs have “reasoning” at all amounts to operating by the Sapir-Whorf hypothesis, which would imply there is no reasoning without language. And even animals can fucking reason without language. We absolutely did too. Sapir-Whorf was an infantile thought experiment turned theory of language that has been patently proven wrong, even when it makes for great sci-fi (see Arrival).

      This isn’t the difference between hearing a song live and hearing it played back, or MIDI samples vs. real instruments. It’s that part of our consciousness operates in some absolutely wild ways that we can still only classify at a high level, because the complexities are so far beyond what we can describe with models that are, by comparison, simplistic as hell.

      Put another way: without transcriptions of “that’s right, the square hole” in the training data, if you showed two photos to a model and asked “where does this piece go,” it’s just going to “see” the shape in both, recognize the image-to-word mapping, and come up with a response fitting that, without ever being able to “realize” the piece can go into the square hole unprompted, because it can’t invent.

      Only parrot.

      • nymnympseudonym@lemmy.world · 9 days ago

        we think it’s just words, but our brain will seamlessly weave inner monologue into concepts

        Are you familiar with latent space representation?

        Because yes, that’s how LRMs work: cycling tokens in latent space multiple times before sending them to the upper layers and decoding into human words.

        https://arxiv.org/abs/2412.06769
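
        A minimal sketch of that control flow, with random weights and toy sizes; this only shows the shape of the idea (loop in latent space, decode into a word once at the end), not the linked paper’s actual architecture:

        ```python
        # Latent-space "reasoning" loop: instead of decoding a hidden
        # state into a word and re-embedding it at every step, feed the
        # hidden state straight back in for several internal cycles and
        # decode only at the end. Weights are random, for illustration.
        import numpy as np

        rng = np.random.default_rng(0)
        d = 16                                       # hidden size (arbitrary)
        W_step = rng.normal(size=(d, d)) / d ** 0.5  # one latent transition
        W_out = rng.normal(size=(5, d))              # projects to 5 toy tokens

        def latent_reasoning(x, loops=4):
            h = x
            for _ in range(loops):        # cycle in latent space: no token
                h = np.tanh(W_step @ h)   # bottleneck between the steps
            logits = W_out @ h            # decode into words only at the end
            return int(logits.argmax())   # index of the predicted toy token

        x = rng.normal(size=d)            # stand-in for an input embedding
        print("predicted token id:", latent_reasoning(x))
        ```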