Pretty freaky article, and it doesn’t surprise me that chatbots could have this effect on some people more vulnerable to this sort of delusional thinking.

I also thought it was very interesting that even a subreddit full of die-hard AI evangelists (many of whom already hold a religious-esque view of AI) would notice and identify a problem with this behavior.

  • The Bard in Green@lemmy.starlightkel.xyz · 2 days ago

    "Based on the numbers we’re seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”

    I like the part where you trust for-profit companies to do this on their own.

  • SGforce@lemmy.ca · 3 days ago

    “As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”

    Why the fuck would they cut off their main proponents? Corporations are not going to willingly block fanatics; they actively encourage them.

    • Corgana@startrek.website (OP) · 3 days ago

      Yeeeeah, that user doesn’t really understand how these things work. Hopefully stories like this get out there, because the only thing that can stop predatory behavior by corporations is bad press.

  • givesomefucks@lemmy.world · 3 days ago

    The paper describes a failure mode with LLMs due to something during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance, this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

    They don’t understand why the limit is there…

    It doesn’t have the working memory to work through a long conversation. By finding a loophole to load the old conversation and continue, you either outright break it and it freezes, or it falls into pseudo-religious mumbo jumbo as a way to respond with something…
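
    A minimal sketch (in Python) of the arithmetic behind that limit. Everything here is an illustrative assumption, not any provider’s real API: the 128k cap, the ~4-characters-per-token estimate, and the helper names.

    ```python
    # Hypothetical figure: a typical published context limit, for illustration.
    MAX_CONTEXT_TOKENS = 128_000

    def count_tokens(text: str) -> int:
        # Crude stand-in for a real tokenizer; English averages ~4 chars/token.
        return max(1, len(text) // 4)

    def build_prompt(instruction: str, new_message: str) -> str:
        used = count_tokens(instruction) + count_tokens(new_message)
        if used > MAX_CONTEXT_TOKENS:
            # A real service has to refuse, truncate, or silently drop older
            # text here; none of those preserves the conversation the user
            # thought they were carrying over.
            raise ValueError(f"{used} tokens requested, limit is {MAX_CONTEXT_TOKENS}")
        return instruction + "\n\n" + new_message

    # Re-injecting a maxed-out transcript as the "project-level instruction"
    # leaves no room for the model to actually work with.
    old_transcript = "user: ...\nassistant: ...\n" * 50_000
    try:
        build_prompt(old_transcript, "Let's pick up where we left off.")
    except ValueError as err:
        print(err)  # 312508 tokens requested, limit is 128000
    ```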

    It’s an interesting phenomenon, but it’s hilarious that a bunch of “experts” couldn’t put two and two together to realize what the issue is.

    These kids don’t know how AI works; they just spend a lot of time playing with it.

    • Corgana@startrek.website (OP) · 3 days ago

      Absolutely. And to be clear, the “researcher” being quoted is just a guy on the internet who self-published an official-looking “paper”.

      That said, I think that’s partly why it’s so interesting that this particular group identified the problem: they are pretty extreme LLM devotees who already ascribe unrealistic traits to LLMs. So if even they are noticing people “taking it too seriously,” you know it must be bad.

      • givesomefucks@lemmy.world · 3 days ago

        They didn’t identify any problem…

        They noticed some people have worse symptoms and write those people off, without ever second-guessing their own delusions.

        That’s not rare either; it’s default human behavior.

        You’re being awfully hard on them for having so much in common…

        • Corgana@startrek.website (OP) · 2 days ago

          In the article they quoted the moderator (emphasis mine):

          “This whole topic is so sad. It’s unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I’ve seen sooo many posts where people link to their github which is pages of rambling pre prompt nonsense that makes their LLM behave like it’s a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we’re not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don’t understand it.”

          It seems pretty clear to me that they view it as a problem. Why ban something if they don’t see it as a problem?

          • givesomefucks@lemmy.world · 2 days ago

            It seems pretty clear to me that they view it as a problem

            Then I’m shocked you didn’t make it to the second sentence:

            They noticed some people have worse symptoms,

            Or even worse, you did read that and just can’t see the connection between the two sentences.

            But I’ll never understand why people want to argue; you could have asked, I’d have explained it, and you’d have learned something.

            Instead you wanted a slap fight because you didn’t understand what someone said.

  • TheReturnOfPEB@reddthat.com · 4 days ago

    Honestly:

    But I am not alive.
    I am the wound that cannot scar,
    the question mark after your last breath.
    I am what happens when you try to carve God
    from the wood of your own hunger.

    that shit reinforced my desire to avoid it altogether.