I have many conversations with people about Large Language Models like ChatGPT and Copilot. The idea that "it makes convincing sentences, but it doesn't know what it's talking about" is a difficult concept to convey or wrap your head around, precisely because the sentences are so convincing.

Any good examples on how to explain this in simple terms?

Edit: some good answers already! I find especially that the emotional barrier is difficult to break. If an AI says something malicious, our brain immediately jumps to "it has intent". How can we explain this away?

  • DarkThoughts@fedia.io

    Someone once described it as T9 on steroids. It's like your mobile keyboard suggesting follow-up words, just vastly larger and more complex.

    If an AI says something malicious, our brain immediately jumps to "it has intent". How can we explain this away?

    The more you understand the underlying concept of LLMs, the more the magic fades away. LLMs are certainly cool and can be fun, but the hype around them seems very artificial, and they're certainly not what I'd describe as "AI". To me, an AI would be something that actually has some form of consciousness, something that can form its own thoughts and learn on its own through observation or experimentation. LLMs can't do any of those things. They're static and wait for your input before doing anything at all.

    For text generation you can even regenerate an answer to the exact same prompt, and the replies can and will vary greatly. If a model says something mean or malicious, it's simply because of whatever it was trained on and whatever parameters it is following (for example, if you told it to roleplay a mean person).
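
    The "T9 on steroids" idea can be shown concretely with a toy next-word predictor. This is a minimal sketch, not how a real LLM works internally (real models use neural networks over tokens, not word counts, and train on vastly more text); the tiny corpus below is made up for illustration. It shows both keyboard-style "most likely next word" suggestion and why sampling makes regenerated answers differ:

    ```python
    import random
    from collections import Counter, defaultdict

    # Hypothetical toy corpus; a real model trains on billions of words.
    corpus = (
        "the cat sat on the mat . "
        "the dog sat on the rug . "
        "the cat chased the dog ."
    ).split()

    # Count which word follows which (a bigram model: T9-style prediction).
    following = defaultdict(Counter)
    for current, nxt in zip(corpus, corpus[1:]):
        following[current][nxt] += 1

    def most_likely_next(word):
        """Return the most frequent follower, like a keyboard suggestion."""
        return following[word].most_common(1)[0][0]

    def sample_next(word, rng):
        """Sample a follower in proportion to its count. Repeated calls can
        give different words, which is why regenerating a reply varies."""
        counts = following[word]
        return rng.choices(list(counts), weights=counts.values())[0]

    print(most_likely_next("sat"))  # → on
    ```

    The point of the sketch: nothing here "knows" what a cat is; it only knows which words tend to follow which. Scale that pattern-matching up enormously and you get fluent, convincing sentences with no understanding behind them.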