• givesomefucks@lemmy.world
    8 months ago

    It “knows” as in it has access to the information and the ability to provide the right info for the right context.

    At any part of that process the AI can just “bullshit”, filling in the gaps with random stuff.

    Which is what you want when it’s “learning”. You want it to try so its attempt can be rated, and the relevant info added to its “knowledge”.

    But when consumers are using it, you want it to say “I can’t answer that”. But consumers are usually stupid and will buy/use the one that says “I can’t answer that” the least.

    And it’s legit really hard to differentiate between factual things and random bullshit it made up.

    Which is why AI should tell end users “I don’t know” more often.

    • Kichae@lemmy.ca
      8 months ago

      It “knows” as in it has access to the information and the ability to provide the right info for the right context.

      It doesn’t, though, any more than you have access to the information in a pile of 10 million shredded documents.

      • givesomefucks@lemmy.world
        8 months ago

        Right, but in the case we’re talking about…

        Do you not understand how “answer unavailable” is a better answer than taking a small percent of strips of paper at random and filling in the rest with words that sound relevant?

        It’s like Mad Libs.

        • Ech@lemm.ee
          8 months ago

          taking a small percent of strips of paper at random and filling in the rest with words that sound relevant?

          It’s like Mad Libs.

          Right. They’re text generators. That’s the technology. It can’t do what you’re demanding because that’s not how it works. LLMs aren’t magic answer machines. They don’t know when to say “answer not available”. They don’t know what they’re being asked. They don’t know anything.
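The “text generator” point above can be sketched in a few lines. This is a hypothetical toy bigram model, not how any real LLM is implemented, but the generation loop is the same idea: pick a plausible next token, append it, repeat. Notice that nothing in the loop checks whether the output is true, and there is no code path for “answer unavailable”.

```python
import random

# Toy bigram "language model": maps a word to plausible next words.
# (Invented vocabulary for illustration only.)
bigrams = {
    "the": ["cat", "dog", "answer"],
    "cat": ["sat", "ran"],
    "dog": ["sat", "barked"],
    "answer": ["is"],
    "is": ["the"],
    "sat": ["on"],
    "on": ["the"],
}

def generate(start, length):
    """Continue `start` by repeatedly sampling a likely next word."""
    words = [start]
    for _ in range(length):
        choices = bigrams.get(words[-1])
        if not choices:
            # Dead end in the toy model; even here it never "knows"
            # that it lacks an answer -- it just stops producing text.
            break
        words.append(random.choice(choices))
    return " ".join(words)

print(generate("the", 8))
```

Every continuation it emits is locally plausible, which is exactly why fluent-sounding output is no evidence of knowledge.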

        • wahming@monyet.cc
          8 months ago

          That is what LLMs do in EVERY conversation. Most of the time you don’t notice it, because it fits your expectations.