• DudeDudenson@lemmings.world
    6 months ago

    Was gonna say, the AI doesn’t make up or admit bullshit, it’s just a very advanced prediction algorithm. It responds with the combination of words that is most likely the expected answer.

    Whether that is accurate or not is part of training it, but you’ll never get 100% accuracy for every query.
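
    A toy sketch of what “prediction algorithm” means here: pick the most frequent continuation seen in training, regardless of whether it’s true. (The context string and counts are made up for illustration; real LLMs use learned neural networks over huge vocabularies, not a lookup table.)

    ```python
    # Hypothetical training counts: how often each word followed this context.
    counts = {
        "the sky is": {"blue": 90, "green": 5, "falling": 5},
    }

    def predict_next(context):
        # Return the most frequent continuation and its share of the counts.
        # The "model" has no notion of truth, only of frequency.
        dist = counts[context]
        total = sum(dist.values())
        token = max(dist, key=dist.get)
        return token, dist[token] / total

    token, prob = predict_next("the sky is")
    print(token, prob)  # the most frequent continuation wins
    ```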

    • maynarkh@feddit.nl
      6 months ago

      If it can name what the most likely combination is, couldn’t it also know how likely that combination of words is?

      • kent_eh@lemmy.ca
        6 months ago

        If it has been trained on questionable sources, or if its training data includes sarcastic responses (without understanding that context), it isn’t hard to imagine how confidently wrong some of the responses could be.

      • DudeDudenson@lemmings.world
        6 months ago

        It’s not actually deciding anything; the “AI thinking” is really marketing fluff. But yes, that’s called a confidence rating, and it does have one. At the scale of something like ChatGPT, though, which is trained on a snapshot of the entire internet and is immutable, there’s no way to train it for every possible question. If you ask about a topic that 99% of the internet gets wrong, it’ll give the wrong answer with 99% confidence.
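
        A minimal sketch of that last point, assuming a made-up topic where 99 out of 100 training sources give the wrong answer: the model’s “confidence” just mirrors how common an answer is in the data, so the wrong answer comes out on top with high confidence.

        ```python
        # Hypothetical training data: 99 sources say "wrong", 1 says "right".
        training_answers = ["wrong"] * 99 + ["right"] * 1

        def confidence(answers, candidate):
            # Confidence here is simply the candidate's share of the data --
            # a proxy for the probability a frequency-based model assigns it.
            return answers.count(candidate) / len(answers)

        best = max(set(training_answers), key=lambda a: confidence(training_answers, a))
        print(best, confidence(training_answers, best))  # the majority answer wins
        ```

        High confidence and correctness are decoupled: the score measures agreement with the training distribution, not with reality.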