If it has been trained on questionable sources, or if its training data includes sarcastic responses (without that context being understood), it isn't hard to imagine how confidently wrong some of the responses could be.
It's not actually deciding anything; the "AI thinking" framing is mostly marketing fluff. But yes, that's called a confidence rating, and it does have one. At the scale of something like ChatGPT, though, which is trained on a snapshot of the entire internet and is immutable once trained, there's no way to train it for every possible question. If you ask about a topic that 99% of the internet gets wrong, it'll give the wrong answer with 99% confidence.
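To make the "confidence rating" point concrete, here's a rough sketch (not any real model's code; the candidate tokens and logits are made up for illustration) of how that confidence is just a softmax probability over possible next tokens. It measures what was most common in the training data, not whether the answer is true:

```python
import numpy as np

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating.
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical raw scores (logits) for three candidate next tokens.
candidates = ["wrong-but-popular answer", "correct answer", "irrelevant token"]
logits = np.array([5.0, 0.4, -2.0])  # shaped by whatever the training data said most often

probs = softmax(logits)
for token, p in zip(candidates, probs):
    print(f"{token}: {p:.2%}")

# The popular-but-wrong token wins with roughly 99% "confidence" here,
# because the score reflects frequency in the training data, not truth.
```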
If it can name what the most likely combination is, couldn’t it also know how likely that combination of words is?
No, because that requires it to understand the words. It doesn’t.