- cross-posted to:
- linuxmemes@lemmy.world
- memes@lemmy.ml
cross-posted from: https://feddit.uk/post/16950456
For those who don’t know, Google Gemini is an AI created by Google.
When you get a long and nuanced answer to a seemingly simple question, you can be quite certain they know what they’re talking about. If you prefer a short and simple answer, it’s better to ask someone who doesn’t.
It’s an LLM. It doesn’t “know” what it’s talking about. Gemini is designed to write long, nuanced answers to ‘every’ question, unless prompted otherwise.
Not knowing what it’s talking about is irrelevant if the answer is correct. Humans who know what they’re talking about are just as prone to mistakes as an LLM is. Some could argue they’re prone in far more numerous ways, too. I don’t see the way they work as being as different from each other as most other people here seem to.
Sometimes, it’s just the opposite.
Seems to be a decent answer considering the source.
This doesn’t mean anything. It’s an LLM and it will only give you a valid-sounding answer regardless of the truth. “Yes” sounds valid and is probably the one with the most occurrences in the training data.
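To make that concrete, here’s a toy sketch (made-up numbers, nothing to do with Gemini’s actual internals): a language model assigns scores to candidate next tokens and the sampler picks from those scores; whether the answer is true never enters the computation.

```python
# Toy sketch of next-token selection. The logits are invented
# for illustration; a real model produces them from context.
import math

# Hypothetical scores a model might assign after "Is X true?"
logits = {"Yes": 3.2, "No": 1.1, "Unclear": 0.4}

# Softmax turns raw scores into probabilities.
total = sum(math.exp(v) for v in logits.values())
probs = {tok: math.exp(v) / total for tok, v in logits.items()}

# Greedy decoding just takes the likeliest token;
# "correct" is not a variable anywhere in this process.
print(max(probs, key=probs.get))  # -> "Yes"
```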
Stop posting shit like this.
Information can’t be dismissed simply by stating it was written by an LLM. It’s still ad hominem.
It is possible to create an infinite amount of bullshit at no cost. So by simply hurling waves and waves of bullshit at you, we can exhaust you.
Feel free to argue further, I’ll be outsourcing my replies to ChatGPT.
Oh yea? Well, why doesn’t Ross, the larger of the friends, simply eat the other friends?
What? No, the fact that it’s an LLM is pivotal to the reliability of the information. In fact, this isn’t even information per se, just the most likely responses to this question synthesized into one response. I don’t think you’ve fully internalized how LLMs work.
I disagree. Information can be factual independent of who or what said it. If it’s false, then point to the errors in it, not to the source.
You’re correct, but why are you trusting the output by default? Why ask us to debunk something that is well known to be easy to steer toward the answer you want, and that doesn’t factually understand what it’s saying?
But I’m not trusting it by default and I’m not asking you to debunk anything. I’m simply stating that ad hominem is not a valid counter-argument even in the case of LLMs.
You’re saying ad hominem isn’t valid as a counterargument, which means you think there’s an argument in the first place. But it’s not a counterargument at all, because the LLM’s claim is not an argument.
ETA: And it wouldn’t be ad hominem anyways, since a claim about the reliability of the entity making an argument isn’t unrelated to what’s being discussed. Ad hominem only applies when the attack is irrelevant to the argument.
Dismissing something AI has ‘said’ not because of the content, but because it came from an LLM, is a choice any individual is free to make. However, that doesn’t serve as evidence against the validity of the content itself. To me, all the mental gymnastics about AI outputs being just meaningless nonsense or mere copying of others is a cop-out answer.
90% of the market sounds like “yes” to me too.
Relax bro
On a side note, the free Gemini version (whichever model they use) is absolute poo poo compared to free Claude or even ChatGPT.