It’s spicy autocorrect running on outdated training data. People expect too much from these things and make a huge deal when they get disappointed. It’s been said earlier in the thread, but these things don’t think or reason. They don’t have feelings or hold opinions and they don’t believe anything.
It’s just the most advanced autocorrect ever implemented. Nothing more.
The recent DeepSeek paper shows that this is not the case, or at the very least that reasoning can emerge from “advanced autocorrect”.
I doubt it’s going to be anything close to actual reasoning. No matter how convincing it might be.
Okay, but why? What elements of “reasoning” are missing, what threshold isn’t reached?
I don’t know if it’s “actual reasoning”, because try as I might, I can’t define what reasoning actually is. But because of this, I also don’t say that it’s definitely not reasoning.
It doesn’t think, meaning it can’t reason. It makes a bunch of A or B choices, picking the most likely one from its training data. It’s literally advanced autocorrect and I don’t see it ever advancing past that unless they scrap the current thing called “AI” and rebuild it fundamentally differently from the ground up.
As they are now, “AI” will never become anything more than advanced autocorrect.
It doesn’t think, meaning it can’t reason.
- How do you know thinking is required for reasoning?
- How do you define “thinking” on a mechanical level? How can I look at a machine and know whether it “thinks” or doesn’t?
- Why do you think it just picks stuff from the training data, when the DeepSeek paper shows that this is false?
Don’t get me wrong, I’m not an AI proponent or defender. But you’re repeating the same unsubstantiated criticisms that have been repeated for the past year, when we have data that shows that you’re wrong on these points.
Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being. These things work like autocorrect and fool people into thinking they’re more than that.
If this DeepSeek thing is anything more than just hype, I’d love to see it. But I am (and will remain) HIGHLY SKEPTICAL until it is proven beyond a shadow of a doubt. Because this whole “AI” thing has been nothing but hype from day one.
Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being.
You can go and do that right now. Not every conversation will rise to that standard, but that’s also not the case for humans, so it can’t be a necessary requirement. I don’t know if we’re at a point where current models reach it more frequently than the average human - would reaching this point change your mind?
These things work like autocorrect and fool people into thinking they’re more than that.
No, these things don’t work like autocorrect. Yes, they generate text autoregressively (each new token is fed back in as input), but that’s not the same thing - and mathematical analysis of these models shows they aren’t a simple Markov process. So no, they don’t work like autocorrect in any meaningful way.
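To make that concrete, here’s a minimal toy sketch (purely illustrative - a made-up corpus and functions, not anyone’s actual model): a fixed-order Markov “autocorrect” only conditions on the last word, so two prompts that end the same way are literally indistinguishable to it, while a transformer’s next-token distribution depends on the whole context.

```python
# Toy illustration (hypothetical): an order-1 Markov "autocorrect" that
# predicts the next word from the previous word only.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

bigram = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigram[prev][nxt] += 1          # count how often `nxt` follows `prev`

def markov_next(prompt: str) -> str:
    last = prompt.split()[-1]       # the ONLY thing this model conditions on
    return bigram[last].most_common(1)[0][0]

# Both prompts end in "the", so the Markov predictor cannot tell them apart,
# even though the earlier context clearly should matter:
print(markov_next("the cat sat on the"))  # same prediction...
print(markov_next("the dog sat on the"))  # ...as this one

# A transformer-based LLM, by contrast, computes its next-token distribution
# from the entire context via attention, so the "cat" and "dog" prompts
# generally yield different distributions. That's the sense in which it is
# not a simple fixed-order Markov process over words.
```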
If this DeepSeek thing is anything more than just hype, I’d love to see it.
Great, the papers and results are open and available right now!
Ask the AI to answer something totally new (not matching any existing training data) and watch what happens… It’s highly probable that the answer won’t be logical.
Reasoning is being able to improvise a solution with provided inputs, past experience and knowledge (formal or informal).
AI, or should I say machine learning, is not able to do that today. It’s only mimicking reasoning.
DeepSeek shows that exactly this capability can (and does) emerge. So I guess that proves that ML is capable of reasoning today?
Could be! I haven’t tested it (yet), so I won’t take the commercial/demo/buzz as proof.
There is so much BS sold under the name of ML, selling dreams to top executives that I then have to bring back down to earth when the real product turns out not to be so usable in an actual production environment.
I absolutely agree with that, and I’m very critical of any commercial deployments right now.
I just don’t like it when people say “these things can’t think or reason” without ever defining those words. It (ironically) feels like stochastic parroting - repeating phrases they’ve heard without understanding them.
If humans can “reason” themselves into thinking the world is flat and the sky is water (not being hyperbolic), then I don’t see why an AI can’t reason at least a little.
An LLM can’t believe anything. It’s trained on data up until 2023, so of course it has no “recollection” (read: sources) of current events.
An LLM isn’t a search engine nor an oracle.
Geez I know that, everybody knows it’s just a chatbot. I thought it was a bit funny to share this conversation in this sub but most of the replies are people lecturing about the fact that AI is not sentient and blablabla
Ah, I believe this community is for posting about actual real things that make our society look like a boring dystopia. Not a fictional thing that might be funny.
So that might explain why people are responding the way they do.
Maybe I’m not interpreting the goal of this community right.
I think it’s funny that a bot locked in 2023 would tell me that all the things that -actually happened- in the past week are not plausible, and that I’m probably just inventing a dystopian scenario.
It’s kinda cheating to be honest, you can make a bot say anything you want. But I understand your angle better now, thanks for the extra info!
People are downvoting because you worded your title weirdly based on what your screenshot shows. It would be more accurate to say that the bot refuses to believe Musk could be a Nazi (based on past training data), not that it refuses to believe he is based on current events, since it doesn’t know about current events.
Yeah, I don’t really care; English is not my first language.
OK, just telling you why it was received that way.
gotcha, thanks
The AI is stuck in 2023 as it cannot bear the dystopia of 2025.
In Soviet-2025 US, AI tells you that you are hallucinating.
The USUSSR.
Smart AI.
It is still living in 2023 with regard to the data it’s operating on. Try going back to 2023 and warning people that Felon Musk would not only start performing Nazi salutes but also support the German far right, and you’d get laughed out the door. They’ve basically made it so that thinking things through even slightly, or looking at the history of the last century, is “too woke”. They are trying to make the “Twit-ler Youth” a thing again.
this has gotta be astroturfing or something. are we really citing LLM content in the year of our lord 2025?? like, gorl
two things can be true:
1. musk IS a nazi
2. LLMs are majorly sucky and trained on old data. the one OP is citing in particular doesn’t even know what year it is 🗿
what are we doing here? stop outsourcing common sense to ARTIFICIAL INTELLIGENCE of all things. we are cooked. 😭
“This software we’ve all been saying is trash that produces trash produced trash!”
This isn’t surprising at all.
It truly is a stochastic parrot, and you can spot the style it has been trained on.
we are currently in 2023,
I gave ChatGPT a still image of Musk’s salute, gave it context about where it was being displayed, and it immediately read it as a Nazi salute. With some disclaimers, obviously, but still.
Or, just like any enshittified software, it’s hopelessly out of date.
(Although I wouldn’t rule out that they’ve added pro-Musk guardrails)
You should try Claude and give it an image of the salute, since it can see.
Maybe while using a VPN that shows your location as Germany, just in case they’re tampering with things in the USA.
Arguing with an AI because it always fails with humans 👌
Still better than being a troll because no one cares about you.
you didn’t see sam alt-right-man at the rally? not hard to inject bias into the thing you own
Removed by mod
Keep your head in the sand, man.
Removed by mod