- cross-posted to:
- fuck_ai@lemmy.world
You know how Google’s new feature called AI Overviews is prone to spitting out wildly incorrect answers to search queries? In one instance, AI Overviews told a user to use glue on pizza to make sure the cheese won’t slide off (pssst…please don’t do this.)
Well, according to an interview at The Verge with Google CEO Sundar Pichai published earlier this week, just before criticism of the outputs really took off, these “hallucinations” are an “inherent feature” of AI large language models (LLMs), which is what drives AI Overviews, and this feature “is still an unsolved problem.”
There is apparently no limit to calling a bug a feature
I mean, it did say non toxic glue. Which is technically edible. /S
Non toxic can still have bad properties
The moment a politician’s kid drinks bleach because of Google’s AI is the moment any regulatory action is taken.
If its job is to write a fan fic on what may or may not be true on what you asked for, then it does a great job. But typically people search for information, and getting what is essentially a glorified autocomplete isn’t useful. It’s like big tech has learned nothing from the massive issue of disinformation and just added fuel to the fire of an unsolved problem we’re still very much trying to figure out.
just like there’s no solution for not punishing youtubers who follow the rules while allowing doxxers and pedos to use youtube to dox people and lure little girls into their houses.
You mean hallucinations like this one?
Yeah no shit, that’s what LLMs do
They could probably mostly or entirely fix it, but to do so they’d have to better curate search results. Because what it does is summarize the top search results for the query.
The problem they can’t fix is consistently getting useful high quality search results to the top without getting misinfo, disinfo, irrelevant info, trolling, answers to similar but not identical questions or memes as high or higher.
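That failure mode is easy to see in miniature. Here’s a toy sketch (all names, data, and the ranking are hypothetical, not Google’s actual pipeline) of a naive “summarize the top-k results” approach: it has no notion of source reliability, so whatever ranks highest gets summarized, joke answers included.

```python
# Hypothetical sketch: a summarizer that blindly trusts search ranking.

def summarize_top_results(results, k=3):
    """Naively concatenate snippets from the top-k ranked results."""
    top = sorted(results, key=lambda r: r["rank"])[:k]
    return " ".join(r["snippet"] for r in top)

# Invented results for "cheese slides off pizza" -- note the top hit
# is a forum joke, but the pipeline has no way to know that.
results = [
    {"rank": 1, "snippet": "Add 1/8 cup of non-toxic glue to the sauce.",
     "source": "forum joke"},
    {"rank": 2, "snippet": "Let the pizza cool slightly before slicing.",
     "source": "cooking site"},
    {"rank": 3, "snippet": "Use low-moisture mozzarella.",
     "source": "cooking site"},
]

summary = summarize_top_results(results)
print(summary)  # the joke answer leads the "overview"
```

Garbage in, garbage out: fixing the summarizer doesn’t help if the ranking feeding it can’t reliably separate cooking advice from memes.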
This is what competition is now.
Putting out worthless things simply because everyone else is doing it.
Hey, Google: if your big tech friends jumped off a cliff, would you join 'em?
(Also why is the AI assistant on my phone opening up just by typing “hey Google?” 😡)
shhhh…it’s always watching 🤫
I’m curious, are these hallucinations very prevalent? I’m outside the US so haven’t seen the feature yet. But I have noticed that practically every article references the same glue incident.
So I’m not sure if the hallucinations are happening all the time, or everyone is just jumping on a handful of mistakes the AI made. If the latter, the situation reminds me of how every single accident involving a Tesla was reported on back in the day.
I know an easy fix. Just don’t do ai.
How about turn it the fuck off since it sucks and eventually will kill someone.
I know, right? This seems so fucking obvious to me. Maybe I’m just old school, but I still believe if you come out with a new product and it sucks you should pull it from shelves and go back to the older better one that people liked before you drive all your customers away.
That doesn’t seem to be the attitude of modern tech tho, SOP now seems to be if you come up with a new version and it sucks and everybody hates it, you double down, keep telling people why it’s actually better and your customers don’t know what they want and refuse to change course until either you fix it or all your customers leave. This apparently is better in some way. Not sure how, but most of the companies seem to be doing it.
Looks like Google stopped the AI feature. No more AI suggestions at the top of the page after searching for something.
I never got it in the first place. I think they had a limited rollout or something
I’d have to send you back in time.
Maybe Google should put a disclaimer… warning people it’s not 100% accurate. Or… just take down the technology because clearly their AI is chit tier.
This is so wild to me… as a software engineer, if my software doesn’t work 100% of the time as requested in the specification, it fails tests, doesn’t get released and I get told to fix all issues before going live.
AI is basically another word for unreliable software full of bugs.
And therein lies the difference between engineers and business people. And look which ones are usually in charge.
Depends on how strict you are about the tests. Google is obviously satisfied if the first live iteration of a product doesn’t kill more than 5% of the users.