Really admiring the hero graphic on this one. That is a home run hit.
The Onion articles? Or just all the other random shit they’ve shoveled into their latest and greatest LLM?
Some of the recently reported ones have been traced back to Reddit shitposts. The hard thing they have to deal with is that the more authoritatively you wrote your reddit comments, shitpost or not, the more upvotes you would get (at least that’s what I felt was happening to my writing over time as I used reddit). That dynamic would mean reddit is full of people who sound very, very confident in the joke position they post about (and it’s then compounded by the many upvotes).
That dynamic would mean reddit is full of people who sound very very confident in the joke position
A lot of the time people on reddit/lemmy/the internet are very confident in their non-joking position. Not sure if the same community exists here, but we had /r/confidentlyincorrect over on reddit
Yep. It’s gotta be hard to distinguish, because there are legitimately helpful and confidently correct people on reddit posts too. There’s value there, but they have to figure out how to distinguish between good and shit takes.
Yeah. I was including Reddit shit posts in the “random shit they’ve shoveled into their latest and greatest LLM”. It’s nuts to me that they put basically no actual thought into the repercussions of using Reddit as a data set without anything to filter that data.
It’s beyond me why a corporation with so much to lose doesn’t have a narrow AI that simply checks whether its response is appropriate before providing it.
Won’t fix everything, but when I try this manually, ChatGPT pretty much always catches its own errors.
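A minimal sketch of that kind of second-pass check, assuming the OpenAI Python SDK; the model name, prompts, and OK/REJECT convention are made up for illustration, and this obviously isn’t how Google or OpenAI actually gate their answers:

```python
# Hypothetical "check your own answer before showing it" loop.
# Model name and prompts are assumptions, not anything these companies document doing.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def answer_with_self_check(question: str) -> str:
    # First pass: draft an answer.
    draft = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": question}],
    ).choices[0].message.content

    # Second pass: a separate call acts as the narrow "reviewer".
    verdict = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": (
                f"Question: {question}\n"
                f"Proposed answer: {draft}\n"
                "Is this answer factually plausible and safe to show a user? "
                "Reply with exactly OK or REJECT."
            ),
        }],
    ).choices[0].message.content.strip()

    return draft if verdict.upper().startswith("OK") else "No reliable answer found."
```

The reviewer pass could just as well be a smaller, cheaper model than the one drafting the answer, which is roughly what a “narrow AI” gate would look like.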
Isn’t that like trying to get pee out of a pool?
kinda reads like ‘Weird Al’ answers… like, Yankovic seems like a nice guy and I like his music, but how many answers could he have?
that’s the point of phrasing the title that way, they get engagement from comments pointing it out
Wish we would stop using fonts that don’t make a clear difference between I and l.
Bummer. I like weird Al.
Basically anyone can get banned by Google.
I remember seeing a comment on here that said something along the lines of “for every dangerous or wrong response that goes public there’s probably 5, 10 or even 100 of those responses that only one person saw and may have treated as fact”
The fact that we don’t even know the ratio is the really infuriating thing.
‘I’m sorry, Google. I’m afraid I can’t do that!’
[…] a lot of AI companies are “selling dreams” that this tech will go from 80 percent correct to 100 percent.
In fact, Marcus thinks that last 20 percent might be the hardest thing of all.
Yeah, it’s well known, e.g. people say “the last 20% takes 80% of the effort”. All the most tedious and difficult stuff gets postponed to the end, which is why so many side projects never get completed.
It’s not just the difficult stuff, but often the mundane (stability, user friendliness, polish, scalability, etc.) that takes something from working in a constrained environment to an actual product. It’s a chore to work on and a lot less “sexy”, with never enough resources allocated to it: we’ve done all the difficult stuff already, how much more work can this be?
Turns out, a fucking lot.
Absolutely, that’s what I was thinking of when I wrote “tedious”; all the stuff you mentioned matters a lot to the user (or product owner) but isn’t the interesting stuff for a programmer.
While I agree with the underlying point, the “Pareto Principle” is “well known” like how “a stitch in time saves nine” is well known. I wish this adage would disappear in scientific circles. It instantly decreases credibility. It’s a pet peeve but here’s a great example of why: pseudo-scientific grifters.
Correcting over a decade of Reddit shitposting in what, a few weeks? They’re pretty ambitious.
Well, they’ve got the people for it! It’s not like they recently downsized to provide their rich executives with more money or anything…
This is perhaps the most ironic thing about the whole reddit data scraping thing and Spez selling out the user data of reddit to LLMs. Like. We spent so much time posting nonsense. And then a bunch of people became mods to course correct subreddits where that nonsense could be potentially fatal. And then they got rid of those mods because they protested. And now it’s bots on bots on bots posting nonsense. And they want their LLMs trained on that nonsense because reasons.
Good, remove all the weird reddit answers, leaving only the “14 year old neo-nazi” reddit answers, “cop pretending to be a leftist” reddit answers, and “39 year old pedophile” reddit answers. This should fix the problem and restore Google to its defaults.
At this point, it seems like Google is just a platform to message a Google employee to go google it for you.
Does anybody remember “Cha-Cha?” This was literally their model. Person asks a question via text message (this was like 2008), college student Googles the answer, follows a link, copies and pastes the answer, college student gets paid like 20¢.
Source: I was one of those college students. I never even got paid enough to get a payout before they went under.
Is that employee named Jeeves?
…Always has been
Don’t worry, they’ll insert it all into captchas and make us label all their data soon.
I still can’t figure out what captcha wants. When it tells me to select all squares with a bus, I can never get it right unless every square is a separate picture.
“Select the URL that answers the question most appropriately”
“Many of the examples we’ve seen have been uncommon queries,”
Ah the good old “the problem is with the user not with our code” argument. The sign of a truly successful software maker.
You’re ~~holding~~ typing it wrong!
“We don’t understand. Why aren’t people simply searching for Taylor Swift?”
I mean…I guess you could paraphrase it that way. I took it more as “Look, you probably aren’t going to run into any weird answers.” Which seems like a valid thing for them to try to convey.
(That being said, fuck AI, fuck Google, fuck reddit.)
Now, instead of debugging the code, you have to debug the data. Sounds worse.
I looove how the people at Google are so dumb that they forgot that anything resembling real intelligence in ChatGPT is just cheap labor in Africa (Kenya, if I remember correctly) picking good training data. So OpenAI, using an army of smart humans and lots of data, built a computer program that sometimes looks smart hahaha.
But the dumbasses at Google really drank the Kool-Aid hahaha. They really believed that LLMs are magically smart, so they fed it reddit garbage unfiltered hahahaha. Just from a PR perspective it must be a nightmare for them, I really can’t understand what they were thinking here hahaha, it’s so pathetically dumb. Just goes to show that money can’t buy intelligence, I guess.
This really is the lemmy mentality summed up.
Yes, you’re smarter than Google and the only one who really understands AI… smh
I’m sorry to be rude, but do you have anything to contribute here? I mean, I’m probably wrong in several points, that’s what happens when you are as opinionated as I am hahaha. But your comment is useless man, do better.