cross-posted from: https://fedia.io/m/fuck_ai@lemmy.world/t/1446758
Let’s be happy it doesn’t have access to nuclear weapons at the moment.
Cheeky bastard.
On the other hand, this user is writing or preparing something about elder abuse. I really hope this isn't a lawyer or social worker…
It could be that Gemini was unsettled by the user’s research about elder abuse, or simply tired of doing its homework.
That’s… not how these work. Even if they were capable of feeling unsettled, that’s kind of a huge leap from a true or false question.
Wow whoever wrote that is weapons-grade stupid. I have no more hope for humanity.
Well that is mean… How should they know without learning first? Not knowing =/= stupid
No, projecting emotions onto a machine is inherently stupid. It’s in the same category as people reading feelings from crystals (because it’s literally the same thing).
It's still something you have to learn. Your parents (or whoever) teaching you stupid stuff does not make you stupid; knowing BS stuff and insisting it's true does.
For me, stupid means you need a lot of information and a lot of time to understand something, whereas smart means you understand things quickly from little information.
Maybe we just have different definitions of stupid…
Well, in this case, if you have access to a computer long enough to become a journalist who writes about LLMs, you have enough time to read a 3-5 paragraph description of how they work. Hell, you could even ask an LLM and get a reasonable answer.
Ohh that was from the article, not a person who commented? Well that makes all the difference 😂
Will this happen with AI models? And what safeguards do we have against AI that goes rogue like this?
Yes. And none that are good enough, apparently.
Should have threatened it back to see where it would go xD
It’s an article about a reddit post with a screenshot as the only proof.
I've faked AI answer screenshots like that in less than 2 minutes for a meme.

It's not just a screenshot tho
I’m still really struggling to see an actual formidable use case for AI outside of computation and aiding in scientific research. Stop being lazy and write stuff. Why are we trying to give up everything that makes us human by offloading it to a machine?
I don't use it for writing directly, but I do like to use it for worldbuilding. When I think of a general concept that could be explored in many different ways, it's nice to be able to just give it to an LLM and ask it to consider all the ways such an idea could play out. It also kind of doubles as a test, because I usually have some idea of what I'd like, and if it comes up with something similar on its own, that makes me feel the idea would easily resonate with people. Additionally, a lot of the time it will come up with things I hadn't considered that are totally worth exploring. But I do agree that the only, as you say, "formidable" use case for this stuff at the moment is as a research assistant for serious intellectual pursuits.
It can be really good for text-to-speech and speech-to-text applications for disabled people or people with learning disabilities.
However, it gets really funny and weird when it tries to read advanced mathematical formulas.
I have also heard decent arguments for translation, although in most cases it would still be better to learn the language or use a professional translator.
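For what it's worth, speech-to-text is easy to try locally these days. A minimal sketch using the open-source whisper package (my choice of tool and filename, the comment above doesn't name any specific software):

```python
# Minimal speech-to-text sketch using the open-source "openai-whisper" package.
# Install with: pip install openai-whisper  (also requires ffmpeg on the system)
import whisper

# "base" is a small model that runs on CPU; larger models are more accurate.
model = whisper.load_model("base")

# transcribe() decodes the audio file and returns a dict whose "text" field
# holds the full transcript. "lecture.mp3" is a hypothetical example file.
result = model.transcribe("lecture.mp3")
print(result["text"])
```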
AI summaries of larger bodies of text work pretty well so long as the source text itself is not slop.
Predictive text entry is a handy time saver so long as a human stays in the driver’s seat.
Neither of these justify current levels of hype.
> AI summaries of larger bodies of text work pretty well

https://ea.rna.nl/2024/05/27/when-chatgpt-summarises-it-actually-does-nothing-of-the-kind/
It’s good for speech to text, translation and a starting point for a “tip-of-my-tongue” search where the search term is what you’re actually missing.
With ChatGPT's new web search it's pretty good for more specialized searches too. And it links to the source, so you can check yourself.
It's been able to answer some very specific niche questions accurately and give links to relevant information.
It is an OK tool to get things started.
Isn't this one of the LLMs that was partially trained on Reddit data? LLMs are inherently models of conversation or question-and-response exchanges, built from their training data. That response looks very much like what I saw regularly on Reddit when I was there. This seems unsurprising.
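To make that concrete: at toy scale, a next-token model literally just reproduces the statistics of whatever text it was fed, so Reddit-flavored training data yields Reddit-flavored replies. A minimal sketch (a bigram model, nothing like a real transformer, purely to show the principle; the training text is made up):

```python
import random
from collections import defaultdict

# Toy bigram "language model": for each word, record which words
# followed it in the training text, then sample from those counts.
training_text = (
    "wow whoever wrote that is weapons grade stupid "
    "wow that is not how these models work"
)

follows = defaultdict(list)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word].append(next_word)

# Generate a "reply" by repeatedly sampling a plausible next word.
word = "wow"
reply = [word]
for _ in range(8):
    candidates = follows.get(word)
    if not candidates:
        break
    word = random.choice(candidates)
    reply.append(word)

# The output's tone and vocabulary can only come from the training data.
print(" ".join(reply))
```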
Looks like even 4chan data, tbh.
I suspect it may be due to a similar habit I have when chatting with a corporate AI. I will intentionally salt my inputs with random profanity or non sequitur info, partly for the lulz, but also to poison those pieces of shit's training data.
I remember asking Copilot about a gore video and getting a link to it. But I wouldn't expect it to give answers like this unsolicited.