For really useless call centers this makes sense.
I have no doubt that a ML chatbot is perfectly capable of being as useless as an untrained human first level supporter with a language barrier.
And the dude in the article basically admits that’s what his call center was like:
> Suumit Shah never liked his company’s customer service team. His agents gave generic responses to clients’ issues. Faced with difficult problems, they often sounded stumped, he said.
So evidently good support outcomes were never the goal.
- works 24/7
- no emotional damage
- easy to train
- cheap as hell
- concurrent, fast service possible
This was pretty much the very first thing to be replaced by AI. I’m pretty sure it’d be a way nicer experience for the customers.
Cheap as hell until you flood it with garbage, because there is a dollar amount assigned for every single interaction.
Also, I’m not confident that ChatGPT would be meaningfully better at handling the edge cases that always make people furious with phone menus these days.
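To put rough numbers on the per-interaction cost point: here’s a back-of-envelope sketch in Python. Every price and conversation size below is an assumption for illustration, not real vendor pricing, but the shape of the problem holds either way: someone flooding the bot with long junk conversations can inflate the per-chat cost by a couple orders of magnitude.

```python
# Back-of-envelope cost per chatbot conversation.
# All numbers are ASSUMPTIONS for illustration, not real vendor pricing.

PRICE_PER_1K_INPUT_TOKENS = 0.01   # assumed API price, USD
PRICE_PER_1K_OUTPUT_TOKENS = 0.03  # assumed API price, USD

def cost_per_chat(turns: int, input_tokens_per_turn: int, output_tokens_per_turn: int) -> float:
    """Estimate the API cost of one support conversation."""
    input_cost = turns * input_tokens_per_turn / 1000 * PRICE_PER_1K_INPUT_TOKENS
    output_cost = turns * output_tokens_per_turn / 1000 * PRICE_PER_1K_OUTPUT_TOKENS
    return input_cost + output_cost

# A typical conversation vs. a garbage-flood conversation (assumed sizes).
normal = cost_per_chat(turns=6, input_tokens_per_turn=500, output_tokens_per_turn=200)
flooded = cost_per_chat(turns=200, input_tokens_per_turn=4000, output_tokens_per_turn=500)

print(f"normal chat:  ~${normal:.3f}")   # on the order of cents
print(f"flooded chat: ~${flooded:.2f}")  # on the order of dollars, per abusive session
```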
This is just the smallest tip of the iceberg.
I’ve been working with gpt-4 since the week it came out, and I guarantee you that even if it never became any more advanced, it could already put at least 30% of the white-collar workforce out of business.
The only reason it hasn’t is because companies have barely started to comprehend what it can do.
Within 5 years the entire world will have been revolutionized by this technology. Jobs will evaporate faster than anyone is talking about.
If you’re very smart, and you begin to use gpt-4 to write the tools that will replace you, then you MIGHT have 10 good years left in this economy before humans are all but obsolete.
If you’re not staying up nights, scared shitless by what’s coming, it’s because you don’t really understand what gpt-4 can do.
I’m a senior Linux sysadmin who’s been following the evolution of AI over this past year just like you. And just like you, I’ve been spending my days and nights tinkering with it nonstop, and I’ve come to more or less the same conclusion as you have.
The downvotes are from people who haven’t used the AI, and who are still in the Internet 1.0 mindset. How people still don’t get just how revolutionary this technology is, is beyond me. But yeah, in a few years that’ll be evident enough; time will tell.
I feel sorry for these folks. They have no idea what’s about to happen.
@flossdaily@lemmy.world
@anarchy79@lemmy.world
@SirGolan@lemmy.sdf.org
I quite agree. And, from SirGolan’s reference (submitted 3 Oct 2023): Language Models Represent Space and Time
(from the summary): …Our analysis demonstrates that modern LLMs acquire structured knowledge about fundamental dimensions such as space and time, supporting the view that they learn not merely superficial statistics, but literal world models.
https://arxiv.org/abs/2310.02207
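For anyone wondering what “probing” means concretely in that paper: the authors fit simple linear regressions from a model’s hidden activations to real-world coordinates (place → latitude/longitude, event → time). Here’s a minimal sketch of that idea using purely synthetic stand-in data, so the numbers themselves mean nothing; with a real model you’d swap the fake activations for hidden states pulled from the network.

```python
# Sketch of the linear-probe idea from arxiv.org/abs/2310.02207:
# fit a linear map from hidden activations to real-world coordinates.
# The "activations" below are synthetic stand-ins, not real model states.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_places, hidden_dim = 1000, 512
true_latlong = rng.uniform([-90.0, -180.0], [90.0, 180.0], size=(n_places, 2))

# Synthetic activations: a random linear embedding of the coordinates plus noise,
# standing in for hidden states you would collect from an actual LLM.
embed = rng.normal(size=(2, hidden_dim))
activations = true_latlong @ embed + rng.normal(scale=5.0, size=(n_places, hidden_dim))

X_train, X_test, y_train, y_test = train_test_split(
    activations, true_latlong, test_size=0.2, random_state=0
)

probe = Ridge(alpha=1.0)   # a simple linear probe
probe.fit(X_train, y_train)
print("held-out R^2:", probe.score(X_test, y_test))
```

If a probe this simple can recover coordinates from activations on held-out data, the information is linearly encoded in the representations, which is the paper’s core claim about space and time.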
What makes it worse (in my opinion) is that LLMs are just one step in this development (which is exponential and not limited by human capabilities).
For example :
Numenta launches brain-based NuPIC to make AI processing up to 100 times more efficient
https://lemmy.world/post/4941919

Wait, why the hell am I suddenly getting notifications for a reply I made to some thread four months ago? Is this right? In any case, interesting take. I’m not sure what it is, though. What is your point? Not looking to be an ass, just want to understand.
Since I forgot what I was saying here 4 months ago, I read the whole thread again. Basically, what I said is that I agree with what you said then (4 months ago), and I added a couple of references/ideas to make that point stronger.
Also, I have no idea why you received this notification only today, 4 months after the discussion. I guess the Lemmy software is buggy; for my account, I didn’t receive notifications in a few instances where someone replied to my comments, and I only happened to see those replies because I was re-reading everything anyway.
take care, 👍