I think AI has mostly been about luring investors into pumping up share prices rather than offering something of genuine value to consumers.
Some people are gonna lose a lot of other people’s money over it.
Yes, I’m getting some serious dot-com bubble vibes from the whole AI thing. But the dot-com boom produced Amazon, and every company is basically going all-in in the hope that they’re the next Amazon. In the end most will end up like pets.com, but it’s a risk they’re willing to take.
“You might lose all your money, but that is a risk I’m willing to take”
Investors pump money into a bunch of companies so that at least one of them making it big and paying them back for all the failed investments is almost guaranteed. That’s what taking risks is all about.
If the whole sector turns out to be garbage it won’t matter which particular set of companies within it you invest in; you will get burned if you cash out after everyone else.
Sure, but it SEEMS that some investors are relying on buzzwords and hype, without research, and ignoring the fundamentals of investing, i.e., beyond the ever-evolving claims of the CEO: is the company well managed? What is their cash flow, and where will it be a year from now? Do the upper-level managers have coke habits?
You’re right, but those fundamentals don’t really matter anymore; investors are buying hype and hoping to sell an even bigger hype for more money later.
Seeing the whole thing as Knowingly Trading in Hype is actually a really good insight.
Certainly it neatly explains a lot.
Also called a Ponzi scheme, where every participant knows it’s a scam but hopes to find a few more fools before it crashes and to leave with a positive balance.
OpenAI will fail. StabilityAI will fail. CivitAI will prevail, mark my words.
A lot of it is follow-the-leader type bullshit. Companies in areas where AI is actually beneficial have already been implementing it for years, quietly, because it isn’t something new or exceptional. It’s just the tool you use for solving certain problems.
Investors going to bubble though.
Definitely. Many companies have implemented AI without putting even three brain cells’ worth of thought into it.
Great and useful implementations of AI exist, but they’re maybe 1 in 100 products right now.
If my employer is anything to go by, much of it is just unimaginative businesspeople who are afraid of missing out on what everyone else is selling.
At work we were instructed to shove ChatGPT into our systems about a month after it became a thing. It makes no sense in our system, and many of us advised management it was irresponsible, since it gives people advice on very sensitive matters without any guarantee that the advice is any good. But no matter, we had to shove it in there, with small print to cover our asses. I bet no one even uses it, but sales can tell customers the product is “AI-driven”.
My old company, before they laid me off, laid off the entire HR and Comms teams and replaced them with ChatGPT Enterprise.
“We can just have an AI chatbot handle HR and pay inquiries, and ask DALL-E to create icons and other content.”
A friend who still works there told me they’re hiring a bunch of “prompt engineers” to improve the quality of the AI outputs haha
That’s an even worse ‘use case’ than I could have imagined.
HR should be one of the most protected fields against AI, because you actually need a human resource.
And “prompt engineer” is so stupid. The “job” is only necessary because the AI doesn’t understand well enough what you want it to do. The only productive hire would be a programmer or someone who could actually tinker with the AI.
I’m sorry. Hope you find a better job on the inevitable downswing of the hype, when someone realizes that a prompt can’t replace a person in customer service. Customers will invest more time, i.e., even wait in purposely engineered hold-music hell, to have a real person listen to them.
My doorbell camera manufacturer now advertises their products as using “Local AI”, meaning they’re not relying on a cloud service to look at your video in order to detect humans/faces/etc. Honestly, it seems like a good (marketing) move.