It’s a hype bubble. AI has been around for a while and will continue to be; the problem is specifically Large Language Models. They’ve been trained to SOUND human, but not to actually use that ability for anything more useful than small talk and bullshit. But because it SOUNDS charismatic, and that’s interesting to people, companies have started cramming it into everything they can think of to impress shareholders.
Shareholders are a collective group of people who are, on average, really more psychologically similar to crows than to other humans: they like shiny things, have a mob mentality, and can only use the most basic of tools, in their case usually money. New things presented in a flashy way by a charismatic individual are most attractive to them, and they will seldom do any research beyond superficial first impressions. Any research they actually do generally skews toward confirmation bias.
This leads to an unfortunate feature of capitalism: the absolute need to make the numbers go up. To impress their shareholders, companies have to jangle keys in front of their faces. So whenever The Hip New Thing comes along, it’s all buzzwords and bullshit as they try to find any feasible way to cram it into their product. If they could make Smart Corn 2.0 powered by ChatGPT they would, and sell it for three times as much in the same produce aisle as normal corn. And then your corn would tell you about this great recipe it knows where the sauce is made with a battery-acid base.
In recent memory, this exact scenario played out with NFTs. When the NFT market collapsed, as was inevitable, the corporations that swore it would supercharge their sales all quietly pretended it never happened. Soon something new was jangled in front of the shareholders and everybody forgot about them.
Now that generative AI is proving itself to be just a really convincing bullshitter, it’s only a matter of time until it either dies and quietly slinks away or mutates into the next New Thing and the cycle repeats. Like a pandemic of greed and stupidity. Maybe they’ll figure out how to teach ChatGPT to check and cite verified sources and make it actually do what they currently claim it does.
I guess it depends on if they can make it shiny enough to impress the crows.
I think we’re in an AI bubble because Nvidia is way overvalued, and I agree with you that people often flock to shiny new things. Many people are taking risks in the hope of making it big… and many will get left holding the bag.
But how do you go from NFTs, which never had widespread market support, to the market pumping a trillion dollars into Nvidia alone? This makes no sense. And to downplay this as “just a bullshitter” leads me to believe you have, like, zero real-world experience with this. I use Copilot for coding and it’s been a boost to productivity for me, and I’m a seasoned vet. Even the AI search results, which have left me scratching my head many times, have been a net benefit to me in time savings.
And this is all still pretty new.
While I think it is overhyped and people are being ridiculous about how much this will change things, at the very least it’s going to be a huge new tool, and I think you’re setting yourself up to be left behind if you aren’t embracing it and learning how to leverage it.
The AI technology we’re using isn’t “new”: the core idea is several decades old, with only minor updates since then. We’re just using more parallel processing and bigger datasets to brute-force the “advances”. So, no, it’s not actually that new.
We need a big breakthrough in the technology for it to actually get anywhere. Without the breakthrough, we’re going to burst the bubble once the hype dies down.
I just don’t get this. There has not been some huge leap in processing power over the past few years, but there has been one in generative AI. Parallel processing, on the other hand, has been around for decades.
I just don’t know how one can look at this and think there hasn’t been some big step forward in AI, and instead claim it’s all processing power. I think it’s pretty obvious that there has been some huge leap in the generative AI world.
Also, I’ve been incorporating it more and more. It boggles my mind that someone would look at this and see a passing fad.
The landmark paper that ushered in the current boom in generative AI (“Attention Is All You Need”, Vaswani et al., 2017) is less than a decade old (and attention itself as a mechanism is from 2014), so I’m not sure where you are getting the idea that the core idea is “decades” old. Unless you are taking the core idea to mean neural networks, or digital computing?
Nvidia are the ones selling shovels in this gold rush, so it makes some sense that they’ll make a lot of money out of it even if it was fool’s gold all along.
Is Nvidia’s shovel-selling sustainable? I doubt it: when the gold rush is over, the demand for shovels will fall. However, we’re long past the era when most money being pumped into the stock market was actually controlled by investors who cared about prospects beyond the next quarter, and it does make sense that speculative investors would seek to profit from the rise in Nvidia’s profits due to their shovel-selling for this gold rush, even if it later falls back again.
Sure, but NFTs did not generate this many shovels being sold.
Which together with my point explains your own “But how do you go from NFTs, which never had widespread market support, to the market pumping a trillion dollars into Nvidia alone?” question.
Not only would LLMs and other, more advanced generative AI have a significantly broader impact than NFTs if they lived up to the hype, but in technical terms they’re much more dependent on GPUs for functioning at any decent speed than NFTs ever were, as you can see in this comparison I just found with DDG.
Mind you, if you meant Bitcoin rather than NFTs (since the last big demand for GPUs was for Bitcoin mining rather than NFTs), the point that the possible impact of generative AI is much broader still explains it. Plus, if I remember correctly, Nvidia stock did get pulled up by the whole Bitcoin-mining demand for GPUs (I vaguely remember something about their share price doubling within a few years, but I’m not sure anymore).
Also keep in mind that stock markets at their most speculative end, i.e. Tech, have a huge herding effect: everybody wants to jump onto the “next big thing” hype train as soon as possible, and keeps wanting to as long as it seems to be going (i.e. as long as they think there’s a “Greater Fool” they can dump their overvalued stock on if and when it stops). So there’s a huge snowballing effect that pushes stock prices and company valuations far beyond anything explainable by actual or likely future improvements in their financial situation. This is how we get Tesla reaching a market valuation greater than the rest of the auto industry put together, even though the former sells far fewer vehicles than the latter.
Stock-market considerations, especially in its most speculative parts, are not about “how much wealth can this company produce” but about “for how much more money can I sell this stock later”. That comes down to one’s own “smarts”, advantages others don’t have (such as insider info), and the gullibility of others, not the actual finances and accounting of the company itself, which is why hype works so well to pump up the valuations of Tech companies.
Yes.
It’s frankly ridiculous. AI is useful, but the only company that deserves this much in stock-market gains is possibly Nvidia, because they make actual sales of actual hardware.
Everyone else puts out AI that is much worse than OpenAI’s GPT, with tons of marketing to try to make up for it.
There’s an AWS Summit in a couple of days where I live, and there were more AI breakout sessions than you can shake a stick at. It’s definitely overhyped.
AMD
Even Nvidia’s gains are based on selling that hardware to outlets buying GPUs in shipping container quantities to run huge LLMs. If those companies dry up, so will Nvidia’s market position. At the other side of things, their traditional market is gaming, but that industry is contracting and may not be making games that push hardware limits the way it used to. Their best long term bet might be a rumored Steam Deck/Switch like handheld (one of their own making; they do supply the chips for the Switch and upcoming Switch 2), but that’s not going to justify a $3T market cap.
You’ve invoked Betteridge’s law of headlines.
It depends
Here is an alternative Piped link(s):
https://piped.video/watch?v=Qh7Dqzm8f7Y
Piped is a privacy-respecting open-source alternative frontend to YouTube.
I am just waiting for the trough of disillusionment to arrive…
Of course we are. It’s really all hype over half-assed, barely helpful prototypes at the moment.
I hope so, ‘cause I just got a patent pending and I’m goin’ out to raise a bunch a money.
Anyone remember the dotcom boom (and bust)? The AI hype bubble reminds me a lot of that. It ticks all the same boxes: wild new tech showing up all the time, stratospheric hype, corporate FOMO, a money spigot that seems to be spraying investments at any company with AI in the name, business plans that lose money per unit sold but plan to “make it up at scale.” And unlike the last 16 years this is all happening when interest rates are non-zero so money actually costs something.
When I think about the dotcom boom and bust I tend to group the companies into 3 or so broad categories:
- Companies that were doing the right thing at the right time. These are the companies that weren’t necessarily pushing the envelope from a technology perspective; they were building a business model on where the technology was at the time but that could improve as the technology did. In the dotcom days the business model that most exemplifies that was e-commerce. Amazon and eBay grew up in the dotcom era and survived the bust no problem because they were already profitable by the time the investment money stopped flowing.
- Companies that were way too early. These are the ones that had a great vision but that were too far ahead of the technology curve. Did you know we had online grocery delivery in 1999? Webvan tried to move fast and corner the market but due to mismanagement and the tech and market not being ready they crashed hard in 2001. Grocery delivery is of course totally commonplace today, but even if Webvan wasn’t mismanaged I find it highly unlikely that they could have succeeded when less than half the country even had dialup and the common wisdom of the day was to not type your credit card number online.
- And last but not least, you’ve got the startups that never really had a business plan and the existing companies just jumping on the hype train because of FOMO. Startups were getting investment dollars just to … build a website. Big companies were putting up totally contentless “web experiences.” Suddenly every breakfast cereal had a website. Did it have nutrition information? No. Online ordering? No. Mostly it was just marketing drivel and maybe a recipe for snack mix if you’re lucky. These are the ones I think of when I hear that Taco Bell is going “AI-First.”
Anyways, there’s more I could say about why I think this will play out faster than crypto did but this is already a wall of text. For all the people who missed the dotcom boom: Enjoy the hype cycle. It’ll be a smoking crater before you know it. :)
I was in high school during the boom and my career plan was:
- Go work for a startup doing computer stuff
- Get stock options
- Retire before 30 when it goes public
The landscape after graduating college was… different.
Making it a question makes me question which reality the author lives in.
It’s TL;DR News, so that’s normal.
AI isn’t the bubble; that’ll keep on improving, although probably not at this rate.
The hype bubble is companies adding AI to their product where it offers very little, if any, added value, which is incredibly tedious.
The latter bubble can burst, and we’ll all be better for it. But generative AI isn’t going anywhere.
We referred to the dotcom bubble as the dotcom bubble, but that didn’t mean that the web went away, it just meant that companies randomly tried stuff and had money thrown at them because the investors had no idea either.
So same here, AI bubble because it’s being randomly attempted without particular vision with lots and lots of money, not because the technology fundamentally is a bust.
That’s a good thing to put it in perspective, yeah. The amount of people who think AI is just a fad that will go away is staggering.
Yeah, right now the loudest voices are either “AI is ready to do everything right now or in a few months” or “This AI thing is worthless garbage” (both in practice refer to LLM specifically, even though they just say “AI”, the rest of AI field is pretty “boringly” accepted right now). There’s not a whole lot of attention given to more nuanced takes on what it realistically can/will be able to do or not do. With proponents glossing over the limitations and detractors pretending that every single use of LLM is telling people to eat rocks and glue.
Yeah, I’m super salty about the hype, because if I had to pick one side or the other, I’d be on team “AI is worthless”. But that’s just because I’d rather try convincing a bunch of skeptics that, when used wisely, AI/ML can be super useful than try to talk some sense into the AI fanatics. It’s a shame, though, because I feel like the longer the bubble takes to pop, the more harm actual AI research will suffer.
it’s a fad in terms of the hype and the superstition.
it won’t go away. it will just become boring and mostly a business to business concern that is invisible to the end consumer. just like every other big fad of the past 20 years. ‘big data’, ‘crypto’, etc.
5 years ago everyone was suddenly a ‘data scientist’. where are they now? yeah… exactly.
Improving but to what end? If it’s not something that the public will ultimately perceive as useful it will tank no matter how hard it’s pushed.
I saw a quote that went something like, “I want AI to do my laundry so I can have time for my art, not to do art while I keep doing laundry”.
Art vs laundry is an extreme example but the gist of it is that it should focus on practical applications of the mundane sort. It’s interesting that it can make passable art but ultimately it’s mediocre and meaningless.
This
AI is actually providing value and advancing at a huge rate; I don’t know how people can dismiss that so easily.
How has it helped you personally in every day life?
And if it’s doing some of your job with prompts that anybody could write, should you be paid less, or should you be replaced by someone juggling several positions?
I’m using LLMs to parse and organize information in my file directory: turning bank receipts into JSON files, automatically renaming downloaded movies into a more legible format I prefer, and summarizing clickbaity YouTube videos. I use Copilot in VS Code to code much faster, and ChatGPT all the time to discover new libraries and cut fast through boilerplate. I also have a personal assistant with access to a lot of metrics about my life (meditation streak, when I exercise, the status of my system, etc.) that helps me make decisions…
I don’t know about you but I feel like I’m living in an age of wonder
I’m not sure what to say about the prompts, I feel like I’m integrating AI in my systems to automate mundane stuff and oversee more information, I think one should be paid for the work and value produced
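To make the “bank receipts into JSON” idea concrete, here’s a minimal sketch of how that kind of pipeline usually looks. Everything below is my own illustration, not the commenter’s actual setup: the prompt wording, the key names, and the canned model reply are all made up, and the real LLM API call is replaced by a stub string so the parse-and-validate step can be shown end to end.

```python
import json

# Prompt you would send to an LLM alongside the raw receipt text.
# Asking for "JSON only" keeps the reply machine-parseable.
PROMPT = (
    "Extract merchant, date (YYYY-MM-DD), and total from this receipt. "
    "Reply with JSON only, using the keys: merchant, date, total."
)

def parse_receipt_reply(reply: str) -> dict:
    """Validate a model reply: it must be JSON containing the expected keys."""
    data = json.loads(reply)  # raises ValueError if the model added chatter
    missing = {"merchant", "date", "total"} - data.keys()
    if missing:
        raise ValueError(f"model reply is missing keys: {missing}")
    return data

# Stub standing in for the actual LLM call, so the flow is runnable here.
fake_reply = '{"merchant": "Corner Grocery", "date": "2024-05-01", "total": 23.17}'
record = parse_receipt_reply(fake_reply)
```

The validation step is the part that matters in practice: since LLM output isn’t guaranteed to be well-formed, you treat the reply as untrusted input and fail loudly instead of writing a broken file.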
Your question sounds like a trap but I found a bunch of uses for it.
- Rewriting emails
- Learning quickly how to get popular business software to do stuff
- Wherever I used to use a search engine
- Set up study sessions on a topic I knew very little about: I scan the text, read it, give it to the AI/LLM, discuss the text, have it quiz me, then move to the next page.
- Used it at a poorly documented art collection to track down pieces.
- Basically everything I know about baking. If you are curious my posts document the last 7 months or so of my progress.
- Built a software driver (a task I hate) almost completely by giving it the documentation
- Set it up so it can make practice tests for my daughter’s schoolwork
- Explored a wide range of topics
Now go ahead and point out that I could have done all this myself with just Google, the way we did back in the day. That’s the thing about this stuff: you can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved, or solving problems no one cares about. Whenever I get yelled at or hear people complain about opposite things, I know they just want to be angry and have no argument. It’s just ragefully throwing things at the wall to see what sticks.
You can always make an argument that some new thing is bad by pointing out it is solving problems that were already solved or solving problems no one cares about.
That’s not the issue. I’m not a luddite. The issue is that you can’t rely on its answers: the accuracy varies wildly, and if you trust it implicitly there’s no way of telling what you end up with. The human learning process normally involves comparing information to previous information, some process of vetting, during which your brain “muscles” are exercised so they keep getting better at it. It’s like being fed in bed and never getting up to do anything yourself, and to top it off you don’t even know if you’re being fed correct information.
The issue is that you can’t rely on its answers.
Cough… Wikipedia…cough. You remember being told how Wikipedia wasn’t accurate and the only true sources were books made by private companies that no one could correct?
Human learning process normally involves comparing information to previous information, some process of vetting, during which your brain “muscles” are exercised so they become better at it all the time.
Argument from weakness. Classic luddite move. I am old enough to remember the fears that internet search engines would do this.
In any case, no one is forcing you to use it. I’m sure if you called up Britannica and told them to send you a set, they’d be happy to.
Nah, but once the “put AI in everything” bubble bursts, companies will have a sour taste from it and won’t be so interested in investing in it.
I believe we’ll get a lot of good improvements out of it, but in people’s minds AI will be that weird thing that never worked quite right. It’ll be another meme, like Cortana on Windows, so it won’t drive stock prices at all unless you’re doing something really cutting-edge.
And good luck competing with the tech giants
Remember how smartwatches were a big deal and now no one cares? That’ll probably happen with AI. Hopefully in 10 or so years, whatever losses corporations suffer when all the money they’ve invested goes to waste will help the common person somehow. Maybe GPUs and computer parts will stop costing so much, for example. Maybe their leadership will collapse, things will change, and the tech industry will be a good place to work again.
Pretty much all technology goes through the same odd shape of adoption.
What is often really hard to tell is where you are, until you’re definitely in the Trough of Disillusionment. We could be practically very early on the way up, with human-level or above AI coming, or near the Peak of Inflated Expectations, about to crash down before finally finding a use that is less hype and more worthwhile. Regulation will certainly slow things down a bit toward the peak.
I am not sure whether slightly better chatbots that still lie, and image generators that do look reasonably good, are the peak or just the beginning. Progress has been dramatic in the years since their invention, but the cost of training is now immense, and it would require a breakthrough to make big steps of improvement; I am not sure we are going to make that one. A lot of billionaire money is riding on it.
How can there be a bubble when there are no AI products?
Eh, it depends on what we count as “AI”. I’m in a field where machine learning has been a thing for years, and there’s been a huge amount of progress in the last couple of years[1]. However, it’s exhausting that so much is being rebranded as “AI”, because the people holding the purse strings aren’t necessarily the same scientists who are sick of the hype.
[1] I didn’t get into the more computational side of things until 2021 or so, but if I had to point to a catalyst for this progress, I’d say it was the transformer architecture outlined in the 2017 paper “Attention Is All You Need” by Google scientists.
Yes. Tech is a hype-based sector where the actual value of products is obfuscated by marketing. When the AI craze settles down, hopefully we’ll stop seeing it injected into everything.
By everything do you mean 1/3 of lemmy.world articles and comments, or…?
I mean AI bullshit like Copilot being added to Windows then quietly paywalled, ridiculous products like the Rabbit R1, and the arms race involving products that aren’t ready for market or produced ethically. LLMs still just make shit up a lot of the time.
It will be like the IoT boom of 2015-ish, when everyone and their mother had a new IoT product on Kickstarter.
I just hope it dies fast so that we can concentrate on the actual utility of LLMs and other AI sectors.