I saw people complaining that companies are yet to find the next big thing with AI, but I am already seeing countless of them offering good solutions for almost every field imaginable. What is this thing the tech industry is waiting for, and what are all these current products if not what they had in mind?
I am not great at understanding the business point of view of this situation, and I have been out of the news loop for a long time, so I would really appreciate it if someone could ELI5.
Here’s a secret. It’s not true AI. All the hype is marketing shit.
Large language models like GPT, Llama, and Gemini don’t create anything new. They just regurgitate existing data.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Until an LLM can understand why it is wrong, we won’t have true AI.
It’s just a stupid probability bucket. The term AI shits me.
It is true AI, it’s just not AGI. Artificial General Intelligence is the sort of thing you see on Star Trek. AI is a much broader term and it encompasses large language models, as well as even simpler things like pathfinding algorithms or OCR. The term “AI” has been in use for this kind of thing since 1956, it’s not some sudden new marketing buzzword that’s being misapplied. Indeed, it’s the people who are insisting that LLMs are not AI that are attempting to redefine a word that’s already been in use for a very long time.
You can see this when chatbots keep giving the same two pieces of incorrect information. They have no concept that they are wrong.
Reminds me of the classic quote from Charles Babbage:
“On two occasions I have been asked, ‘Pray, Mr. Babbage, if you put into the machine wrong figures, will the right answers come out?’ … I am not able rightly to apprehend the kind of confusion of ideas that could provoke such a question.”
How is the chatbot supposed to know that the information it’s been given is wrong?
If you were talking with a human and they thought something was true that wasn’t actually true, do you not count them as an intelligence any more?
That’s not a secret. The industry constantly talks about the difference between LLMs and AGI.
Until a product goes through marketing and they slap ‘Using AI’ into the blurb when it doesn’t use any.
LLMs are AI. They are not AGI. AGI is a particular subset of AI; that does not preclude non-general AI from being AI.
People keep talking about how it just regurgitates information, and says incorrect things sometimes, and hallucinates or misinterprets things, as if humans do not also do those things. Most people just regurgitate information they found online, true or false. People frequently hallucinate things they think are true and stubbornly refuse to change when called out. Many people cannot understand when and why they’re wrong.
People can also stop saying words and think for a second about the information they’re actually saying first, whereas an LLM just vomits up words that seem to match the pattern of the rest of the sentence. If I were to ask you what 2 + 2 is, you’d stop, run the math in your head, get 4, then reply with 4. An LLM would just start vomiting out words based on what it’s been trained on without verifying that the information is good (or even relevant), and can end up confidently telling you that 2 + 2 is in fact equal to the cube root of 5 because that’s what the data said so it has to be right, for instance.
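For what it’s worth, here’s a toy sketch of what that mechanically looks like. The tiny hard-coded probability table below is a made-up stand-in for a trained model, nothing like a real LLM’s scale, but the control flow is the point: each next word is sampled from learned continuations, with no verification step anywhere in the loop.

```python
import random

# Toy next-word table standing in for a trained model. Nothing in the
# generation loop below checks arithmetic or facts; it only follows
# whatever continuations scored well in "training".
NEXT_WORD = {
    "2 + 2": [("is", 1.0)],
    "is": [("4", 0.7), ("the", 0.3)],  # bad training data gives "the" real weight
    "the": [("cube", 1.0)],
    "cube": [("root", 1.0)],
    "root": [("of", 1.0)],
    "of": [("5", 1.0)],
}

def generate(prompt: str, max_tokens: int = 6) -> str:
    out = [prompt]
    key = prompt
    for _ in range(max_tokens):
        candidates = NEXT_WORD.get(key)
        if not candidates:
            break
        words, weights = zip(*candidates)
        key = random.choices(words, weights=weights)[0]
        out.append(key)
    return " ".join(out)

print(generate("2 + 2"))  # "2 + 2 is 4", or confidently "2 + 2 is the cube root of 5"
```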
I’m aware this is a drastic oversimplification, and I think the tech is neat (although I avoid non-self-hosted models like the plague due to privacy concerns), but it’s oversold to all hell, and is definitely not even close to intelligent.
You haven’t really looked into multi-agent setups at all, have you? Basically any system of multiple agents can double-check themselves.
Additionally, none of this conflicts with my original point. If you train a human on bad data, they’ll GIGO too. I know plenty of humans who have confidently told me objectively false things because they had bad training data.
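To make the double-checking concrete, here’s a minimal sketch of the generate-then-verify pattern those multi-agent setups use. `ask_model` is a hypothetical stub for whatever LLM client you’d actually wire in; it is not a real library call.

```python
def ask_model(role: str, prompt: str) -> str:
    # Hypothetical stub: replace with a call to your actual LLM client.
    raise NotImplementedError("wire this up to an LLM API")

def answer_with_review(question: str, max_rounds: int = 3) -> str:
    draft = ask_model("writer", question)
    for _ in range(max_rounds):
        critique = ask_model(
            "reviewer",
            f"Question: {question}\nDraft answer: {draft}\n"
            "List any factual errors, or reply APPROVED if there are none.",
        )
        if critique.strip() == "APPROVED":
            return draft  # the second agent signed off
        draft = ask_model("writer", f"Revise the draft to fix these issues:\n{critique}")
    return draft  # best effort; the reviewer never approved
```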
Large language models like GPT, Llama, and Gemini don’t create anything new
That’s because it is a stupid use case. Why should we expect AI models to be creative, when that is explicitly not what they are for?
Disclaimer: I currently work in the field, not on the fundamental side of things, but I build tooling for LLM-based products.
There are a ton of true uses for newer AI models. You can already see specialized products getting mad traction in their respective niches, and the clients are very satisfied with them. It’s mostly boring stuff, legal/compliance like Hypercomply or accounting like Chaintrust. It doesn’t make headlines but it’s obvious if you know where to look.
You’re falling into a no true Scotsman fallacy. There are plenty of uses for recent AI developments, I use them quite frequently myself. Why are those uses not “true” uses?
Because by design, once an AI implementation finds a use, it changes names. It has to; it’s just how marketing this stuff works. We don’t use writer AI, we have predictive text; we don’t have vision AI, we have enhanced imaging cancer diagnosis; we don’t have meeting AI, we have automatic transcription; we don’t have voice AI, we have software dictation. And this is not exclusive to AI; all fields of technology research follow the same pattern.

Because selling AI is a grift. No matter how much you want to fold it, it’s the same thing as selling NFTs or blockchain or any of the previous tech grifts: solutions without problems. No one actually has a use for a fancy chatbot. And when they do and get a nice chatbot going, they won’t call it AI, because AI is associated with grifts and no one wants that perception problem. But when you actually make a product that solves a problem, you sell that product, and you stop selling AI.

Also, AI is way larger than the current stream of LLMs.
The most successful applications (e.g. translation, medical image processing) aren’t marketed as “AI”. That term seems to be mostly used for more controversial applications, when companies want to distance themselves from the potential output by pretending that their software tools have independent agency.
They’re looking for something like the internet or smartphones and are disappointed that it’s not doing something on that level. Doesn’t matter that there’s tons of applications in science and art (even if we’d like to ignore the latter).
Or maybe they thought we’d have human level AI by now.
I’m pretty chuffed with what we have now. Considering this sort of stuff really hasn’t been around that long, the average person can already utilize an “AI” in their everyday life without even knowing how to use a computer.
Sure, it’s not 100% perfect, but I’ll take “stupidly convenient and right 90% of the time” over “takes hours of sifting through blogspam to find useful information that may or may not be correct”. Especially when it comes to mundane stuff like writing a resume or things where you have the knowledge, but just not the time.
Between OCR and LLM, summarising scanned things (something I do ~20% of the time) has about halved in terms of mental effort and time. As I’m paid on billable hours, this is big for me. I have told nobody and have not increased my overall output commensurately. This is the only good kind of automation I’ve observed: bottom-up, no decrease in compensation, no negotiations.
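A rough sketch of what that kind of pipeline can look like, assuming pytesseract for the OCR step and a locally hosted model behind an OpenAI-compatible endpoint (both of those are my assumptions, not necessarily the tooling described above):

```python
# Rough sketch of an OCR -> local-LLM summarising pipeline.
# Assumes pytesseract for OCR and a local model server exposing an
# OpenAI-compatible chat endpoint (llama.cpp, Ollama, etc.).
import requests
import pytesseract
from PIL import Image

def summarise_scan(image_path: str) -> str:
    text = pytesseract.image_to_string(Image.open(image_path))
    resp = requests.post(
        "http://localhost:8080/v1/chat/completions",  # your local server
        json={
            "model": "local",
            "messages": [
                {"role": "user",
                 "content": f"Summarise this document in five bullet points:\n{text}"}
            ],
        },
        timeout=120,
    )
    return resp.json()["choices"][0]["message"]["content"]

print(summarise_scan("scanned_contract.png"))
```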
I tried FreedomGPT for better personal ownership, but for now, the hardware isn’t up to snuff for my needs. With stronger processing and somewhat better open source models I’ll be sitting pretty.
https://en.m.wikipedia.org/wiki/File:Gartner_Hype_Cycle.svg
It’s not as helpful as everybody thinks, and slowly people are realizing that.
Recently I saw AI transcribe a YT video. It was genuinely helpful.
I think most of the media coverage is hype. That doesn’t directly answer your question… But I take everything I read with a grain of salt.
Currently, for the tech industry, its main use is to generate hype and drive the speculation bubble. Whether it’s useful or not, slapping the word “AI” on things and offering AI services increases the value of your company. And I personally think that if they complain about this, it’s because they want the bubble even bigger, but they already did the most obvious things. But that has nothing to do with “finding a use” in the traditional sense (for the thing itself).
And other inventions came with hype, too. Like smartphones (the iPhone). Everyone wanted one. Lots of people wanted to make cash with that. But still, if something is super new, it’s not always obvious at what tasks it excels and what the main benefits are in the long term. At first everyone wants in just because it’s cool and everyone else has one. In the end it turned out not every product is better with an app (or Bluetooth). And neither a phone nor AI can (currently) do the laundry and the other chores. So there is a limit to the “use” anyways.
So I think the answer to your question of what they had in mind is: what else can we enhance with AI, or just slap the word on, to make people buy more? And to look cool in the eyes of our investors.
I think one of the next steps is the combination with robotics. That will make it quite a bit more useful: input from sensors, so AI can take part in the real world, not just the virtual one. But that’s going to take some time. We’ve already started, but it won’t happen overnight. And for the near future I think it’s going to be a gradual increase. AI just needs to get more intelligent, make fewer errors, and be more affordable to run. That gradual increase will provide me with a better translation service on my phone, a smart home I can interact with better, an assistant that can clean up the mess with all the files on my computer, organize my picture folder… But the revolution already happened. I think it’s going to be constant but smaller steps of progress from now on.
AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.
There are possibilities for consumer products (e.g. a smarter Alexa or Siri), but those aren’t monetized, so they cannot generate $100B in revenue from them.
There is the possibility of more innovative products, e.g. a smart Christmas toy, but AI needs a few more years to get there.
AI is being used to replace a lot of jobs, but companies usually do not want to advertise that.
I would be careful with that statement.
I’ve been involved in some projects about “leveraging data” to reduce maintenance costs. And a big pitfall is that you still need someone to do the job. Great, now you know that the “primary pump” is about to break. You still need to send a tech to replace it, and often you have to deal with a user who can’t afford to turn the system off until the repair is done, and you can’t let someone work alone in the area. So you end up having to send two people ASAP to repair the “primary pump”.
It’s a bit better in terms of planning/resources than “send two people to diagnose what’s going wrong, get the part, and do the repair”, and it lets you replace engineers who can do a diagnosis with technicians who can execute a procedure (which is itself an issue as soon as you have to think outside the box). It allows for more dynamic preventive maintenance planning. So somehow, it helped cut down maintenance costs and improve system reliability. But in the end, you still need staff to do the repair. And that’s leaving aside all the manpower needed to collect and process the data: hardware engineers working out how to integrate sensors into the machines, data engineers building a database able to hold the data, data scientists building efficient algorithms, product maintenance experts trying to make sense of the data, and so on.
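For illustration only, the “pump is about to break” signal can be as simple as flagging a sensor whose recent readings drift far above their long-run baseline; the field names and thresholds here are made up:

```python
# Toy illustration of a predictive-maintenance alert: flag a sensor whose
# recent readings sit well above the long-run baseline. Real systems are
# far more involved; this only shows the shape of the idea.
from statistics import mean, stdev

def maintenance_alert(readings: list[float], window: int = 24, z: float = 3.0) -> bool:
    baseline, recent = readings[:-window], readings[-window:]
    if len(baseline) < 2:
        return False
    mu, sigma = mean(baseline), stdev(baseline)
    return mean(recent) > mu + z * sigma  # recent vibration is anomalously high

# 200 normal readings, then 24 elevated ones from a failing pump.
vibration = [1.0 + 0.05 * (i % 3) for i in range(200)] + [2.5] * 24
if maintenance_alert(vibration):
    print("Schedule the primary pump replacement (and still send two techs).")
```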
I feel like a big chunk of AI will be similar, with some jobs being cut (or becoming less qualified) while tons of new jobs take over.
I’m not sure it’s going to be that. That was the model for the last wave of tech advancement layoffs and job replacements. This one is going to be so much dumber.
It’s no secret that most companies are stagnant or losing money right now across the board, for many reasons: disposable income is way down, the COVID mentality change (people decided they wanted to live instead of just consume), and products have just been getting worse. So CEOs are using AI to replace jobs that AI cannot yet replace. It immediately makes their bottom line look better for investors while doing nothing useful. This will bite them in the ass soon, but they’ll say AI was oversold and it’s not their fault. Meanwhile, it looks like the nothing they’re doing to improve their company is working, and they survive another day.
Nobody’s mentioning this, but when they say ‘next big thing’, what they mean is ‘being able to monetize it and make it profitable’.
They care about usefulness only insofar as it’s a way to monetize it. It doesn’t need to be useful at all. It’s maybe a nice buzzword for the PowerPoint slide when they’re trying to convince investors.
But investors aren’t idiots, and they are usually pretty fucking tuned in to whatever they put money on. And AI is very oversold and overpromised. It’s not that great, very difficult to get to do what you want, and very costly to operate, with mostly questionable/untrustworthy results that still require a lot of knowledge to work with. Plus it’s begging for a lot of new legislation to protect copyright and privacy. So we need a bunch of idiots with money to make this work, and those are usually in the large tech companies (think the Bezos and Musks of this world). They have the infrastructure and resources to put into it, and then they try to incorporate it into their ecosystem. They’ll probably fuck everything up forever, and probably make it so LLMs and other models are going to have to be destroyed to comply with legislation.
Anyone with a brain stays away from investing in this, or maybe hedges it a bit. See what happens… I don’t think there are going to be other companies popping up in this space, just the continuing progress of big tech streamlining their current systems until enough people are exposed to enough bullshit to change legislation. Depending on that, maybe some companies will be able to give us something useful, like: an AI personal assistant that, in the middle of a conversation, figures out it should put the appointment you just made in your agenda, books a table at the restaurant, reroutes you to a gas station because you’re low on fuel, and messages your spouse that you’ll be 5 minutes late because of it. All while your privacy is protected and your data secure.
In the meantime we can make pictures of cats in space wearing a clown costume.
A buddy told me he used AI to mostly author a PowerShell script for some automation task or other at his work the other day. Sounded like it was reasonably complex, and all he had to do was sanity-check the code and touch it up to make sure it worked correctly. I’ve barely dabbled in that area, but I was reasonably impressed with the small tasks I threw at it.
AI seems to be for coders what the PC was for Designers.
We used to have a guy for type, a guy for colours, a copywriter, an art director, and a graphic designer. Now it’s all one guy who’s responsible for everything, start to finish.