For me, if a company fails to make a clear-cut case for why a product of theirs needs AI, I’m gonna assume they just want to misuse AI to cheaply deliver a mediocre product instead of putting in the necessary man-hours.
It’s really simple: There are a number of use cases where generative AI is a legitimate boon. But there are countless more use cases where AI is unnecessary and provides nothing but bloat, maybe novelty at best.
Generative AI is neither the harbinger of doom nor the savior of humanity. It’s a tool. Just a tool. We’re just caught in this weird moment where people are acting like it’s an all-encompassing multipurpose tool right now instead of understanding it as the limited, specific-use tool it actually is.
It’s a tool. Just a tool.
And, more often than not, it’s a poorly implemented tool that didn’t need to be added to the product in the first place.
Yes, that was literally my point. A plumbing wrench is a perfectly useful and wonderful tool, but it isn’t going to be much help in the middle of brain surgery. Tools have use cases; they can’t be applied to just any situation.
I wonder if we’ll start seeing these tech investor pump n’ dump patterns faster collectively, given how many have happened in such a short amount of time already.
Crypto, Internet of Things, Self Driving Cars, NFTs, now AI.
It feels like the futurism sheen has started to waver. When everything gets hyped as a major revolution, inserted into every product, and then isn’t one, it gets exhausting.
Internet of Things
This is very much not a hype and is very widely used. It’s not just smart bulbs and toasters. It’s burglar/fire alarms, HVAC monitoring, commercial building automation, access control, traffic infrastructure (cameras, signal lights), ATMs, emergency alerting (like how a 911 center dispatches a fire station, there are systems that can be connected to a jurisdiction’s network as a secondary path to traditional radio tones) and anything else not a computer or cell phone connected to the Internet. Now even some cars are part of the IoT realm. You are completely surrounded by IoT without even realizing it.
Huh, didn’t know that! I mainly mentioned it for the fact that it was crammed into products that didn’t need it, like fridges and toasters where it’s usually seen as superfluous, much like AI.
I would beg to differ. I thoroughly enjoy downloading various toasting regimens. Everyone knows that a piece of white bread toasts differently than a slice of whole wheat. Now add a sourdough home slice into the mix. It can get overwhelming quite quickly.
Don’t even get me started on English muffins.
With the toaster app I can keep all of my toasting regimens in one place, without having to wonder whether it’s going to toast my pop tart as though it were a hot pocket.
Bagels are a whole different set of data than bread. New bread toasts much more slowly than old bread.
I mean, give the thing a USB interface so I can use an app to set timing presets instead of whatever UX nightmare it’d otherwise be, and I’m in. Nowadays it’s probably cheaper to throw in a MOSFET and a tiny chip than to use a bimetallic strip: fewer and less fickle parts. And when you already have the capability to be programmable, why not use it? Connecting it to an actual network? Get out of here.
Yea I’m being a little facetious I hope it is coming through lol
don’t forget Big Data
TimeSquirrel made a good point about Internet of Things, but Crypto and Self Driving Cars are still booming too.
IMHO it’s a marketing problem. They’re major evolutions taking root over decades. I think AI will gradually become as useful as lasers.
It’s more of a macroeconomic issue. There’s too much investor money chasing too few good investments. Until our laws stop favoring the investor class, we’re going to keep getting more and more of these bubbles, regardless of what they are.
Yeah it’s just investment profit chasing from larger and larger bank accounts.
I’m waiting for one of these bubble pops to do lasting damage, but with the amount of protections specifically for them, and money that can’t be allowed to be “lost”, it’s everyone else that has to eat dirt.
LLMs: using statistics to generate reasonable-sounding wrong answers from bad data.
Often the answers are pretty good. But you never know if you got a good answer or a bad answer.
With proper framework, decent assertions are possible.
- It must cite the source and provide the quote, not just a summary.
- An adversarial review must be conducted.
If that is done, the workload on the human is very low.
That said, it’s STILL imperfect, but this is leagues better than one-shot question and answer.
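The first bullet above (cite the source and provide the quote, not just a summary) can be enforced mechanically: reject any answer whose quoted passage doesn’t appear verbatim in the document it cites. A minimal sketch, with entirely hypothetical document names and a hardcoded source store standing in for a real retrieval step:

```python
# Minimal sketch of the "must cite and quote" check described above.
# Names and documents are hypothetical; a real pipeline would pull
# sources from a retrieval step rather than a hardcoded dict.

SOURCES = {
    "doc-1": "The melting point of gallium is 29.76 degrees Celsius, "
             "so it melts in your hand.",
    "doc-2": "Gallium was discovered by Lecoq de Boisbaudran in 1875.",
}

def quote_is_grounded(cited_doc: str, quote: str) -> bool:
    """Accept an answer only if its quote appears verbatim in the cited source."""
    source = SOURCES.get(cited_doc)
    return source is not None and quote in source

# A grounded answer: the quote really is in doc-1.
assert quote_is_grounded("doc-1", "melts in your hand")
# A confabulated answer: plausible-sounding, but not in the cited source.
assert not quote_is_grounded("doc-2", "discovered in 1857")
```

The adversarial-review step is harder to sketch, but the same principle applies: the reviewer only ever checks claims against retrieved text, never against the model’s own say-so.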
Except LLMs don’t store sources.
They don’t even store sentences.
It’s all a stack of massive N-dimensional probability spaces roughly encoding the probabilities of certain tokens (which are mostly but not always words) appearing after groups of tokens in a certain order.
And all of that to just figure out “what’s the most likely next token”, an output which is then added to the input and fed into it again to get the next word and so on, producing sentences one word at a time.
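That “predict, append, repeat” loop is easy to show with a toy model. Real LLMs use a huge neural network over subword tokens rather than a lookup table of word pairs, but the generation loop has the same shape:

```python
# Toy illustration of the loop described above: pick the most likely
# next token, append it to the input, feed it back in, repeat.
# Real models replace this bigram table with a neural network.

BIGRAMS = {
    "the":  {"cat": 0.6, "dog": 0.4},
    "cat":  {"sat": 0.7, "ran": 0.3},
    "sat":  {"down": 0.9, "up": 0.1},
    "down": {"<end>": 1.0},
}

def generate(prompt: list[str], max_tokens: int = 10) -> list[str]:
    tokens = list(prompt)
    for _ in range(max_tokens):
        choices = BIGRAMS.get(tokens[-1])
        if not choices:
            break
        # Greedy decoding: always take the highest-probability next token.
        next_token = max(choices, key=choices.get)
        if next_token == "<end>":
            break
        tokens.append(next_token)
    return tokens

print(generate(["the"]))  # ['the', 'cat', 'sat', 'down']
```

Note there is no fact store anywhere in this loop, only probabilities of what word tends to follow what.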
Now, if you feed it as input a long, very precise sentence taken from a unique piece, maybe you’re lucky and it will output the correct next word, but if you already have all that, you don’t really need an LLM to give you the rest.
Maybe the “framework” you seek - which is quite akin to an indexer with a natural language interface - can be made with AI, but it’s not something you can do with LLMs alone because their structure is entirely unsuited for it.
The proper framework does, with data store, indexing and access functions.
The cutting-edge work is absolutely using LLMs in post-RAG pipelines.
Consumer-grade chat interfaces definitely do not do this.
Edit: if you worry about topics like context windows, sentence splitting, or source extraction, you aren’t using a best-in-class framework anymore.
They really aren’t. Go ask about something in your area of expertise. At first glance, everything will look correct and in order, but the more you read the more it turns out to be complete bullshit. It’s good at getting broad strokes but the details are very often wrong.
Now imagine someone that doesn’t have your expertise reading that answer. They won’t recognize those details are wrong until it’s too late.
That is about the experience I have. I asked it for factual information in the field I work in. It didn’t give correct answers. Or it gave protocols which looked workable but were strange and would not be successful.
And the system doesn’t know either.
For me this is the major issue. A human is capable of saying “I don’t know”. LLMs don’t seem able to.
Accurate.
No matter what question you ask them, they have an answer. Even when you point out their answer was wrong, they just have a different answer. There’s no concept of not knowing the answer, because they don’t know anything in the first place.
The worst for me was a fairly simple programming question. The class it used didn’t exist.
“You are correct, that class was removed in OLD version. Try this updated code instead.”
Gave another made up class name.
Repeated with a newer version number.
It knows what answers smell like, and the same with excuses. Unfortunately there’s no way of knowing whether it’s actually bullshit until you take a whiff of it yourself.
Adobe Acrobat has added AI to their program and I hate it so much. Every other time I try to load a PDF it crashes. Wish I could convince my boss to use a different PDF reader.
LLM based AI was a fun toy when it first broke. Everyone was curious and wanted to play with it, which made it seem super popular. Now that the novelty has worn off, most people are bored and unimpressed with it. The problem is that the tech bros invested so much money in it and they are unwilling to take the loss. They are trying to force it so that they can say they didn’t waste their money.
Honestly, they’re still impressive and useful; it’s just the hype-train overload, and trying to implement them in areas where they either don’t fit or don’t work well enough yet.
AI does a good job of generating character portraits for my TTRPG games. But, really, beyond that I haven’t found a good use for it.
So far that’s been the best use of AI for me too. I’ve also used it to help flesh out character backgrounds, and then I just go through and edit it.
Yeah exactly, as a tool that doesn’t need to be perfect to give you a starting point it’s excellent. But companies sort of forgot the “as a tool” part and are just implementing AI outright in places it’s not ready for yet, like drive-thru windows or voice-only interface devices… it’s not ready for that shit currently (if it ever truly will be).
They are all completely half-baked products being rolled out before they’re ready because none of these billion dollar tech companies will allow a product to not immediately generate revenue.
I’m really enjoying seeing the backlash of everyone unanimously being sick of having this unfinished tech shoved down our throats.
One place where I found AI useful is in generating search queries in JIRA. Not having to deal with their query language every time I have to change a search filter, but being able to just use the built-in AI to query in natural language, has already saved me like two or three minutes in total over the last two months.
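For anyone who hasn’t used it: the kind of translation involved looks roughly like this (the query below is just an illustrative example, not output from JIRA’s actual feature):

```
Natural language: "open bugs assigned to me, newest first"
Generated JQL:    assignee = currentUser() AND type = Bug
                  AND statusCategory != Done ORDER BY created DESC
```

Trivial once you know JQL, but exactly the kind of thing you’d otherwise have to look up every time.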
I agree with this, my sentiments exactly. AI is getting pushed at us from every direction and we never asked for it. I like to use it for certain things, but I go to it when I need it. I don’t want it in everything, at least personally.
Many of us who are old enough saw it as an advanced version of ELIZA and used it with the same level of amusement until that amusement faded (pretty quick) because it got old.
If anything, they are less impressive because tricking people into thinking a computer is actually having a conversation with them has been around for a long time.
So you want to tell me they all spent billions and built huge data centres that suck more power than a small country so we can all play with it, say it was fun, and then toss it away?
This is kinda insane if that’s how it will play out
Not the first time this has happened. Even recently. See NFTs. Venture capitalists hear “tech buzzword” and throw money at it because if they’re lucky, it’s the next Google. Or at least it gets an IPO and they can cash out.
Yeah, but we could be doing something worthwhile with all these finite resources. It makes me a bit dizzy.
We could, but they don’t care about making the world a better place. They care about getting rich. And then if everything collapses, they can go to their private island or their doomsday vault or whatever and enjoy the apocalypse.
I like my AI compartmentalized, I got a bookmark for chatGPT for when i want to ask a question, and then close it. I don’t need a different flavor of the same thing everywhere.
The less technologically literate shout “AI is theft!”
Conspiracy theorists whisper of “government surveillance” and “brain-hacking chips”…
As a result, those who don’t understand new technology become fearful of it.
In itself, “AI” is a total buzzword.
I have no qualms about AI being used in products. But when you have to tell me that something is “powered by AI” as if that’s your main selling point, then you do not have a good product. Tell me what it does, not how it does it.
I mean, pretty obvious if they advertise the technology instead of the capabilities it could provide.
Still waiting for that first good use case for LLMs.
It is legitimately useful for getting started with using a new programming library or tool. Documentation is not always easy to understand or easy to search, so having an LLM generate a baseline (even if it’s got mistakes) or answer a few questions can save a lot of time.
So I used to think that, but I gave it a try as I’m a software dev. I personally didn’t find it that useful, as in I wouldn’t pay for it.
Usually when I want to get started, I just look up a basic guide and just copy their entire example to get started. You could do that with chatGPT too but what if it gave you wrong answers?
I also asked it more specific questions about how to do X in tool Y. Something I couldn’t quickly google. Well it didn’t give me a correct answer. Mostly because that question was rather niche.
So my conclusion was that it may help people who don’t know how to google, or who are learning a very well-known tool/language with lots of good docs, but for those who already know how to use the industry tools, it’s basically an expensive hint machine.
In all fairness, I’ll probably use it here and there, but I wouldn’t pay for it. Also, note my example was chatGPT specific. I’ve heard some companies might use it to make their docs more searchable which imo might be the first good use case (once it happens lol).
I’m actually working on a vector DB RAG system for my own documentation. Even in its rudimentary stages, it’s been very helpful for finding functions in my own code that I don’t remember exactly what project I implemented it in, but have a vague idea what it did.
E.g.:
Have I ever written a bash function that orders non-symver GitHub branches?
Yes! In your ‘webwork automation’ project, starting on line 234, you wrote a function that sorts Git branches based on WebWork’s versioning conventions.
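The retrieval half of that setup is surprisingly simple in shape. A toy sketch, assuming made-up project paths and using bag-of-words cosine similarity in place of a real embedding model and vector DB (a real setup would use something like sentence-transformers plus FAISS):

```python
# Rough sketch of the idea above: index short descriptions of your own
# functions, then find the closest match for a natural-language query.
# Paths and descriptions are hypothetical; the bag-of-words "embedding"
# is a stand-in for a real embedding model + vector DB.

import math
from collections import Counter

SNIPPETS = {
    "webwork_automation/sort_branches.sh":
        "bash function that sorts git branches by webwork versioning conventions",
    "billing/retry.py":
        "python decorator that retries a flaky http request with backoff",
    "infra/rotate_logs.sh":
        "bash script that rotates and compresses old log files",
}

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query: str) -> str:
    """Return the path whose description is closest to the query."""
    q = Counter(query.lower().split())
    return max(SNIPPETS,
               key=lambda path: cosine(q, Counter(SNIPPETS[path].lower().split())))

print(search("bash function that sorts git branches"))
# webwork_automation/sort_branches.sh
```

Crucially, the LLM only phrases the answer; the actual lookup happens against your own indexed code, which is why this avoids the “no sources” problem discussed upthread.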
I just recently got Copilot in VS Code through work. I typed a comment that said, “create a new model in SQLAlchemy named assets with the columns a, b, c, d”. It couldn’t know the proper data types to use, but it output everything perfectly, including using my custom-defined annotations; it just used the same annotation for every column, which I then had to update. As a test, that was great, but Copilot also picked up a SQL query I had written in a comment to reference as I was making my models, and it generated that entire model for me as well.
It didn’t do anything that I didn’t know how to do, but it saved on some typing effort. I use it mostly for its auto complete functionality and letting it suggest comments for me.
That’s awesome, and I would probably find those tools useful.
Code generators have existed for a long time, but they are usually free. These tools actually cost a lot of money; it costs way more to generate code this way than the traditional way.
So idk if it would be worth it once the venture capitalist money dries up.
That’s fair. I don’t know if I will ever pay my own money for it, but if my company will, I’ll use it where it fits.
What are these code generators that have existed for a long time?
Look up Emmet.
I’ve also found IntelliJ’s generators useful for Java.
Neither of those seem similar to GitHub copilot other than that they can reduce keystrokes for some common tasks. The actual applicability of them seems narrow. Frequently I use GitHub copilot for “implement this function based on this doc comment I wrote” or “write docs for this class/function”. It’s the natural language component that makes the LLM approach useful.
Wrote my last application with ChatGPT. Changed small stuff and got the job.
That’s because businesses are using AI to weed out resumes.
Basically you beat the system by using the system. That’s my plan too next time I look for work.
Please write a full page cover letter that no human will read.
I’ve built a couple of useful products which leverage LLMs at one stage or another, but I don’t shout about it cos I don’t see LLMs as something particularly exciting or relevant to consumers, to me they’re just another tool in my toolbox which I consider the efficacy of when trying to solve a particular problem. I think they are a new tool which is genuinely valuable when dealing with natural language problems. For example in my most recent product, which includes the capability to automatically create karaoke music videos, the problem for a long time preventing me from bringing that product to market was transcription quality / ability to consistently get correct and complete lyrics for any song. Now, by using state of the art transcription (which returns 90% accurate results) plus using an open weight LLM with a fine tuned prompt to correct the mistakes in that transcription, I’ve finally been able to create a product which produces high quality results pretty consistently. Before LLMs that would’ve been much harder!
I actually think the idea of interpreting intent and connecting to actual actions is where this whole LLM thing will turn a small corner, at least. Apple has something like the right idea: “What was the restaurant Paul recommended last week?” “Make an album of all the photos I shot in Belize.” Etc.
But 98% of GenAI hype is bullshit so far.
How would it do that? Would LLMs not just take input as voice or text and then guess an output as text?
Wouldn’t the text output that is supposed to be commands for action need to be correct and not a guess?
It’s the whole guessing part that makes LLMs not useful, so imo they should only be used to improve stuff we already need to guess.
One of the ways to mitigate the core issue of an LLM, which is confabulation/inaccuracy, is to have a layer of either confirmation or simply forgiveness intrinsic to the task. Use the favor test. If you asked a friend to do you a favor and perform these actions, they’d give you results that you can either/both look over yourself to confirm they’re correct enough, or you’re willing to simply live with minor errors. If that works for you, go for it. But if you’re doing something that absolutely 100% must be correct, you are entirely dependent on independently reviewing the results.
But one thing Apple is doing is training LLMs with action semantics, so you don’t have to think of its output as strictly textual. When you’re dealing with computers, the term “language” is much looser than you or I tend to understand it. You can have a “grammar” that is inclusive of the entirety of the English language but also includes commands and parameters, for example. So it will kinda speak English, but augmented with the ability to access data and perform actions within iOS as well.
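The “confirmation or forgiveness” layer above can be sketched concretely: treat the model’s output as a *proposed* action in a constrained grammar, validate it against an allowlist, and route anything non-forgiving through user confirmation instead of executing it. Everything below is hypothetical (action names, JSON shape) and just illustrates the vetting pattern:

```python
# Sketch of the "confirmation or forgiveness" layer described above.
# The model's output is treated as a *proposed* action, validated
# against an allowlist before anything runs. Action names are made up.

import json

ALLOWED_ACTIONS = {
    "create_album": {"forgiving": True},    # minor errors are livable
    "send_message": {"forgiving": False},   # must be confirmed by the user
}

def vet(model_output: str) -> str:
    try:
        action = json.loads(model_output)
    except json.JSONDecodeError:
        return "reject"  # not even valid JSON: never execute free text
    spec = ALLOWED_ACTIONS.get(action.get("name"))
    if spec is None:
        return "reject"  # not in the grammar: the model guessed an action
    return "execute" if spec["forgiving"] else "confirm_with_user"

print(vet('{"name": "create_album", "query": "photos from Belize"}'))  # execute
print(vet('{"name": "send_message", "to": "Paul"}'))                   # confirm_with_user
print(vet("Sure! I'll make that album for you."))                      # reject
```

The guess still happens, but it can only land inside a vocabulary of actions the system already knows how to do safely, which is the “favor test” made mechanical.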
Writing bad code that will hold together long enough for you to make your next career hop.
Haven’t you been watching the Olympics and seen Google’s ad for Gemini?
Premise: your daughter wants to write a letter to an athlete she admires. Instead of helping her as a parent, Gemini can magic-up a draft for her!
On the plus side for them, they can probably use Gemini to write their apology blog about how they missed the mark with that ad.
AI is garbage.
I’ll use it more when it has a proven, reliable use.
I literally uninstalled and disabled every AI process and app in that latest Galaxy AI update, which was the whole update, btw. My reasons are:
1- Privacy and data sharing.
2- The battery, CPU, and RAM cost of AI bloatware running in the background 24/7.
3- It was changing and doing things which I didn’t want, especially in the Gallery photo albums and camera AI modes.
I was considering a new Samsung phone - is that baked into it? (Assuming you’re talking Samsung anyway, based on the galaxy name)
Samsung is a nightmare, don’t purchase their products.
For example: I used to have a Samsung phone. If I plugged it into the USB port on my computer Windows Explorer would not be able to see it to transfer files. My phone would tell me I need to download Samsung’s drivers to transfer files. I could only get them by downloading Samsung’s software. Once I installed the software Windows Explorer was able to see the device and transfer files. Once I uninstalled the software Windows Explorer couldn’t see the device again.
Anything Samsung can do in your region to insert themselves between you and what you are trying to do they will do.
2nd this. Samsung is for people who hate themselves but can’t commit to ending it all.
This is a great summary I’m going to make use of
What’s a good brand then?
I use Pixel for Android. IMHO it’s the easiest to modify, and if you want it stock, it’s vanilla Google Android and it gets updates first and most often.
The software bloat is not dissimilar to what I’ve heard in the past, but I’d forgotten since I haven’t gone in depth researching yet. Which phones do we prefer today? Loosely off the top of my head: less bloat/intrusiveness, a nice camera, battery life enough for a day, and maybe on the smaller side to fit one hand are probably what I’ll be looking into.
I’ve got a Ulefone, I’m quite fond of it.
Ooo I haven’t heard of Ulefone before, I see some of their phones have a built in thermal camera? That sounds cool. How’s the Android/software experience? I’m not familiar with the Chinese phone lines, do they have their own bloat like Samsung?
No bloatware, although mine has a “feature” called Duraspeed I need to uninstall that restricts background applications, including fitness tracking ones I actually want running, and notifies me multiple times per day about this.
Them and Doogee I really like, especially since the phones don’t need to be in a case.
Apparently Pixel is the easiest to install an alternative OS on, going to start looking into that soon.
Care to share how you disabled every bit of AI in the phone?
Yep. No root required, nor recommended for Samsung devices. In short: just enable developer mode from phone settings, then connect with the adb platform tools to uninstall and disable any system app. You can also change lines, colors, phone behaviors, properties and look, install and uninstall apps which you couldn’t before… and so many other things.
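For the curious, the adb part looks roughly like this. Package names vary by device, region, and software version, so the Bixby package below is only an example; list what’s actually installed first:

```shell
# See what's installed (example: search for Bixby-related packages)
adb shell pm list packages | grep -i bixby

# Remove a system app for the current user (survives reboot; reversible
# with `pm install-existing` or a factory reset):
adb shell pm uninstall --user 0 com.samsung.android.bixby.agent

# Or just disable it instead of uninstalling:
adb shell pm disable-user --user 0 com.samsung.android.bixby.agent
```

Be careful which packages you remove; some system apps are dependencies for things you do want.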
Do you have to do this every time you update your phone?
No, of course not. Do it once, and the OS, security, and all updates work fine.
I get AI has its uses but I don’t need my mouse to have any thing AI related (looking at you Logitech).
Unsurprising. I have uses for LLMs and find them helpful, but even I don’t see why we should have a Copilot button on new keyboards and mice, or on LinkedIn’s post input form.
There are certainly great uses for LLMs. 99% of the time it is useless though.