This is a very bad take. LLMs appear to be at their limit. They're autocomplete and are only as good as their inputs. They can't be depended on for truth. They can't even be trusted to do math.
LLMs work as a place to bounce things off of, but they still require editorial work afterward, even when they're working at their best.
LLMs take huge amounts of power: to build, to keep running, and to correct their output.
In general, LLMs don't significantly reduce labor, and they are still very costly.
Even the most basic assembly line multiplies someone's output. The best assembly lines remove almost all human labor. Even bad assembly lines are wholesale better than individual assembly.
As long as it's an LLM, I don't believe it will ever be "useful". We need a different technology to make this sort of assistance useful.
Here's the big difference: automated assembly lines do a job better than the average human can. LLMs do the job consistently worse than the average human would.
Not yet.
Automated assembly lines had a net positive impact on productivity, though.
So do LLMs. They don't sleep or eat, and they can work as fast as hardware allows.
But the output of an automated assembly line is useful and desired.
And if you're not careful, you can fuck up your fingers in either case.
While that may not be the case right now, as someone who has looked into the topic I can definitely say it has similar potential. It's just that a lot of companies duct-tape AI onto their product, and the result is usually shitty. But if you ignore all those "projects" that are obviously meant to fail, there are some promising AI projects and applications made by people who actually understand what they're doing, including the limitations and upsides of the technology, and they may well build products that are indeed useful.
Sorry for the indecipherable wall of text; it's early in the morning for me.