When I think about this issue, I sometimes place future scenarios on a scale of 1 to 10, with 10 being ‘most confident it will occur’ and 1 being ‘least able to predict’.
I give UBI a 4 on that scale. It may well occur, but there are different ways of achieving the same goal, so who knows.
One of the few facts I rank at 10 is that the day is coming when AI and robotics will be able to do most work, even the jobs not yet invented, for pennies an hour.
The logical follow-on is that the day will also have to come when society realizes this is happening, understands it, and begins to prepare for its new reality. This is going to seem scary to many people; they will see only the destructive aspects of it as the old ways of running the world crumble.
This is how I look at what this research describes: signs of this awakening becoming more widespread. We badly need politicians who start telling us what the world is going to be like afterwards and paint a hopeful vision of it.
Seems reasonable: people who’ve used generative AI are more likely to know how good it really is. I find that most of the people who dismissively call it all “slop” haven’t actually tried using it much.
I’m using it more and more and find it very useful. I do a lot of writing for work, and AI voice transcription and AI grammar checks are invaluable, not to mention having an AI voice read my writing back as a form of copy editing.
It’s also great for visual work and for providing sound for videos.
However, the hallucination problem is a real roadblock. I would never trust the current AI models with an important decision.