These are going to be great for lots of types of workers, especially in nursing and elder care. More broadly, I wonder whether a day will come when human workers will need them just to compete with actual robot workers?
Given Musk’s record of non-stop bullshi***ng, I’m prepared to believe the worst about any offstage shenanigans making this look more impressive than it was.
The robotaxi and FSD announcements were deeply disappointing. Just more vague promises about the future. It’s madness to me that this man has been allowed to have such a senior and pivotal position in America’s space program.
Although I’m using AI more and more for writing-related tasks, I still find it constantly making rudimentary errors of logic. If it is advancing as this research paper claims, why are we still seeing so many of these hallucination errors?
Quite apart from the issue of Google as a monopoly, I’ve really noticed a lot of their services going downhill in the last 18 months or so. Search gets steadily worse. Their AI voice transcription services, and grammar and spell checks, are nowhere near as good as free open source alternatives. Not only that, but bizarrely they’ve actually gotten worse than they used to be, whereas everywhere else in these fields there is constant improvement.
AR/VR always seems on the cusp of taking off, yet never seems to actually do so.
There are a few other new heavy-lift rockets in development around the world. Some people think SpaceX’s Starship will make them obsolete, but it doesn’t seem like it will be ready anytime soon.
If someone can build robotic systems made entirely of 3D-printed components, that seems very possible.
Yes, that is true by many dictionary definitions. But does it matter, if this process of recursive self-improvement has truly started? Is there a scenario where this continuous improvement in the chips is what brings true AI about, rather than human design?
For anyone familiar with the ideas behind what Ray Kurzweil called ‘The Singularity’, this looks awfully like its first baby steps.
For those who don’t know, the idea is that when AI gains the ability to improve itself, it will begin to become exponentially more powerful, as each step makes it even better at designing the next generation of chips, which in turn make it more powerful still.
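The compounding dynamic described above can be illustrated with a toy model. This is only a sketch of the exponential-growth intuition; the function name and the 10%-per-generation figure are illustrative assumptions, not anything from the research itself.

```python
# Toy model of recursive self-improvement: each chip generation is
# designed by the previous one, so gains compound multiplicatively
# rather than adding up linearly.
def capability_after(generations: int, gain_per_step: float = 1.1,
                     start: float = 1.0) -> float:
    """Return capability after `generations` rounds, where each round
    improves on the last by a fixed fraction (here, an assumed 10%)."""
    capability = start
    for _ in range(generations):
        capability *= gain_per_step  # the new generation builds on the old
    return capability

# Even a modest 10% gain per generation more than doubles
# capability after 8 generations (1.1 ** 8 is roughly 2.14).
```

The point of the sketch is just that a constant *percentage* improvement per generation produces exponential growth overall, which is why the baby steps look small now.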
The model family is “a new suite of state-of-the-art multimodal models trained solely with next-token prediction,” BAAI writes. “By tokenizing images, text, and videos into a discrete space, we train a single transformer from scratch on a mixture of multimodal sequences”.
Every single time it looks like closed Big Tech AI systems might steal a lead, open source is never far behind, snapping at their heels. Now it seems it’s the same story with multimodal AI.
I’m using it more and more and find it very useful. I do a lot of writing for work; AI voice transcription and AI grammar checks are invaluable, not to mention having an AI voice read my writing back as a form of copy editing.
Also great for visual stuff, and for providing sound for videos.
However, the hallucination problem is a real roadblock. I would never want to trust the current models of AI with an important decision.
When I think about this issue, I sometimes place future scenarios on a scale of 1 to 10, with 10 being ‘most confident it will occur’ and 1 being ‘least able to definitively predict’.
I give UBI a 4 on that scale. It may well occur, but there are different ways of achieving the same goal, so who knows.
One of the few things I rank at 10 is that the day is coming when AI and robotics will be able to do most work, even jobs not yet invented, for pennies an hour.
The logical follow-on is that the day will also have to come when society realizes this is happening, understands it, and begins to prepare for its new reality. This is going to seem scary to many people; they will see only the destructive aspects of it, as the old ways of running the world crumble.
This is how I look at what this research is talking about: signs of this awakening becoming more widespread. We badly need politicians who will start telling us what the world is going to be like afterwards, and painting a hopeful vision of it.
In Europe and North America, food loss adds up to around 16%.
Everywhere in the world, hunger and malnutrition are problems of distribution, not of production.
If you are familiar with the idea of AI taking off into the realms of superintelligence, one of the steps that is supposed to accompany that is recursive self-improvement. In other words, when AI can write its own code to improve itself, it continuously gets more and more powerful. I wonder if examples like this are the tiny first baby steps of that.
I’m surprised drone deliveries haven’t taken off more yet. These guys in Germany look like they are on to a winning solution too.
https://www.siliconrepublic.com/machines/wingcopter-drone-delivery-groceries-germany
Microsoft has cash reserves of $75 billion.
Microsoft: if you really want to convince us that nuclear power is part of the future, why can’t you use some of your own money? Why does every single nuclear proposal rely on bailouts from taxpayers? Here’s a thought: if you can’t pay for it yourself, just pick the cheaper option taxpayers don’t have to pay for, namely renewables and grid storage. The stuff that everybody else, all over the world, is building nearly 99% of new electricity generation with.
“INBRAIN’s BCI technology was able to differentiate between healthy and cancerous brain tissue with micrometer-scale precision.”
This breakthrough with the surgery is very interesting, but what is even more interesting to me is their wider goals. They wonder if it will be possible to use this approach to treat many brain disorders, including mental health conditions.
This also has the potential for a direct connection between artificial intelligence and our brains. That has long been speculated about in sci-fi. This approach has a chance of starting to make it a reality.
When I look at the potential in current advances in medicine, and at the idiocracy that passes for “politics” and “debate” in some quarters, I wonder when more people are going to wise up.
Training and educating surgeons is the biggest bottleneck in the availability of their skills, and thus in the number of surgeries people can have. Here we have the potential to smash through that. Procedure by procedure, as robots master individual types of surgery, suddenly the only bottleneck is the number of robots, a vastly easier and quicker problem to solve than increasing the supply of trained human surgeons.
For sure there is a certain amount of hype here. That said, much of their thinking seems like it could be sound. But I want to see stuff like this working in practice, not just in theory. I guess we will have to wait and see.
I think part of this increase may be down to greater awareness of mental health issues. Mental health problems that were not understood, or were ignored, in decades past are much more clearly seen now.
However, it seems undeniable that life has gotten worse across the Western world for younger generations. Economic independence of any kind is impossible without going into soul-crushing debt first. In many ways, it bears similarity to the indentured servitude of the past. Meanwhile, you get lectured by a generation that grew up with free education, cheap rents, and jobs that were easy to get and could support a whole family.
If much of this is caused by economic factors, will the soon-to-be widespread automation of more of the economy make things better or worse? My guess is that in the short term they will get worse, until we arrive at whatever new economic model follows.
Driving jobs are about to disappear to self-driving vehicles. They were one of the last refuges in which the less educated, especially less educated young men, could achieve a degree of economic independence. The mental health consequences of that category of job disappearing forever may be enormous.