That the government adds a “cause a car accident remotely” option to vehicles, so that when an offending individual is traveling by car, the government can remotely tweak the car and kill them.
On Teslas that’s a subscription feature
While it might be possible to remotely control a production car, modern cars are safe enough that you’d need a lot of systems to fail at once to make sure an accident would be fatal: the crumple zones not working as intended, the airbags not going off, the seat belts not locking properly, all at the same time. Or you could, I dunno, design the car so that the doors are only controlled electronically, and then make sure that if there’s a fire or the car is submerged, the electronics fail (e.g., Teslas).
Doors not opening in a fire should end the company that made them. Not sure how this company still exists.
Too high-level. It’s way cheaper to just hire a dude to cause an accident with a big vehicle like a truck; no passenger car can survive that.
Speaking from experience, I’d say a car being submerged is the least convenient time for it to stop working.
It certainly is.
I guess you can always count on Elon Musk to take trial and error too literally. Fortunately, in my case no Teslas were involved.
This is definitely possible, since you can actually control cars (at least some models) via an API (non-public, but the capability is there). Two security researchers at DEF CON found a way to control a vehicle remotely, including things like braking and steering, and eventually turned it into an exploit that worked remotely against any car of the same model. So, if they had wanted to, they could have stopped or turned the wheel of, IIRC, hundreds of thousands of cars around the world instantly. The cars are connected to the network over GSM, so you don’t even need to be anywhere near them.
It’s been a few years since I saw the video, but IIRC the vehicle controls are on a separate board that shouldn’t be reachable from the rest of the smart-vehicle system. However, they were able to reverse engineer the firmware update mechanism and abuse it as a bridge, using it to patch the firmware and bring it under their control. And then they discovered that they could actually trigger the update remotely.
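To make the “bridge to the vehicle bus” part a bit more concrete: once code running on the infotainment/telematics side can write to the CAN bus, issuing a command is just sending a short CAN frame. Below is a minimal illustrative sketch using Linux SocketCAN via the python-can library; the arbitration ID and payload bytes are made-up placeholders, not anything from the actual exploit (real vehicles use proprietary, manufacturer-specific IDs, often gated behind diagnostic sessions or message counters/checksums).

```python
# Illustrative sketch only: what "talking to the CAN bus" looks like once a
# compromised ECU can reach it. The arbitration ID and data bytes below are
# invented placeholders, not a real vehicle command set.
import can  # pip install python-can

# Open the bus through Linux SocketCAN (e.g. a bench setup exposing can0).
bus = can.interface.Bus(channel="can0", interface="socketcan")

# A single CAN frame: an 11-bit arbitration ID plus up to 8 data bytes.
msg = can.Message(
    arbitration_id=0x123,           # placeholder ID
    data=[0x01, 0x00, 0x00, 0x00],  # placeholder payload
    is_extended_id=False,
)

bus.send(msg)
print("Frame sent on", bus.channel_info)
bus.shutdown()
```

The point of the researchers’ bridge was exactly this: the head unit wasn’t supposed to be able to send frames like these, but after the patched firmware was in place, it could.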