A team of researchers from prominent universities, including SUNY Buffalo, Iowa State, UNC Charlotte, and Purdue, was able to turn an autonomous vehicle (AV) running the open-source Apollo driving platform from Chinese web giant Baidu into a deadly weapon by tricking its multi-sensor fusion system, and the researchers suggest the attack could be applied to other self-driving cars.
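To make the failure mode concrete, here is a minimal, hypothetical sketch of a naive camera/LiDAR late-fusion gate; it is not Apollo's actual pipeline, and every name, weight, and threshold below is an illustrative assumption. The point it shows: if an adversarial object can be crafted so that both sensor channels score it below their detection thresholds at the same time, the fused output never reports an obstacle, so the planner never brakes.

```python
# Illustrative toy model of multi-sensor fusion (NOT Apollo's real code).
# Assumption: fused confidence is a weighted mix of per-sensor scores, and
# the planner only reacts to obstacles above a fused threshold.

from dataclasses import dataclass


@dataclass
class Detection:
    camera_score: float  # 0..1 confidence from the camera model
    lidar_score: float   # 0..1 confidence from the LiDAR model


def fused_confidence(d: Detection, w_camera: float = 0.5, w_lidar: float = 0.5) -> float:
    """Naive late fusion: weighted average of per-sensor confidences."""
    return w_camera * d.camera_score + w_lidar * d.lidar_score


def planner_sees_obstacle(d: Detection, threshold: float = 0.5) -> bool:
    """The planner only brakes for obstacles the fusion layer reports."""
    return fused_confidence(d) >= threshold


# A benign obstacle: both sensors detect it, so fusion passes it to the planner.
print(planner_sees_obstacle(Detection(camera_score=0.9, lidar_score=0.8)))  # True

# An adversarial object crafted to suppress BOTH channels at once:
# the fused score stays below threshold, so the planner never reacts.
print(planner_sees_obstacle(Detection(camera_score=0.3, lidar_score=0.2)))  # False
```

Under this (assumed) design, fusion only adds safety if the two sensors fail independently; an object optimized against both channels jointly removes that assumption, which is the gist of the attack described above.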
People seem to hold computers to a higher standard than they hold other people performing the same task.
I think human responses vary too much for any single attack to work on all of them: could you follow a strategy that makes 50% of human drivers crash reliably? Probably. Could you follow a strategy that makes 100% of autonomous vehicles crash reliably, given that they all run identical software? Almost certainly.