What bothers me is that this has to be designed under the assumption that the other car could be malicious and that any input received could be intentionally deceptive. I'm just not convinced that auto (or aerospace) companies have a high level of competency in thinking like this. They're used to thinking about physical defects, weather effects, bad users, and the like. It's very different if you have internet connected cars that could be (possibly in bulk) remotely hacked and given instructions to intentionally disrupt other cars.
Everything I have ever seen about vulnerabilities in car software systems has indicated a very poor understanding of the threat landscape on the part of car manufacturers and an embarrassingly weak ability to competently deal with these threats. So far this has been understandable since cars have been minimally networked, but going forward, I agree with GP that the appropriate term is "terrifying".
I think it’s one thing if you can spoof an input remotely and one hostile actor can target many vehicles simultaneously.
But if we can be certain, based on the physics of the system, that we are talking to the car in front of us, then the calculus changes. There are plenty of ways to commit vehicular homicide today; V2V doesn't seem like a particularly "worse" way, and frankly it's probably one of the most traceable ways you could try to hurt someone.
So while you certainly need to defend against broken and malfunctioning input, I'm not quite convinced that malicious input is a case that needs to be specifically defended against.
The vehicle will have a "flight envelope" based on its own local sensors and rules, just like modern aircraft, which don't allow even bad inputs to stall the plane. Inputs from V2V would not let the car leave the envelope any more than autopilot inputs would. I believe the steering wheel would still be allowed to exceed the envelope, for as long as there is a steering wheel.
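To make the idea concrete, here's a minimal sketch of that envelope-limiting approach. All names and limits are hypothetical, invented for illustration; no real V2V stack works exactly like this:

```python
# Hypothetical sketch: clamp untrusted V2V requests to a "flight envelope"
# derived from the car's OWN local sensors. Names/limits are illustrative.

from dataclasses import dataclass

@dataclass
class Envelope:
    """Safe command bounds computed locally, never from remote input."""
    max_decel_mps2: float   # hardest braking the local sensors justify
    max_steer_deg: float    # largest steering change allowed at current speed

def clamp_v2v_command(requested_decel: float,
                      requested_steer: float,
                      env: Envelope) -> tuple[float, float]:
    """A remote (possibly spoofed) request can never exceed the local envelope."""
    decel = min(max(requested_decel, 0.0), env.max_decel_mps2)
    steer = min(max(requested_steer, -env.max_steer_deg), env.max_steer_deg)
    return decel, steer

# A spoofed "brake at 50 m/s^2 and swerve hard left" request gets clipped
# to whatever the car would have allowed the autopilot to do anyway:
env = Envelope(max_decel_mps2=8.0, max_steer_deg=5.0)
print(clamp_v2v_command(50.0, -30.0, env))  # (8.0, -5.0)
```

The point of the sketch is that the worst a malicious transmitter can do is push the car to the edge of behavior its own sensors already permit, not beyond it.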
I think vehicular homicide is a lot less likely than pranks (causing traffic jams, etc.), or people trying to game the system so that they always get to pass through intersections without stopping.
However, intentionally causing injury isn't something that should be ruled out, either. It's only "traceable" if someone is sending the signal from their own car, registered under their real name. If someone pulls the transmitter from a junked car (or builds their own, etc.), they could, e.g., conceal it near an intersection, wait a day or two, and trigger it remotely, or attach it to the underside of someone else's vehicle, etc.
Someone could also jam the signals to potentially cause everything to stop working.
I'm with the crowd that thinks this is an inherently bad idea. The data is entirely untrusted, which makes it essentially useless for determining anything other than "there seems to be a radio transmitter at a particular location", and that's only if there are enough sensors to triangulate signal sources accurately.
I think auto manufacturing margins are slim enough that they will cut corners here, simply by not hiring the best people and not testing long enough. It's hard to know when you have a working system, outside of astronomically expensive methods like formal verification, but it is really easy to know when you are out of budget.