I don't have an answer for you, but maybe it stems from some sort of idea of identity? Maybe the car sends a message authenticated with your license ID. Cars are (hand waves) able to authenticate the validity of a message. If you send a message saying you were in a collision, that comes with the side effects of reporting a collision: emergency contacts and police may be notified, and maybe the system pings the cars registered to you to check their sensor data for evidence of an accident. Your insurance rates may go up. If it turns out you were lying, maybe your car gets flagged for repairs the next time you try to re-register your tags. Or maybe you are flagged for trolling and you get the "boy who cried wolf" effect.
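To make the hand-waving slightly more concrete, here's a minimal sketch of that kind of authenticated collision report. The shared key, field names, and thresholds are all made up for illustration; a real deployment would use per-vehicle asymmetric keys and a PKI rather than a symmetric secret:

```python
import hashlib
import hmac
import json
import time

# Hypothetical shared secret provisioned at vehicle registration.
# (Illustrative only -- a real system would use per-vehicle key pairs.)
VEHICLE_KEY = b"secret-issued-with-license-plate"

def sign_message(vehicle_id: str, event: str, key: bytes) -> dict:
    """Attach an HMAC so receivers can tie the report to a registered identity."""
    body = {"vehicle_id": vehicle_id, "event": event, "ts": time.time()}
    payload = json.dumps(body, sort_keys=True).encode()
    body["mac"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return body

def verify_message(msg: dict, key: bytes) -> bool:
    """Recompute the MAC over the message body and compare in constant time."""
    claimed = msg["mac"]
    body = {k: v for k, v in msg.items() if k != "mac"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed, expected)

msg = sign_message("ABC-1234", "collision", VEHICLE_KEY)
assert verify_message(msg, VEHICLE_KEY)   # genuine report passes
msg["event"] = "all_clear"                # tampered report fails
assert not verify_message(msg, VEHICLE_KEY)
```

Of course, authentication only proves who sent the report, not that it's true, which is why the cross-check against the car's own sensor data matters.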
Sure, the actual vehicle systems had better work and be properly authenticated, and that could be achieved in numerous ways... but none of this stops a hacker war-driving with a laptop, or sitting on an overhead bridge, or on the other side of the world.
There is no doubt that intelligent highways would be a massive target for terrorist and cyber attacks. However, one thing in our favour is that there is so little technology already in place: they won't be burdened with decades' worth of insecure legacy systems, so at least they'll be able to start building infrastructure in full knowledge that it had better be secure.
In some ways we already have methods in place to mitigate this. They have, of course, been exposed as insecure when implemented badly[1][2]. That being said, there is prior art on this, in nature.
You have your own trusted sources of information, and you have secondary sources of information, to which you apply trust levels. Your trusted sources are your own senses: sight, hearing, touch, smell, etc. Something may be communicated to you about your condition or environment through other channels, but it's silly to take that at face value without confirming it with your own senses. If a trusted friend next to you says you're about to walk into a wall, you look ahead. If you can't see a wall, you might slow down and exercise caution until you figure out why you were given that information, but unless you can stop completely without a problem, you don't do that without cause.
I think it's a mistake to think of cars in the future communicating as a swarm. They need to be able to function independently, and also to take in extra information from the group when the group is available and make decisions on it. That's less swarm behavior than social behavior, so we should consider groups of cars on the road as social groups, to which the same information dynamics apply.
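That "trust your own senses first, weight the group second" policy can be sketched as a toy decision function. The trust weights and thresholds here are made-up illustrations, not a real control policy:

```python
def decide_action(own_sensor_sees_hazard: bool,
                  peer_warnings: list[float]) -> str:
    """Combine first-hand sensing with trust-weighted peer reports.

    peer_warnings holds the trust level (0.0-1.0) assigned to each
    peer that reported a hazard ahead. Thresholds are illustrative.
    """
    if own_sensor_sees_hazard:
        return "brake"               # first-hand evidence: act immediately
    credibility = sum(peer_warnings)
    if credibility >= 1.5:
        return "slow_and_verify"     # several trusted peers agree
    if credibility > 0:
        return "caution"             # unconfirmed chatter: stay alert
    return "proceed"

assert decide_action(True, []) == "brake"
assert decide_action(False, [0.9, 0.8]) == "slow_and_verify"
assert decide_action(False, [0.3]) == "caution"
assert decide_action(False, []) == "proceed"
```

The point of the sketch is the ordering: a peer report alone never triggers a full stop, it only lowers speed until the car's own sensors can confirm or refute it, which is exactly the wall-and-friend behavior described above.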
Exactly, and it is not just this one case: serious, avoidable security cock-ups have happened every time a new class of device has been put on the internet, with the IoT DDoS debacle being perhaps the latest example.
Sadly, the lessons learned in legacy systems do not necessarily address the problems we have now. Older systems were typically combating noise, signal degradation, power consumption, processor speeds, poor user interface technology, etc. Often they were extremely proprietary and if they included any kind of security it was commonly by obscurity, and even if not, the crypto used would be easily crackable by a modern script kiddie packing Aircrack or similar.
The lesson we do need to learn is future-proofing: whatever infrastructure we install on the millions of miles of public highway will need to be serviceable for decades.
So, a malicious actor could spoof the vehicle of somebody they dislike, causing their insurance rates to go up, or pave the way for a later incident in which the person is ignored when police/emergency services are contacted?