
Not the OP, but the precedents are here: always-connected cars (e.g. Teslas): check. Being able to take control of a car via the CAN bus [1]: check. The only thing missing in the exploit chain is something that allows the attacker to jump from the modem/infotainment system to the CAN bus or an ECU.

[1] https://www.wired.com/2016/08/jeep-hackers-return-high-speed...
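For a concrete sense of what that CAN-bus step looks like once an attacker can put frames on the bus, here's a minimal sketch using the python-can library. The channel name, arbitration ID and payload are made-up placeholders, not any real vehicle's messages.

    import can

    # Open a SocketCAN interface (e.g. a bench rig or a virtual 'vcan0' bus).
    bus = can.interface.Bus(channel="vcan0", interface="socketcan")

    # One CAN frame: an 11-bit arbitration ID plus up to 8 data bytes.
    # Classic CAN has no sender authentication, so anything that can
    # transmit on the bus can emit frames like this -- which is why the
    # jump from the infotainment side to the CAN side is the critical step.
    msg = can.Message(
        arbitration_id=0x123,           # placeholder ID
        data=[0x01, 0x02, 0x03, 0x04],  # placeholder payload
        is_extended_id=False,
    )
    bus.send(msg)
    bus.shutdown()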




Tesla uses a pretty different architecture from the dumpster fire that was OnStar.

Been a while since I looked into the details, but from what I recall only very limited, well-scrutinized communication is allowed to bridge from the Ethernet subsystem over to the CAN bus.
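
As a rough illustration of that idea (an invented sketch, not Tesla's actual gateway), the bridge would only forward a small, fixed set of message types and drop everything else:

    # Hypothetical whitelist: message IDs the Ethernet/infotainment side may
    # originate, with the maximum payload length accepted for each.
    ALLOWED_TO_CAN = {
        0x2A0: 2,  # e.g. HVAC setpoint (made-up ID)
        0x2A1: 1,  # e.g. seat heater level (made-up ID)
    }

    def forward_to_can(arbitration_id: int, payload: bytes) -> bool:
        """Forward a frame from the Ethernet side to the CAN side only if it
        is whitelisted and well-formed; drop everything else."""
        max_len = ALLOWED_TO_CAN.get(arbitration_id)
        if max_len is None or len(payload) > max_len:
            return False  # dropped at the gateway
        # ...hand the frame to the CAN driver here...
        return True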


Well, fundamentally they can be breached. Since the self-driving AI system can be updated (afaik), vehicle control can certainly be remotely altered.

I guess those risks are a bit like nuclear launch risks. Someone, somewhere (who controls the right key) could cause an enormous accident. You just hope the system as a whole has enough redundancy to make that sufficiently unlikely. In nuclear weapons systems, multiple authorizations are required for a launch. Then of course the underlying system has to be robust enough that none of those measures can be bypassed.
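
A software analogue of that "multiple authorizations" idea would be requiring an over-the-air update to carry several independent signatures before any ECU accepts it. A minimal sketch of the counting logic using the Python cryptography package's Ed25519 API (the threshold and key set are invented for the example):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    THRESHOLD = 2  # "two-man rule" for firmware, chosen arbitrarily here

    def enough_signatures(update: bytes,
                          signatures: list[bytes],
                          signer_keys: list[Ed25519PublicKey]) -> bool:
        """Accept the update only if at least THRESHOLD known signer keys
        produced a valid signature over it."""
        valid = 0
        for key in signer_keys:
            for sig in signatures:
                try:
                    key.verify(sig, update)
                    valid += 1
                    break  # count each key at most once
                except InvalidSignature:
                    continue
        return valid >= THRESHOLD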

There is no fundamental barrier that I can see -- we know how to build incredibly robust systems using methods like formal verification of software and plain thorough quality assurance and testing. In a way, much of the world is already exposed to these kinds of risks: most individuals' phones, laptops, and even industrial computers can be remotely updated and incapacitated, creating a risk of generalized trouble.

What matters is allocating an adequate amount of resources to those risky but extremely remote scenarios. I think there needs to be oversight guaranteeing that every system gets verification, testing and attention proportional to its risk to society. The usual punitive incentives don't work very well in those cases. I don't think we currently have any agencies with this wide an outlook.


> Tesla uses a pretty different architecture from the dumpster fire that was OnStar.

And probably other manufacturers will use their own designs.

Unfortunately, this is one of those situations where we have to be lucky every time and the bad guys only have to be lucky once. Given the number of manufacturers whose systems have demonstrably been compromised in the past, the odds of avoiding catastrophic compromises as cars gain more autonomous features don't look great.


I guess we should just give up and leave everything wide open then, right? No point in trying to do defense in depth if someone will eventually hack something?


I didn't say that at all.

One plausible alternative is that we don't deploy systems like this, which have the ability to cause widespread damage including loss of life in moments, until we have worked out how to secure them properly against a single point of catastrophic failure like that. There are things that could be done to mitigate that threat in the meantime.

This is set against the knowledge that existing human-driven vehicles are involved in many tragic accidents per year, also causing widespread damage including loss of life. But there are other things that can be done to limit the damage there as well, without relying on autonomous vehicles as a silver bullet.


Try rental cars.



