The pricing on these bug bounties always blows my mind.
If this hack had been exploited, Tesla's market capitalization would've taken a multi-million, if not multi-billion, dollar hit. And here they are, paying out relative chump change to the guy who alerted them to it.
But that's the point. Who's out there that would exploit this because they thought $50,000 wasn't worth it, but would change their minds for $1,000,000?
Realistically there are only two types of people who would maliciously exploit something of this magnitude: the mentally unstable (people who just like to cause chaos), and state-sponsored actors attempting to disrupt other nations. Neither of those groups seems particularly likely to change their mind for an extra zero or two.
The "pay more than the black market will" model works for smaller bugs, but for ones like this that would immediately get every three letter agency on the planet trying to find you, the $50,000 isn't a valuation of the worth of that bug report, it's a gratuity. And for the average bug reporter, that's an extremely nice one.
Can they pay more? Yes, absolutely. Should they? Probably, yeah. Do they have any reason to? No.
The solution to this is to have legal requirements for security, and extremely heavy fines for having released dangerous software (some portion of this fine financing a similar bug bounty program). Take the option of how much money to hand out away from the companies, and they'll be incentivised to take security much more seriously in the first place.
Of course, this requires lawmakers to have a basic understanding of technology, so we're at least 20 years and 3 major catastrophes away from getting anywhere near that actually occurring.
>Can they pay more? Yes, absolutely. Should they? Probably, yeah. Do they have any reason to? No.
Yeah, they do. It's a self-declared measure of how seriously they take their security. They valued avoiding the takeover of their fleet at 0.0000125% of their market cap.
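(To sanity-check that percentage: a rough back-of-the-envelope, where the ~$400B market cap is my assumption for the period in question, not a figure from the article.)

    # Rough sanity check of the parent's figure; the market cap is an assumed ~$400B.
    bounty = 50_000
    market_cap = 400_000_000_000
    print(f"{bounty / market_cap:.7%}")  # prints 0.0000125%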
The reason I left LastPass was because the bug bounty for a bug that could expose everybody's passwords just by visiting a website was, like, about $1k. The company became dead to me in a split second and I wanted out immediately.
....and it's not doing too well these days, from what I can tell.
What was the half-life on that vulnerability? From the moment Lastpass wrote whatever the fix was to the point at which attackers can no longer exploit it afresh, how much time elapses? If it's a serverside fix, so that the number is something like "a day or so while it's deployed", that's your answer about why nobody is outbidding Lastpass for this bug.
Tell a story about the "scammer" or "fraudster" that buys this. What do they do with it? What do they win, how many times do they win it, and with what likelihood of success? How much work goes into realizing value from the vulnerability before they get their first dollar, or whatever it is they're getting? Is that work already done, or do they have to do it speculatively on this one vulnerability?
Reason your way through it all the way, and you'll see why real vulnerabilities sold on darker markets are paid in tranches, and why nobody pays real money for one-off serversides.
> Who's out there that would exploit this because they thought $50,000 wasn't worth it, but would change their minds for $1,000,000?
The article says the max bug bounty was eventually increased to $15k, so the official cap was even lower than that at the time, even though they gave him $50k. Kudos to whoever at Tesla stepped up and gave him extra.
I'd seriously consider not reporting something like that for $15k unless I was worried about someone else exploiting it and having a trail of access logs lead back to me. People that discover bugs like that with massive destructive potential must be on every TLA list on the planet afterwards and I don't think that's worth $15k.
$1 million is life changing and puts you into a higher social class. IE: Poor == probably a criminal. Rich == probably not a criminal. It's sad, but that's the way it works and I'd rather be rich if I were on a short list of "dangerous" hackers.
It's pretty silly to suggest that a state-level adversary needs the help of the person who stumbled across the baked-in credentials in an obfuscated Python binary to accomplish a CNE task. If a state wants to target Tesla, someone will submit a petty cash request to contract someone else to develop Tesla vulnerabilities.
If you're able to sell a Tesla vulnerability to the supply chain of a state-level actor, it's probably because they're already actively exploiting Tesla vulnerabilities. By the time random discoveries like this are part of the supply chain, the supply chain is already chugging along.
I think a good rule of thumb is that no serious actor --- not a state, not a crime ring, not a competitor --- does speculative engineering to accept and operationalize a third-party vulnerability. If they're buying, it's because they already have an operational infrastructure to drop the bug into. When you're figuring out the dollar value a vulnerability has, start by telling yourself the story about the entity that already has a bug just like it, is exploiting it for some articulable purpose, and wants a replacement or 10 in the hopper for later. (I don't think this is a perfectly reliable heuristic, but it's where most of this kind of thinking should start).
> It's pretty silly to suggest that a state-level adversary needs the help of the person who stumbled across the baked-in credentials in an obfuscated Python binary to accomplish a CNE task.
I didn't mean they'd want your help. I meant you might end up on some hacker watchlist. You'll get extra attention and scrutiny from government agencies which wouldn't have much upside IMO. Maybe at airports you'll be randomly selected more often so security agencies can look at your devices and try to clone them.
Would you really feel 100% comfortable going to China after being in the news as the person that could have controlled the entire Tesla fleet? I think there are hard to measure social costs for gaining that kind of notoriety and current bug bounty programs aren't properly compensating for them.
Agreed. Another could be solo blackhats who just want to make money and have no state sponsorship. Tangential, but I also hesitate to create such a massive bucket for "mental instability" like that. It's easy to see someone do something that's difficult to understand, or against what we would do ourselves, and just say "well, they're mentally unstable." Definitely the case for some, but it seems like a lazy dismissal with no attempt at, or interest in, understanding.
I was using "maliciously exploit" here to describe what would basically be the worst case scenario of such a bug (instructing every Tesla to deliberately crash at high speed). I don't think it's in any way a stretch to characterise someone who would do that as mentally unstable.
Of course there are many other ways you could exploit such a bug, but in the context of a "multi-billion dollar" event, it's really only The Big One that's in frame here.
Someone could be sociopathic enough to cause the crashes, but still prefer the money. It definitely seems like you could negotiate for more if you can play the part of that sociopath and don't mind a little bit of extortion.
Those two types in particular are examples of actors that are willing to break the law in this way. Competitors aren't going to contract a hack - like the parent comment said, every 3 letter agency would be after you and suddenly your executives are going to prison.
Public confidence is priceless in the automotive space. The risk of bleedover onto the market segment as a whole would make that an incredibly risky (read: stupid) stunt for a competitor to pull, not to mention the legal and reputational risk if they're discovered.
> Who's out there that would exploit this because they thought $50,000 wasn't worth it, but would change their minds for $1,000,000? […] people who just like to cause chaos, and state-sponsored actors […].
Makes me think of the recent Twitter account take-overs. The amateur attackers acquired access which could have caused enormous damage, and used it to scam ~$100,000. The difference between $50k and $1m in bounty could have turned them towards responsible disclosure.
(That said: they probably hoped to scam much more. And they got caught. And the way they obtained access was probably way out of the scope of a bug bounty program / the law.)
You don’t need to pay more than the black market would, but the more you pay the more time people can spend on it. If the bounties are high enough, you can attract more, and better, white hats to test your system for you. The black hats are out there anyway doing what they will do.
I agree to an extent. I think security obligations are good, but they should be practical. I know the privacy activists will hate this because it's something that works, but if we track users IRL, and banks already have the ability to reverse transactions, then the stakes are much lower (because theft can be identified) than with something like remotely updated cars or medical devices, which can be patched, but not before a lot of people have died. Software is advancing rapidly in a way that's valuable; the goal should be to preserve that except when it kills people in the real way.
For a vulnerability of that scope, I assume selling it to a short-seller to publish in bad faith would be more valuable than selling on the actual black market anyway. Hell, the impression I get is that unless you're fairly well connected already, selling large $ value hacks on the black market isn't exactly easy (see Twitter hack).
I don't know if this is strictly legal either, but definitely more plausible deniability.
> I don't know if this is strictly legal either, but definitely more plausible deniability.
Presumably you're into the system by the time you've discovered the exploit, so you're on the wrong side of the CFAA in the US and IMO the law would come down on you _hard_ if you acted in bad faith like that.
Even failing to report it might ruffle enough feathers for the company to use their political connections to have you prosecuted. I suspect that's also part of the reason the bounties are so low.
Alternatively, document it with trusted timestamps and don't report it. Then if someone else exploits it you could parlay the media frenzy into a lot of publicity that's probably worth more than the tiny bounties many companies pay.
"Oh, we discovered that 2 years ago, but the bug bounty program didn't make it worth reporting. Want to buy a security audit?"
I wonder if at some level of bounty payment, you run into the problem of encouraging people to introduce bugs to get a bounty. Probably no one with commit access in a major tech company would risk their career for a few months salary. But for ten years' salary...
It just needs to be a subtle bug designed by someone much smarter than the committer, one that's plausibly deniable. They certainly don't need to understand how it works, or how it's going to be used months or years later. And I understand that this sort of thing happens with governments and TLAs, and the people leave after a few years to start their own gig with VC funding and subsequent acquisitions, and no one's the wiser.
Theoretically, one person who's reviewing a pull request could notice a flaw and decide to say nothing about it, hoping to exploit it later. That would be less risky than introducing the flaw themselves—although it does require lying in wait for the opportunity and could take arbitrarily long. But if person A introduces the flaw by mistake, and person B sees the opportunity...
> They certainly don't need to understand how it works
They must need to know something about it in order to verify that it does the malicious thing correctly. It's hard enough to get code right when there's a whole team of people who know exactly what it's supposed to do.
It depends on how active the person has been in choosing the target and the exploit. If a nation-state actor has pored over the source code for some time before/after approaching a person in a tech company with commit privileges, they might be in a position to give them code to introduce that's as limited as possible and which does exactly what they need it to, while seemingly being entirely in keeping with that person's prior work and the organisation's development practices. For the attacker, the less exposure their insider has to actively thinking about how to subvert the system that they have access to (which they could later confess to if questioned/arrested/jailed) and the fewer opportunities there are for someone to notice that something's amiss and for the person to come under suspicion, the better.
We probably need to stop having these threads, because they're repetitive, usually pretty ill-informed, and prevent us from having discussions about the vulnerabilities themselves. All we do is recapitulate the same tedious discussion about how bounty prices work. That's fine, but maybe we should only have those discussions on stories about bug bounties, not any story where a bounty makes an appearance.
For the moment, rather than re-having this discussion, we can just note that bounty prices are what they are, and that no tech firm pays "existential" rates for new vulnerabilities (except, perhaps, Uber, where literally everyone involved in that story is now in the federal criminal court system).
They only need to pay out as much as is necessary to incentivize you to be upfront and report it in private rather than starting a media fuss around it (you get fame and $0) or exploiting the bug yourself (you might get a jail term). Compared to these alternatives, $50K and a clean record isn't a bad deal.
It's funny, we always talk about compensating leaders for the value they provide to the company. Yet when it comes to non-leaders, it transforms into a question of "value relative to their current/recent income".
> It's funny, we always talk about compensating leaders for the value they provide to the company. Yet when it comes to non-leaders, it transforms into a question of "value relative to their current/recent income".
That's maybe true for founders, but not really for hired executives:
> One major consideration that goes into how much a CEO should be paid is what other companies are paying. Compensation committees benchmark CEO pay against a self-selected peer group -- often 12 to 20 companies that may be of similar size and complexity, and have similar business models, according to Robin Ferracone, CEO of Farient Advisors, an executive compensation consulting firm.
How long do you think it takes for someone to find an exploit? Sure, a long time ago I found problems in web pages by clicking "view source" and going "I wonder what happens if.." and doing POST/GET with a huge buffer, or with "\");...." embedded in it.
These days companies that take their security seriously are hopefully harder to exploit. If it takes someone a couple months of slow fuzzing/etc to find an exploit that is probably below market for the persons skills here in the US.
Maybe a part of these bug bounties should be not only how critical the bug is, but some metric of how much work the individual put in before finding the problem.
> He didn’t end up getting a new Tesla, but the automaker awarded him a special $50,000 bug report reward — several times higher than the max official bug reward limit:
You're looking at the $5,000 bounty awarded for exposing Supercharger-related data that Tesla "didn't want [...] out there", which is obviously a much less severe issue than remote control of the entire fleet.
No, $5k was for an earlier bug. "the automaker awarded him a special $50,000 bug report reward — several times higher than the max official bug reward limit"
It's not very often that serious vulnerabilities affect the stock price.
Check out the stock price of Bank of America after their servers got rooted several years back. Or that breach that Deloitte had. How about Cloudflare?
I get that it doesn't seem to make a lot of sense, but is there some market principle that can be used to explain why so many companies act as they do, and that it is in fact rational? Must it be a black swan fallacy?
Great question. I think I'd say the big difference is that people, for the most part, aren't putting their/others lives in Garmin's hands when they use their devices.
That said, I think they have some hiking/trekking oriented products which could cause problems if you were relying on them.
The headline "electric car fleet hacked" is a lot scarier than "smart watches hacked".
Then again, maybe people really don't give a shit about this stuff, and these bounties are priced correctly.
Yet this person did the right thing anyway and reported the vulnerability responsibly. So seemingly the level of the bounty was reasonable enough that it worked as intended, and a much higher bounty would have been a waste of money for Tesla.
I think the high likelihood of being caught and going to prison is also already a pretty big deterrent for people. Just think of all the challenges of actually pulling a hack like this off without being caught. For one thing, just the poking around that led to the discovery of the vulnerability has probably already logged a bunch of potentially suspicious activity linked to this guy's VIN number. So even if he sold it to someone else who did the hack he could probably be caught already. If he tried to orchestrate the hack himself, not only does he need to not be caught directly, but he'd also have to make a very large, very suspicious short trade right before the hack without it being traced back to him. Plus there's always a possibility that Tesla would have been able to lock him out quickly anyway or had some other kind of rate-limiting or other measures in place to prevent significant damage, or that even if he pulled off the hack perfectly the stock price wouldn't drop as much as expected.
> So seemingly the level of the bounty was reasonable enough that it worked as intended, and a much higher bounty would have been a waste of money for Tesla.
I think it's more likely that the person who reported the vulnerability would have done the right thing regardless of any bounty.
What would be the legality of sharing the hack publicly and allowing someone else to exploit it while shorting the stock?
I also wonder when something becomes a "hack". Some systems are so insecure you can almost accidentally exploit them. In this case the API just required an ID for access. How would someone know if that was by design, or a mistake?
As soon as you access something you're not supposed to. If a house is left unlocked and you walk in and take a look around, you're trespassing and it's a crime. And of course if you cause any damage or steal something, that's an even bigger crime.
Except with hacking, the punishments can be even more severe relative to the actual crime committed, because almost nobody in the legal system will understand the details of what happened so they can make you seem as dangerous as they want. Just look at Aaron Swartz and countless other examples of the heavy charges that have been given out for very minor, borderline cases of "hacking".
This is what holds me back from 'smart' devices that have the potential to cause real harm...
We've been making motors (electric or combustion) for over a hundred years, and gotten pretty damn good at making them safe and reliable. Same thing with stoves, HVAC equipment, small appliances, etc. These are all mature technologies that we can practically trust our lives with.
Internet-connected smart vehicles aren't a mature technology. Not in the sense of this being the win2k era of that tech, but that our assumptions about how to build these systems might be fundamentally wrong. I don't know if it will ever be safe enough to trust human lives to it.
Until then, I'll only want to buy cars made before 2010.
"Internet-connected smart vehicles aren't a mature technology. Not in the sense of this being the win2k era of that tech, but that our assumptions about how to build these systems might be fundamentally wrong. I don't know if it will ever be safe enough to trust human lives to it."
I often hear this kind of thing and am really surprised by it. Specifically for the tech in vehicles example, it seems like a real double standard. Around 37,000 people in the US die in car accidents every year[1]. That's 100 people a DAY. There is a huge cost to not adopting new safety measures, even if it depends on immature tech, and that needs to be factored against the potential new unknown risks.
Driving to work is almost certainly the riskiest thing you do most days. I find it plausible that people 50 years from now will think that the cars we drove before 2010 were unconscionable death traps.
You're comparing apples and oranges. It can both be true that security (and overall software) quality is poor, and that automated buggy cars are better than erratic people.
However, when you are considering system risk (e.g. that a bad actor could crash 100k cars at the same time) the worst case outcome could be much worse than the mean outcome.
>There is a huge cost to not adopting new safety measures
That statement doesn't seem meaningful to me in a vacuum, without further qualification. It's not at all guaranteed that safety features have net benefits, either measured financially or in estimated human welfare. Have you noticed there are a lot of high tech safety features in modern cars, but insurance companies are selective about which ones receive discounts?
If self-driving cars are substantially safer, insurance companies will be able to give substantial discounts. I vaguely remember Tesla making noises about providing insurance to their own customers, but I don't know if it came to anything.
Hopefully in 50 years the whole idea of single-occupancy vehicles will be considered a quaint relic of a much more decadent time, especially vehicles which were allowed such extravagant externalities in terms of pollution and endangerment to vulnerable road users.
> Specifically for the tech in vehicles example, it seems like a real double standard. Around 37,000 people in the US die in car accidents every year
Depends on the feature you're talking about. With self-driving cars, for example, there is no evidence that they are any safer than human drivers. In fact, it is possible that they are more dangerous than human drivers.
Maybe it's maturity in the sense of relationships.
An immature relationship might be based on selfish interests, compounded with power struggles, boundary issues, jealousy, spying, and a little rage and Stockholm syndrome thrown in for fun.
A mature relationship is based on trust and mutual respect. It does not concern itself with knowing every detail or trepidation regarding other suitors. The relationship is there because it is valued and there are willing participants.
I think the key here, and unfortunately most companies don't give a sh*t, is to allow the user to gain control over his device and/or take it offline if it pleases him. For example, a Tesla car should come with an option to disable any remote control features, or a way to control it over short distances only when it's offline (I don't know if it's already the case, I don't have a Tesla).
If you look at adversarial machine learning you'll see it is shaping up as a bit of an evolutionary battle. It is quite likely that a years-offline Tesla (or similar) could be deceived into a fatal collision.
We need safe updates with transparent documentation as to all the changes, with hardware enforced feature switches. Forcing them to go through an approval process doesn't sound bad until you've met regulators like the FAA -- or Apple.
You have to weigh the risks of manipulating cars one at a time because they don't have the latest updates with the risk of manipulating all cars at once because they can be remote controlled. A more expensive alternative to over the air updates is requiring regular updates done by a mechanic, e.g. when the vehicle is due for an inspection.
I don't really think OTA updates are the problem, so much as the automatic application of updates without owner awareness (or consent to possibly significant feature changes). Preventing OTA is not the same as preventing remote control -- vehicles WILL be networked, if only for collision avoidance. OTA could be very useful if a bad exploit becomes widely available -- like a protocol bug in the V2X network that lets you crash the V2X module. Many bugs are discovered after being in production for years.
If you allow users to approve and roll back updates, then there is some ability to recover after bad updates. Of course, you don't want thieves to be able to roll back to a vulnerable release, but you can require the master owner key & code for rollback.
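As a purely hypothetical sketch of that policy (the names and structure are invented for illustration, not any real vehicle's update system): installs require explicit owner approval, and anything older than the currently installed version additionally requires the owner's master key.

    # Hypothetical OTA update policy, illustration only.
    def may_install(current_version: int, new_version: int,
                    owner_approved: bool, master_key_present: bool) -> bool:
        if not owner_approved:
            return False                  # no silent installs
        if new_version >= current_version:
            return True                   # normal forward update
        return master_key_present         # rollback requires the owner's master key

    assert may_install(42, 43, owner_approved=True, master_key_present=False)
    assert not may_install(42, 41, owner_approved=True, master_key_present=False)
    assert may_install(42, 41, owner_approved=True, master_key_present=True)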
I think the key here...is to allow the user to gain control over his device and/or take it offline if it pleases him.
I wonder how long it will be before we start to see legal or regulatory interventions in this area. Mandatory self-updating and phone-home functionality is rapidly infecting technologies we rely on every day, from our cars to our home computers to our TV sets.
This always-connected, always-updated approach inevitably introduces some risks. It often causes intrusions into privacy or brings changes after purchase that users of these technologies don't necessarily want.
Competition in these markets is evidently insufficient to provide alternatives for those who don't want anything to do with this modern culture. I don't believe that is limited to a small group of eccentric tinfoil-hat fans any more.
Hopefully it won't take some sort of widespread disaster to wake the politicians up to the dangers here, though given the past performance of the political class around the world when it comes to technology issues, I'm not particularly optimistic.
Fortunately, there is a world outside the US, and much of it is more enlightened. If other large markets impose limitations on this kind of technology or create a penalty regime for failures that makes it worthwhile for the manufacturers to invest in more reliable systems, that will probably benefit everyone indirectly, even users in the US if a US government of whatever colour sells them out.
> I wonder how long it will be before we start to see legal or regulatory interventions in this area.
Not long IMO. It'll be sold as a safety measure, but the real purpose will be to limit competition. I think it's similar to "warranty void if removed" stickers, but I'm worried we're not going to get the same pragmatic legislation that makes those ignorable.
I've noticed a lot of older software engineers seem to avoid anything "smart", and quite a few of them are into vintage cars too. I don't think that's coincidental; my daily driver is approaching 50, and completely lacks any computer or electronics for its main purpose.
I dunno if that means anything. Some of my software engineer friends have Tesla cars and love that kinda stuff. Others have old cars with very little in the way of electronics. Some have both!
Gotta remember that every day people entrust their lives to lots of software and electronic systems. Of course, it's important to have reasonable security measures taken, but just because something's electronic doesn't make it a bad thing.
"The first thing that happens is nothing. Your smartphone stays black while you swipe at it and press the various buttons. Has the battery gone flat? You could have sworn you left the house with a full charge. Now you start to wonder how you’ll get your car out of the parking structure without a working mobile. That thought hadn’t occurred to you before.
It’s the least of your worries.
Still fussing with your smartphone, you gradually begin to realise you’re not the only one having this problem. In fact, it would seem that everyone waiting at the pick-up area is in various stages of agitation with their own smartphones. Some are pressing odd combinations of buttons, trying to reset the little beasties. Others, who have clearly had rough days now made worse, start to swear at their dead screens, as if cursing might shock them into life.
It’s weird, and almost a bit funny. For a brief moment.
The first smashing can be felt more than heard, a subsonic strike something like a vast drumhead being struck with a metre-wide mallet, but so quick, you barely even notice it until it’s over. The second one, however, isn’t far behind, and it’s a bit louder. That second thump gives away its location — whatever it was seems to be happening quite close by — in the direction of the parking structure.
At just this moment a car cruises through the pick-up zone at full speed, barreling along at least 100 kmh. It’s only because of some very fast reactions that no one gets hurt as it passes by. As it zooms past, you notice there’s no one behind the wheel.
Before you have any time to process that, another huge thump nearby causes a section of the barrier wall of an upper floor of the concrete parking structure to shear off. A pile of rubble falls to the ground not very far away from you."
It's pretty well written, but somewhat unrealistic from a technical standpoint.
The first part is OK (except the author forgot about "fire exit" laws, so the mall doors would not be locked).
The second part, the one which describes the "consensus" system from a technical standpoint, is wrong. The protection does not work this way. As in, it can surely be implemented, but it has none of the benefits the author claims and would not protect from Stuxnet-like "viruses" (sic!) at all.
The third part, where the bitcoin is mentioned, is outright science fiction.
Someday, all cars from a particular brand will be made to crash during rush hour. The carnage will be immense. Emergency services will have to go off-road to bypass the snarl. There won't be enough helicopters to meet the demand.
The brand that could cause the most damage is probably Bosch, a major automotive component manufacturer.
>Someday, all cars from a particular brand will be made to crash during rush hour.
This is also why it's always very, very wrong to compare potential faults of automated cars to humans, as in "the automated car is X percent safer!", because it ignores the fact that mistakes in automated systems, at least as they are built now, are highly correlated.
If there is one bug in an ML system that is rolled out to an entire fleet that results in an unknown weather condition leading to fatal crashes you may create mass carnage.
Human driver errors are not correlated like this, which makes them much more robust as an ecosystem.
I think this is a good point, and largely agree. However, there are also correlations in driver behavior, like it being more dangerous driving on July 4 or New Year's Eve in America as so many people celebrate and drive drunk, increasing accident rates.
The thing that worries me is that it doesn't take self-driving tech to make this an issue. Existing safety systems on most cars can control brakes, steering, throttle, airbags, etc.
The threshold issue is remote updates of car software. And Tesla has made that more mainstream and attractive to other manufacturers.
It's like being in 1995 and predicting that Windows botnets will be created. The coming disaster is inevitable.
State-sponsored hackers are not going to ignore the opportunity. They probably have the capability already, in dozens of countries, and are just waiting for orders from the leaders. If war is starting, the order will be given. Sanctions could be enough to trigger it.
So you predicted botnets in 1995 and ..... nothing much happened. Botnets suck, but there wasn't some world-crashing event like the person above is predicting for computer-controlled cars. All Windows computers didn't shut off on one day.
I'm struggling to understand your point. It's a different threat model; of course it will manifest differently, no? A mothership-style model seems much more vulnerable to every node being compromised than the decentralized vector Win95 botnets have to go through.
I could see arguing with the inevitability of the exploit -- Win95 botnets seemed much more inevitable to me than this Tesla mothership threat does. But it seems like you're arguing that they will both have similar impact if exploited. That doesn't make sense to me, because they're completely different threats, but it's possible I'm misunderstanding your argument in some way.
I would disagree; sure, botnets can't be used to threaten life directly, but botnets have proven to be quite effective at attacking services and carrying out denial-of-service attacks. The one thing we can take from botnets is that vulnerable and unpatched devices can be infiltrated in high numbers, and attackers can lie low until they decide to pull the trigger.
Imagine even 0.1% of cars being controlled; the mayhem and loss of life they could cause is just immense.
Power plants are also dangerous targets in a similar sense, but hopefully there is network separation for control services.
I think this is well known to tech literate people. Problem is that that's not most people. On HN we have sampling bias because we're more likely to be associated with people with similar interests, i.e. tech.
Not the OP, but the precedents are here: Always-connected cars (eg. teslas): check. Being able to take control of a car via the CAN bus[1]: check. The only thing missing in the exploit chain is something that allows the attacker to jump from the modem/infotainment system to the CAN bus or ecu.
Tesla uses a pretty different architecture from the dumpster fire that was OnStar.
Been a while since I looked into the details, but from what I recall only very limited, well-scrutinized communication is allowed to bridge the Ethernet subsystem over to the CAN bus.
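For anyone unfamiliar with what "limited, well-scrutinized communication" across such a bridge means conceptually, here's a deliberately simplified, hypothetical allowlist-style filter; the CAN IDs and payload limits are invented for illustration and have nothing to do with Tesla's actual gateway.

    # Hypothetical Ethernet-to-CAN gateway filter, illustration only:
    # forward a frame only if its ID is explicitly allowlisted and the
    # payload is no longer than the expected length.
    ALLOWED = {
        0x3E8: 8,  # made-up ID, e.g. an HVAC setpoint
        0x3F2: 2,  # made-up ID, e.g. a seat-heater level
    }

    def bridge_to_can(can_id: int, payload: bytes) -> bool:
        max_len = ALLOWED.get(can_id)
        if max_len is None or len(payload) > max_len:
            return False  # drop anything not explicitly permitted
        return True       # forward onto the CAN bus

    assert bridge_to_can(0x3E8, b"\x00" * 8)
    assert not bridge_to_can(0x7DF, b"\x02\x01\x0c")  # arbitrary diagnostic frame: blocked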
Well fundamentally, they can be breached. Since the self-driving AI system can be updated (afaik), then vehicle control can certainly be remotely altered.
I guess those risks are a bit like nuclear launch risks. Someone, somewhere (that controls the right key) could unleash an enormous accident. You just hope the system as a whole has enough redundancy for it to be sufficiently unlikely. In nuclear arms technology we have multiple authorizations required for launches. Then of course the underlying system has to be robust enough so none of those measures can be bypassed.
There is no fundamental barrier that I can see -- we know how to build incredibly robust systems using methods like formal verification of software and just thorough quality assurance and testing. In a way, much of the world is already exposed to those kinds of risks: most individuals' phones, laptops and even industrial computers can be remotely updated and incapacitated, exposing a risk of generalized trouble.
The adequate amount of resources to be allocated on those risky but extremely remote scenarios is what's important. I think there needs to be oversight guaranteeing every system is getting verification, testing and attention proportional to risk for society. The usual punitive incentives don't work very well for those cases. I don't think we have any agencies with this wide of an outlook currently.
Tesla uses a pretty different architecture from the dumpster fire that was OnStar.
And probably other manufacturers will use their own designs.
Unfortunately, this is one of those issues where we have to be lucky every time, and the bad guys only have to be lucky once. Given the number of different manufacturers whose systems have demonstrably been compromised in the past, the odds of avoiding catastrophic compromises in the future as cars gain more autonomous features don't look great here.
I guess we should just give up and leave everything wide open then right? No point in trying to do defense in depth if someone will eventually hack something?
One plausible alternative is that we don't deploy systems like this, which have the ability to cause widespread damage including loss of life in moments, until we have worked out how to secure them properly against a single point of catastrophic failure like that. There are things that could be done to mitigate that threat in the meantime.
This is set against the knowledge that existing human-driven vehicles are involved in many tragic accidents per year, also causing widespread damage including loss of life. But there are other things that can be done to limit the damage there as well, without relying on autonomous vehicles as a silver bullet.
Whoever did this would likely select some combination of valuable/soft/flammable targets. Control over a sizable fraction of all vehicles in a country would enable them to create utter pandemonium in tunnels, bridges and underpasses during rush-hour - even larger highways. Aside from fire, I'd imagine that the "disable vehicle on sensing a crash" functionality would end up being hackable as well. Cars on the whole had become less effective as murder weapons up until now, but I suppose that all changes when you can control them remotely via software at scale.
Most vehicles won't let you reprogram the firmware without power cycling the car. The "disable on sensing a crash" function is definitely its own ECM that sits on a high-priority bus. I am assuming your common <$40k car; when you head into BMW and Mercedes-Benz land this statement changes slightly.
I'd definitely like to think that all of that stuff would be air-gapped, hard-wired and baked into the silicon (and E2E-encrypted, with a Trusted Computing model and auditable supply-chains), but I do worry that people are going to cut corners, fudge things when they're approaching deadlines, and not take into account an appropriate threat model when they're designing this mass-market consumer automotive stuff. The Chrysler hack in 2015 managed to get some fairly low-level remote access to things like the braking system. I'm also considering the possibility that governments might backdoor their own manufacturers with their knowledge in order to gain exploits to systems overseas or to carry out the odd covert assassination.
As someone who’d already been a bit worried about future mass-car hacks, I found the zombie car hacking sequence in 2017's "The Fate of the Furious" particularly terrifying to see in the theater. Rewatching it now, it actually looks somewhat tame compared to what might be since the hacked cars only inflict property damage, not injury.
> Villainess: I want every chip with a 0-day exploit in a two-mile radius around that motorcade, now.
An easy solution would be to not allow self driving or remotely updated cars until there's a reliable solution to this. People already go to auto-shops for repairs, certified auto-shops could easily double as places to update software and the certification requirements can be tailored to require an external oversight agent come and evaluate their security practices.
I definitely think Tesla can be over aggressive with their updates. However that doesn't mean that the basic idea of remote updating cars is inherently flawed or unsafe when compared with the alternative. It is all about trade-offs.
Both the Prius[1] and the Model 3[2] had similar software bugs related to their anti-lock brakes. Both companies had a software fix a few days after the bugs were discovered. Tesla's fix was pushed out immediately to every vehicle. Toyota couldn't push out a fix. They had to issue a recall and have a technician update the software whenever that car ended up being serviced. How many months, or more likely years, did it take for every Prius to be updated with the software? How many miles were driven in cars that were known to have faulty brake software because it was hard to update them? You have to consider situations like this when discussing banning remote updating.
> You have to consider situations like this when discussing banning remote updating.
Isn't that exactly what we're doing? If someone pushes out a malicious change, do you know how long it would take for that change to propagate to the entire Toyota fleet?
> Toyota couldn't push out a fix. They had to issue a recall and have a technician update the software whenever that car ended up being serviced.
This gives all the more incentive to get the software correct in the first place. The model of "get the software as bug-free as possible upfront using stringent processes, testing, formal methods, and not using software in the first place when it's not actually needed" is better than the model of "put software into as many components as possible to make it shiny and get the software good enough to release before our competitors and play whack-a-mole on the bugs later through updates". Instantaneous updates make it easier for an attacker to take control of the update infrastructure and push an update that will trigger a mass-crash of cars during rush hour. When people have to asynchronously take the car to dealerships over many months, it makes this attack harder to go undetected.
The difference is that the PC is exposed to everything it interacts with, including the internet, while the car would only be interacting with shops that were meeting their certification obligations. Tying updates to physical locations also reduces the severity of a successful bad actor, since most people in a city don't all go to the same auto-shop. A problem like a nation-wide cyber attack on vehicles is only possible if we allow personal vehicles to support the attack vector.
It reduces the attack surface to places that have been verified to be following security obligations. Otherwise a physical attack becomes much easier because you just need to work at an auto-shop that doesn't pay much attention to you. The same sort of attack is still possible with certs but it's more difficult because it would require compromising both the shop and the person who vets their practices.
People still want regular map updates, live updating traffic information, and play back stuff from their phone on the in-car entertainment system. All this exposes cars to data communication outside of the car repair shop. Yes, the entertainment system is different from the system that runs the car, but there is some level of communication between the two.
Yes, the entertainment system is different from the system that runs the car, but there is some level of communication between the two.
I think you've just identified the root cause of at least one set of problems. The essential control systems in a vehicle should ideally be separated from other vehicle functions to prevent interference, whether accidental or deliberate, from compromising vehicle safety. The approach now being taken by manufacturers with their always-online, increasingly automated cars may be undermining that separation, without necessarily having adequate safeguards in place to ensure safety and reliability are maintained.
The problem is that total separation is difficult to achieve with the requirements being placed on these vehicles. Sure, whatever is playing your favourite music tracks over the speakers probably doesn't need to know anything about steering and acceleration. However, your self-driving software is going to have a tough time getting your car to a location it doesn't know exists because its onboard navigation maps predate the existence of that building, or planning a route that avoids an accident it doesn't know about because it has no real-time information about road closures.
If you read my comments elsewhere in this discussion, you'll see that I've been one of those people.
In this thread, I'm simply observing that there can't be a clear separation between control systems and information systems if we're going to achieve this kind of autonomous functionality. We need a more nuanced solution.
I disagree. I think in an ideal world this is true but the reality is that the bar for security is exceptionally low for industries that aren't traditionally software industries (not that software is excluded). When the stakes are just accounts and transactions that can be reversed the situation is different, when the stakes are life and death, in the real way, I don't think it's unwise to exercise caution. Let someone else take the risk of cyber terrorism and if they go a decade without any hiccups then leapfrog them. There's no reason to expose consumers to these sorts of risks just because it gets the futurists hard.
Perhaps due to knowing quite a few people whose careers were spent working with Siemens and Bosch machines in heavy industry, it seems unfathomable and mildly alarming to me that one can
- know of the existence of Bosch,
- be completely ignorant of the company being an absolute giant in the engineering space, and
- be confident enough that they're some tiny power drill manufacturer to mock them publicly without pausing to look them up
It's like hearing someone say "the guys who make the Xbox are providing cloud services for the Pentagon? Who is running that ship lol".
Meanwhile there's the company who took Mobileye's LKA,
which was already in cars with the same hardware but not continuously enabled because of inherent flaws related to non-moving objects... and turned it on all the time so they could call it Autopilot.
Then it killed some people because of the flaws that kept other manufacturers from using it the way they were.
And they're writing the software for self-driving cars? Who's running that ship lol
-
In a better world, people with experience with safety-critical stuff and the culture for it would partner with companies like Tesla to create something like a "plug in" system for SDCs, where the safety guys could focus on defining a minimal viable envelope of operation, the same way existing ABS and ESC systems mesh with drivers. Something like: if the car is less than 100 ms from crashing, intervene separately from "normal" object avoidance.
(And before someone nitpicks that that's an arbitrary number of ms: yes, there's no "isCrashing" variable, and yes, it would be hard work to define how SDCs would handle intervention, but people crashing into parked firetrucks is a worthwhile "hard thing" to solve.)
A “fire extinguisher” company makes many of the fire/smoke/overheat detection and suppression systems used in commercial and military aircraft. These kinds of companies are massive and have a lot more depth to them than you seem to realize.
> (I will note, I was never barred from disclosing any of this publicly in any way. As a courtesy, I felt it would be the right thing to do to hold off on public disclosure for a while, potentially indefinitely. Years later, it seems worthwhile to disclose this information and highlight just how far things have come and how Tesla's software security has improved dramatically since then.)
I think it's reasonable to expect people to intuit that "A malicious hacker gained control over the entire Tesla fleet" would be a much bigger deal than a couple hundred points on HN. No cars currently being located in my living room is a pretty big giveaway, for one.
Shit like this is why cars need to be functional without cellular/wifi access, with updates impossible unless the user presses a button, and with a direct connection to the car required for features like summon.
Which is pretty much the opposite of how Tesla operates.
I hope this serves as a reminder to everyone here that THIS is why you should have a physical disconnect switch. I should be able to pull a breaker to disable self driving on my Tesla when I’m not using it.
> Also, Tesla owners will supposedly soon get two-factor authentication for their Tesla account.
This was the biggest line in the story, for me. You can spend $100k+ on a vehicle and you can’t even have security to protect it that was standard FIVE YEARS AGO.
Lack of 2FA is a showstopper for services an order of magnitude less expensive than a vehicle. Tesla simply must not care about security very much, a fact reflected in their low bug bounty prices.
> The hacker shared the data on the Tesla Motors Club forum, and the automaker seemingly wasn’t happy about it.
> Someone who appeared to be working at Tesla posted anonymously about how they didn’t want the data out there.
> Hughes responded that he would be happy to discuss it with them.
> 20 minutes later, he was on a conference call with the head of the Supercharger network and the head of software security at Tesla.
> They kindly explained to him that they would prefer for him not to share the data, which was technically accessible through the vehicles. Hughes then agreed to stop scraping and sharing the Supercharger data.
> After reporting his server exploit through Tesla’s bug reporting service, he received a $5,000 reward for exposing the vulnerability.
What's the difference between this and what Uber's former Security Chief was charged with?
The hack you are talking about is unrelated to the one that let him control the Tesla network. In that one it sounds like he just put together a custom client that requested Supercharger data from Tesla, which I wouldn't really consider hacking.
This is another reason why I think it's incredibly foolish to own a vehicle with an internet connection. Even if the vehicle doesn't support remote control like Tesla there may be a chain of bugs that could be used to do just that or cause other problems.
That's not even considering the major privacy issues that come with such vehicles.
I remember a long time ago reading about the updates to mandated car electronics and thinking I never wanted to own a car newer than...1996 maybe? And now I can't even remember what I was thinking and why. It might have been OBD-II.
So how do vulnerabilities like this happen these days? I just work in games and everyone still knows having an API that takes a user ID (in this case the VIN) is asking to get abused. Is the description just a gross oversimplification of the hack or was Tesla security really that bad?
It seems like a classic case of mistaking (or not taking seriously enough) the difference between authentication and authorization. He accessed the server through the car's VPN, so there was an authenticated and authorized connection to the server.
However, an authorized connection to the server is not authorization to make any arbitrary request on the server.
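A minimal sketch of the distinction, using an entirely hypothetical endpoint (this is not Tesla's actual API): the broken version trusts any authenticated caller to act on any VIN, while the fixed version also checks that the VIN belongs to the caller's account.

    # Hypothetical server-side handlers, illustration only.
    class Session:
        def __init__(self, owned_vins):
            self.owned_vins = set(owned_vins)

    def summon_broken(session: Session, vin: str) -> str:
        # Authenticated (we have a valid session), but no ownership check:
        # whatever VIN the caller supplies gets acted on.
        return f"summon sent to {vin}"

    def summon_fixed(session: Session, vin: str) -> str:
        # Authorization: the VIN must belong to the authenticated account.
        if vin not in session.owned_vins:
            raise PermissionError("VIN not owned by this account")
        return f"summon sent to {vin}"

    s = Session(owned_vins=["5YJ3E1EA7KF000001"])
    print(summon_broken(s, "5YJ3E1EA7KF999999"))  # succeeds against someone else's car: the bug
    print(summon_fixed(s, "5YJ3E1EA7KF000001"))   # succeeds only for the owner's own VIN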
It happens for the same reason that many games get MAX_INT high scores at launch.
Yeap, pretty accurate. I can predict it is going to get bad. It is always fascinating to me that companies invest billions in everything aesthetic related. But when it comes to security and system software, no money!
You can really see how government intelligence agencies just go around hacking and dumping repositories of other governments and companies, keeping what they find in their back pocket.
It's funny you say that, and there are shops that are very careful with this sort of stuff. Unfortunately, I fear that when the big cloud hack comes, it's going to hurt everyone (i.e. those with and without publicly addressable S3 buckets).
Isn't a "fleet-wide hack of autonomous vehicles" an oxymoron? They clearly aren't autonomous if they are controlled by an outside force that can be hacked.
Maybe it depends on perspective, with the manufacturer seeing owners as outside forces, from which their vehicles are autonomous? Rolled up with the liability question is the question of who does control the vehicles and who they are autonomous from.
The vehicles are autonomous when asked, such as if you click "summon" in the app. If they had gained access to Mothership after summon was introduced (they accessed it in 2017, but summon came out in 2019[0]), it could have meant accessing a car and summoning it to the attacker.
"Autonomous" in this instance means "able to maneuver without a driver", not "able to make independent decisions without the help of an external system".