Some questions for the researchers, or anyone else who thinks this was okay:
1) Were public roadways and speeds of 70mph absolutely necessary to demo this?
2) What was the plan if the trucker approaching at 70mph hadn't noticed the stalled Jeep in time and had to swerve or panic-stop, possibly crashing and injuring themselves or others?
3) Anyone notify the Missouri State Highway Patrol about this? They may be contacting the researchers with questions about this demo if they weren't consulted in advance.
4) What's the plan if they trigger a bug in the car software of the people they tested this on earlier? The article mentions them remotely tracking people's vehicles as they attempt to learn more about the exploit.
I could go on but why bother? In case any of you think this was cool or even remotely (no pun intended) ethical, I'd like to know if you have a problem with letting these two test this on a loved one's car. How about they remotely poke around your husband or wife's car and explore, as long as they promise not to intentionally trigger anything?
If I ever learned this had been tested on a vehicle I was in, I'd make sure this cost the researchers dearly.
EDIT: I've just phoned 'Troop C' of the Highway Patrol at their main number, and they seemed pretty keen to follow up. The fact that the vehicle was disabled where there was no shoulder, was impeding traffic, and that the demo wasn't cleared with them in advance has them concerned. I'm all for testing exploits and security research, but this isn't the right way to do it. And to film it and post it to a high-traffic site is nuts.
Calling the police on security researchers... I honestly cannot believe this is considered acceptable behavior. A much less aggressive (and more thoughtful) move would have been to contact the researchers directly. Wow.
Back to the article, I think that this type of exploit will become more and more common as vehicles become more connected and automated. We need to know that we can trust the software and firmware running on the devices that literally have the power of life and death over us. Unfortunately, this is a VERY complicated issue, and no one has a solution yet AFAIK.
I watched a talk by Cory Doctorow last year where he suggested validation at the hardware level (a la trusted platform modules), but unlike the typical TPMs that only allow vendor software to be authenticated, these TPMs would allow the user to directly authenticate the firmware. If you know the firmware is good, then each layer can validate the next layer up all the way to the OS.
I have yet to hear of a system that allows the user to directly authenticate software/firmware at the hardware level. Is anybody working on research of this nature? Or are there insurmountable problems with this approach?
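For what it's worth, here's a minimal sketch of the validation flow that idea implies, assuming a hypothetical TPM that lets the user pin a single hash. This is just the hash-chain logic in Python, not any real TPM API:

    import hashlib, json

    def sha256(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def verify_firmware(manifest_bytes, stage_images, user_pinned_hash):
        # Step 1: the hash the user personally verified and pinned (stored
        # in the hypothetical TPM) authenticates the manifest itself.
        if sha256(manifest_bytes) != user_pinned_hash:
            raise RuntimeError("manifest failed user verification")
        # Step 2: the now-trusted manifest vouches for each layer in turn,
        # e.g. {"bootloader": "<hash>", "kernel": "<hash>"}, bottom to top.
        manifest = json.loads(manifest_bytes)
        for name, image in stage_images.items():
            if sha256(image) != manifest[name]:
                raise RuntimeError("stage '%s' failed verification" % name)
        return True

The hard part isn't the hashing; it's giving the user a trustworthy out-of-band way to decide which manifest hash to pin in the first place.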
Too late to edit my original comment again so I'll post a reply here as a general reply to those who reacted negatively to my decision to phone the police.
While I strongly support free speech and believe security researchers should be given some extra latitude when appropriate, what I saw was not at all appropriate. I saw two well-respected security researchers sitting in a room like Beavis and Butthead, laughing and remotely disabling a vehicle on a multi-lane interstate highway like it was a big joke. The reporter in the Jeep literally says "This is dangerous" and asks urgently for help. This was all filmed and posted to Wired for the world to see, like they are proud of it.
Before working with computers I drove tractor-trailers for a while and was lucky to achieve a million-mile safe driving award. I have a pretty good idea of the dangers here and I know that stretch of road well, I've crossed it many times. I know from experience that a car stopped in the middle of a multi-lane interstate is one of the most dangerous situations you can be in. I've had people hit me who didn't see my huge trailer with flashers on and warning triangles out on a sunny day - it happens quite often. I've seen dozens of people killed in situations exactly like this. You see it coming and a random driver just plows into the stopped vehicle.
I exercised my judgement and decided to phone the local Highway Patrol office. I've read the negative comments and I disagree, I still think it was the correct thing to do. If you are a researcher and you do something this dangerous, and are foolish enough to then post it on a high-traffic site like Wired, I think you forfeit any right to a discreet warning and you deserve to have the police show up demanding answers to some tough questions.
I appreciate your call to the cops and your reasoning. I also have driven a significant number of miles for work and have seen a number of people killed in traffic accidents. This "test" was extremely irresponsible. I know I will be downvoted for saying this, but I think you made the correct decision.
Agreed. I missed the video the first time and didn't believe the text describing the shutdown; the video shows the stupidity here, let alone the decision to release a recording of it. I expect it will come down soon.
Important research, but very poorly tested. Wired's and Chrysler's (was the research funded by Chrysler?) legal teams would not like the contents of this video.
Reporter: "Seriously, this is fucking dangerous. I need to move."
And that was while the security researchers caused the radio to blare so loud that he couldn't hear them on the other end of the phone. The more I see, the more I think they were really negligent in how they planned this out, and I was already firmly in that camp.
So watching the video, I don't see a vehicle stalled on the highway.
What I see is a vehicle slowed considerably, but at least nominally over the legal minimum speed of 40 MPH on highways, and without the driver being able to accelerate on his own. He's travelling in the rightmost lane, explicitly with his hazard lights on. This is not an unusual occurrence on highways. He's then told that to regain control he needs to stop and restart the car, which he does while remaining in motion.
I was surprised, since this is quite different from the way it's being talked about here, as if he was stopped in the middle of the freeway. See GGP comment about "a car stopped in the middle of a multi-lane interstate."
Here's my attempt at a partial transcript starting from shortly after they disable the accelerator:
Driver: "It says 43 miles an hour, but it's not really that fast."
[voiceover omitted]
Driver: "Guys, I'm stuck on the highway."
Researcher A: "I think he's panicking."
Researcher A: "He's not going to be able to hear us with that radio. So loud."
Driver: "Guys, I need the accelerator to work again."
Researcher A: "The accelerator..."
Researcher B: "It won't work! You're doomed!"
Driver: "Seriously [beep] dangerous, I need to move."
Researcher A: "You gotta turn the car off!"
Many cars can be seen passing them on the left in the video during the test.
Right, but the video never shows the car stalled on the highway. It's moving in every highway shot. It's in the righthand lane, not in the center. The driver is somewhat panicked. We can see how fast he's moving relative to the background.
This discussion has been distorted and sensationalized, and it has not been based on observable recorded facts.
A car stalling does not necessarily indicate it is stopped. "Stalled" can mean the vehicle is stopped, or it can mean the motor has stopped. An airplane's engine can stall while the plane is obviously still moving. It's unclear whether the motor here actually stopped, but it's not without precedent to use "stall" to mean no power available for propulsion.
I don't think this discussion has been distorted. It's based on the information they provided. They put a vehicle on a public highway traveling at the faster end of what's legal in the US, and then removed a large portion of the driver's ability to control the vehicle. It's unclear whether this affected the steering or brakes, both of which in a modern vehicle are power-assisted, generally through the engine's vacuum system. If the engine was actually off (which is unknown; I think it's more likely they just forced the car into neutral), then the power assist would have been lost as well, removing even more of his ability to control the car.
The bottom line is that they put the driver in a situation unsafe not only to himself (which they could have gotten consent for), but to the other drivers on the road. They did not have consent from the other people on the road (indeed, they couldn't have), and if what the article and video purport to happen did happen, then they endangered those people. I've seen accidents from stopped cars being hit by others. If the highway is busy enough, the initial collision isn't even necessarily the largest damage; it pushes vehicles into even more obstructing positions and causes follow-on accidents.
I can agree that the car is not shown at a full stall in the video, but it is the case that the driver reports being unable to control the vehicle during the test. I cannot agree that this distinction matters to whether this was "[beep] dangerous", as the driver put it; that is supported by the driver's own statements as well as observable facts.
They've risked people's lives to produce real life looking footage documenting a life threatening event.
Without such an event in the footage, car manufacturers can just say "Meh, no big deal" and continue recklessly risking lives by manufacturing unsafe cars with no air gap between the CAN bus and the Internet.
Remember, it's the car manufacturers that are the bad guys here, not the white hats... And just think how hard this decision was. It's a choice between risking lives and having footage that doesn't catch attention, thus allowing car manufacturers to continue making unsafe cars with horrible security vulnerabilities. Amazing.
So demo it at a race track. The essential point here is that the uninvolved public were placed at real risk of maiming or death.
Your argument is ludicrous, because you're attempting to cast the actors as either good or bad. IMHO they are guys with a good idea and motivation who did a bad thing.
We are a very visual culture, unfortunately. Unless there's a video of your average Joe driving on a regular highway and a regular car going wild, everyone would just dismiss the problem as limited to "race track" and would not connect the vulnerability to his/her own car.
edit: as per the article, "researchers already did test these exploits in controlled environments and presented these tests to auto manufacturers. Said tests were dismissed by said manufacturers."
>We are a very visual culture, unfortunately. Unless there's a video of your average Joe driving on a regular highway and a regular car going wild, everyone would just dismiss the problem as limited to "race track" and would not connect the vulnerability to his/her own car.
If optics is your justification for this, then perhaps having these two irresponsible researchers arrested would bring even more attention to this.
>edit: as per the article, "researchers already did test these exploits in controlled environments and presented these tests to auto manufacturers. Said tests were dismissed by said manufacturers."
Where do you see that in the article? Only thing I read was manufacturers downplaying a wired-in attack they demoed.
> "researchers arrested would bring even more attention to this."
Yep.
> Where do you see that in the article? Only thing I read was manufacturers downplaying a wired-in attack they demoed.
No "air gap" between "CAN bus and Internet" equals vulnerable.
We know that. Auto manufacturers know that.
Yet they dismiss the possibility of a hack and continue producing unsafe vehicles. And the trend is toward more vulnerabilities.
I was too lazy to search for a direct quote, but here it is now: "Miller and Valasek represent the second act in a good-cop/bad-cop routine. Carmakers who failed to heed polite warnings in 2011 now face the possibility of a public dump of their vehicles’ security flaws."
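To make the "no air gap" point concrete: CAN frames carry no sender authentication, so once anything Internet-facing is bridged onto the bus, injecting arbitrary frames is trivial. A minimal sketch using the python-can library against a Linux virtual interface; the ID and payload are made-up placeholders, not the researchers' actual exploit:

    import can  # pip install python-can

    # "vcan0" is a virtual CAN interface (for safe experimentation);
    # nothing on a CAN bus checks who sent a frame or whether they
    # were authorized to, so any bridged process can transmit.
    bus = can.interface.Bus(channel="vcan0", bustype="socketcan")
    frame = can.Message(arbitration_id=0x123,      # placeholder ID
                        data=[0x01, 0x02, 0x03],   # placeholder payload
                        is_extended_id=False)
    bus.send(frame)  # every ECU on the bus sees this frame

That's the whole threat model: the head unit was never supposed to be a remote gateway onto this bus.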
That is very much NOT a quote from this article. If you are quoting another article by mistake, please link it; this article does not even use the word "presented".
This article mentions that Chrysler is working with them and has developed a patch, indicating that they did not dismiss the earlier tests. That's basically the opposite of what I take your point to be.
Yeah, you and your family. Well, you are lucky. These researchers and this reporter have already risked their reputations, lives, and livelihoods so that you, now, don't have to. And maybe you'll even be able to benefit from all their hard work, because there would be fewer vulnerable cars around. Although you would probably never know it.
No. They absolutely did not have to produce a life-threatening event. They could have done it at 5 MPH and car manufacturers would still have taken notice, because it would still spread like wildfire on the Internet. What they did was supremely irresponsible, and the cops should have been called.
They already did do it at slower speeds in parking lots. Manufacturers didn't care. They probably still won't care, which means that it's a matter of time before someone even less morally-bound decides to wreak havoc on traffic.
> Without such an event in the footage, car manufacturers can just say "Meh, no big deal" and continue recklessly risking lives by manufacturing unsafe cars with no air gap between the CAN bus and the Internet.
Oh really? Can you point to the responsible tests done in the past that proved inconsequential, necessitating this reckless alternative? Or are you just inventing the notion that the car manufacturers would ignore this and the story would somehow just go away?
The actions of the auto manufacturers in response to prior, more-controlled tests are, according to the article, exactly equivalent to that. The manufacturers basically said "hey, thanks for showing us this crash-test footage that shows our vehicles are literal fucking coffins on wheels; we don't really care," leaving the researchers with no results after taking more "sane" measures.
Researchers perform controlled experiments. Controlled experiments are ignored. Researchers opt for more damning (though less controlled) experiments to further prove their point, and now they're suddenly the bad guys here.
> Researchers opt for more damning (though less controlled) experiments to further prove their point, and now they're suddenly the bad guys here.
Much of the commentary here focuses on the recklessness of the highway test and doesn't weigh in too heavily on who the bad guys are.
I think people mostly find the idea of remotely exploitable and controllable cars so terrible that there isn't anything to discuss about that aspect of it; it's nearly universally considered unacceptable (hence the epic thread about the side issue).
Maybe try reading the comments without imputing a side that the writer is taking.
What they should have done was involve the police from step #1. If the video had been conducted on a closed section of roadway with ambulances standing by, police escorts, and lots of badges and sirens, it would have been even harder for the automakers to blow off.
It wouldn't have been difficult to do this right. Cops love drama and publicity. It wouldn't have taken much convincing to get them on board, and the video would have gained a lot of credibility.
I agree completely; there were a lot of formalities that were neglected, and had they not been neglected, there would be less backlash against the researchers.
However, this doesn't change the fact that vulnerabilities were demonstrated, nor does it change the implication that auto manufacturers are excessively sluggish about security patches on things that can and do kill people on a regular basis. Even an imperfectly-conducted demonstration like this particular case is preferable to such a demonstration not occurring at all.
Blocking visibility through the windscreen and then shutting off the transmission of a car that is driving on an interstate overpass in traffic is not white hat by any stretch of the imagination.
Perhaps not, but it's necessary to get the attention of auto makers so that they stop building such trivially-compromisable systems. This was a couple of security researchers on one car for a proof-of-concept; better to demonstrate these flaws early and with a more limited sample than to watch the pileup of epic proportions that would happen should someone even less scrupulous acquire such control over vehicles on the road.
I don't exactly condone the ethics (or lack thereof) of the researchers, either, but if that's the only way to get proper attention (after previous, more polite and reasoned attempts were simply dismissed by manufacturers), then so be it.
Had that Jeep run into you, or had you run into it as a result of this experiment, you might have found that you have a profoundly different threshold for what is "necessary to get the attention of auto makers".
Just because automakers are seemingly keen on ignoring security vulnerabilities does not justify putting people's lives at risk. And let's face it: a multi-ton vehicle that is not entirely in its driver's control puts lives at risk in just about any situation. The reason you and others argue that the demo's methodology is effective is precisely because of the risks involved, not in spite of them.
It is the responsibility of researchers to demonstrate risks without exercising the extent of those risks. Imagine if virologists regularly demonstrated communicability risk by injecting humans with disease outside of the lab.
> Just because automakers are seemingly keen on ignoring security vulnerabilities does not justify putting people's lives at risk.
So condemn the auto manufacturers for putting hundreds of thousands, if not millions, of lives at risk instead of yammering about a couple of nerds who put at most 2 vehicles in probably-nonfatal danger in a worst-case scenario.
And as busy as that highway was in the video, it was far more than just 2 vehicles, especially if one of those vehicles was the 18 wheeler.
At the very least they could have done this on a less busy stretch of highway that had a wide shoulder and with control vehicles in front and behind with paramedics at the ready (just like a movie production that is shooting on public streets). Instead the researchers and the journalist chose to be reckless.
Nobody's saying you can't. I certainly do (I strongly disagree with the researchers' obstruction of communication between themselves and their test subject).
My only point is that there's a massive difference in scale between a couple dented fenders and hundreds of thousands of dead/maimed innocents.
Difference of scale? Ok, I agree with you there, but characterizing the risk as "a couple dented fenders" is intellectually dishonest. A high speed accident on an interstate could easily involve serious, even fatal injuries.
It could in some situations, yes. This was not one of those situations.
We're talking about someone coasting uphill with absolutely no braking whatsoever. There's plenty of reaction time in such situations (as I happen to know firsthand: my SUV once ran out of gas and I had to coast a quarter-mile over a hill to the next offramp while merging from the fast lane to the far right at 70 MPH). Even for semis, the reporter's car wouldn't mean having to slam on the brakes. Not to mention that the uphill helps with stopping.
The story would be different if the researchers slammed the car's brakes. If that were the case, then yes, death would be possible. That wasn't the case.
No intellectual dishonesty here. Just thorough examination of the situation as described by the author of the article.
Because of scale. One is very limited in scope, i.e., on one day, in one city, on one road, for a few minutes, one car caused a few other vehicles to make otherwise unnecessary lane changes; versus the exposed vulnerabilities, which affect tens or hundreds of thousands of vehicles in every city, every day, on almost every road, at almost any time.
Agreed, the researchers deserve some criticism, but let's not lose sight of the forest for these two goofball trees.
> it's necessary to get the attention of auto makers
That's mere conjecture. And it's an assertion you could easily test by first doing the remote hack in a controlled environment (e.g. a racetrack) and seeing if automakers respond before trying this on an actual freeway!
If you read the article, you'd know full well that the researchers already did test these exploits in controlled environments and presented these tests to auto manufacturers. Said tests were dismissed by said manufacturers.
I've read the article. Where does it mention controlled environments? The only mention of exploits being dismissed by manufacturers was in regard to a wired exploit, not a remote one.
The paragraphs after the photo of Charlie Miller describe the process of identifying and isolating wireless exploits, including remote activation of windshield wipers on a vehicle in one of the researchers' driveways. This did admittedly escalate quickly to passive "tagging" of vulnerable vehicles by VIN, but that's a far cry from the experiment in question.
The findings before physical tests (identifying cars with a lack of airgapping or other basic security measures) were also reported to Cadillac (as one example among others); said findings were basically dismissed with a "well we've already released a newer Escalade model with some more security features, so whatever".
This isn't to mention that the wired exploits should've been enough to at least spark some level of concern.
First, there's no indication in the article that the researchers or Wired presented the remote windshield wiper hack to the car's manufacturer and that they subsequently ignored it.
Second, there is plenty of indication that the exact opposite is true. The remote windshield wiper hack occurred this June, whereas the article states that they've been working with Chrysler on this for nearly nine months and that Chrysler released a patch prior to the publication of this article.
Third, the Cadillac anecdote isn't really relevant here. For starters, it looks like they were contacted by Wired, not the researchers, so it's unclear whether they were contacted before the dangerous freeway demonstration took place. And while the mention of the newer model is a bit odd, the statement also mentions devoting more resources and hiring a new cyber-security officer, making it unfair to characterize it as a "whatever" response.
Sure, it'd be nice if Cadillac was a little more proactive here, but keep in mind that the researchers hacked a Jeep (made by Chrysler), NOT a Cadillac (made by GM). The researchers think the Cadillac is also vulnerable based on its feature set, but absent a specific flaw to patch and given the short amount of time since the initial demonstration (less than two months), it's unclear what GM is supposed to do here.
My point wasn't about Chrysler specifically. My point was about auto manufacturers in general (and I've made this clear from the beginning). By pinning it to Chrysler alone, you're also reaching, I'd reckon.
Also, it's worth noting that the root flaw here, a hole in UConnect, is not limited to the Jeep brand. The article mentions tracking and surveilling other vehicles, too (particularly Dodge), which makes sense, seeing as a lot of recent Dodge vehicles have UConnect as well (per http://www.driveuconnect.com/features/uconnect_access/packag...).
> For starters, it looks like they [Cadillac] were contacted by Wired, not the researchers, so it's unclear whether they were contacted before the dangerous freeway demonstration took place.
The article doesn't actually say that. Infiniti was contacted by Wired according to the article, but the initiator of Cadillac's response isn't specified (as far as I can tell).
If they were contacted in the same manner as Infiniti, then it's implied that said contact happened after the wireless hack, since the Infiniti contact involves a notification that the researchers' predictions were "borne out" in at least one of the three of them (in this case, Chrysler).
If you want to get their attention, you demonstrate it on a test track, for a court, as part of a lawsuit against them, for introducing such dangerous features into their vehicles.
I've seen Mr. Miller present at Black Hat and have talked with him about my own vulnerability reports to automotive vendors (I worked with one about two years ago to fix a rather embarrassing remotely exploitable flaw). However, I do not support testing or demonstrating any of the flaws on open public roads. Unforeseen things can and do happen. If the reporter had been rear-ended, this wouldn't have gone well for either researcher, and that's enough right there to justify not doing this on public roads. The term "keep it to the track," often applied to the automotive racing scene, is more than applicable here. The general public already tends to have a negative view of security researchers, and performing "research" like this just reinforces that perception.
You did the right thing. This was completely irresponsible. I'm shocked that neither Wired, the author, nor either of the researchers has yet posted a "we screwed up, sorry" statement.
It's a shame, because this is an incredible story and the work they did was great, but what a completely reckless stunt they pulled. Totally unnecessary, too; the story would have been just as effective if the demo had happened on a test track or in an empty parking lot.
No, he did not do the right thing. Reporting them is completely wrong. When we report the people who protect us, well, this sounds like the plot of a movie. PS: in movies, usually a lot of people suffer before the resolution.
These people did the exact opposite. They put others in potentially mortal danger.
They could have killed someone's daughter, son, mom or dad.
Stop and think about that for 10 minutes before you continue posting with this unreasonable point of view. Would your mom's, dad's, or sibling's life be worth this test? Imagine they collided with this car and died. Close your eyes and imagine that for a moment. Imagine receiving that call. Going to the hospital. Seeing them, all torn up and suffering, before they die of their injuries.
And then you find out it was due to two fuckers who thought it'd be funny/interesting/whatever to disable a car remotely.
To be fair, a good proportion of the blame -- and a very good proportion of my subsequent lawsuit -- would be directed at the car company whose negligent engineering made the wreck possible in the first place.
Although I do agree with you, I modded you down and the GP up in this case because appeals to emotion aren't the answer. Your post is a form of the "If it saves just one child" thought-ending pattern.
You're repeatedly posting on this thread that "the manufacturers" have ignored the researchers' previous tests.
That doesn't seem to be true; Chrysler (the singular manufacturer involved) have, after being alerted to the hack prior to this report, already issued a patch. The researchers are practicing responsible disclosure, have the co-operation of the manufacturer, and are going public with the details at Black Hat next month.
This Wired piece is NOT part of that responsible vulnerability disclosure; it's a teaser to hype up their Black Hat talk. It was not necessary to get this piece in Wired to save "hundreds of thousands" of lives. I guess you could argue that the vividness of this imagery will encourage people to follow through on getting their Jeeps patched, so there's that, I suppose.
Since you've repeatedly made this claim, do you have a link to back up the assertion that there are manufacturers who were ignoring this research who will now pay attention because of the crazy stunt Wired pulled?
My remarks were strictly based on the claims of the article. Nothing more, nothing less. The article claims that the researchers performed prior tests, and that said tests were dismissed by auto manufacturers. If we're going to take one component at face value (the idea of the reporter putting others in danger), it would be unfair to not extend the same courtesy to the rest of the article.
Responding to a Hacker News discussion about the article based in information from the article seems quite reasonable. Perhaps it does not deserve phrases like "saying stupid, reckless things". Could you step back for a moment and consider how YOU want others to perceive YOUR postings? I, for one, am a big fan of civil discourse on HN.
Given this is in the context of people loudly condemning a poster here for actually being concerned about other human beings' well-being, I think your claimed concern about "civility" in this one instance is dubious at best.
If someone prefers people making stupid and reckless arguments to other people civilly pointing out that those arguments are stupid and reckless, I'm not concerned about their perception of me.
In that case, the commenter who started this whole discussion of whether or not the researchers' behavior was in the wrong should've also educated him/herself before making phone calls to law enforcement agencies based on the claims of a WiReD article.
Or is basing one's statements on the subject matter alone only valid when you happen to agree with it?
"Before working with computers I drove tractor-trailers for a while and was lucky to achieve a million-mile safe driving award. I have a pretty good idea of the dangers here and I know that stretch of road well, I've crossed it many times."
I agree with your move. Not sure who else was supporting you so I figured I'd offer my support. This was stupid as hell and I'm sure was likely set-up/suggested by Wired as a shock film.
I can't say that I completely disagree with you but I do lack your faith in the authorities' ability to respond appropriately. I wouldn't mind if Beavis and Butthead recalibrated their ideas about how to conduct a demonstration. I also hope that they've edited the video for maximum effect, and that the reality was a little less exciting; but the fact is that a stalled automobile is an everyday occurrence that is about as mundane as mundane gets. Minor events such as this cannot be prevented in all cases, and therefore drivers absolutely must watch and be prepared for such an event. It wasn't the safest thing to do, but it isn't outside the normal range of "dangerous" events that one will experience on their commute daily, often more than once daily. IMO it doesn't increase the danger nearly as much as traffic patrol conducting a routine traffic stop on the freeway. If we're prepared to accept traffic patrol on busy freeways, then I don't think it's justified to treat a rare, even if foolish demonstration such as this one as anything more than a nuisance.
There's calling the police, and then there's publishing their phone number in the hopes of directing an angry mob.
Angry mobs are dangerous and volatile and can push prosecutors to overreact. And prosecutors and politicians love to overreact when it comes to hacking.
He published the number of the local law enforcement agency. That is not directing an angry mob, that's helping people voice their opinion to the people responsible for enforcing laws.
Bogging down the local police dispatch isn't a responsible way to voice your opinion.
Imagine: you're a local and you're trying to call the police. But, you can't, because the number is busy. Or you wait forever on hold, because people on the internet are angry about a reckless driving incident that happened weeks ago and that the police already know about.
OP called the police, that's enough. They know about it now. If you want to express your opinion, write the editor of Wired or, if you're really angry, the local district attorney.
I think the level of exposure comes into play here. I'm not sure there's enough people that will make that call from the crowd here to cause an actual problem, but if it's an emergency response number better to not risk it, so I do agree with your reasoning to a point. Perhaps directing people to call a number that is not expected to handle emergency requests would have been more appropriate (and I see OP has amended his comment to remove the number).
Is it really inconceivable that someone who has literally driven tractor trailers more than a million miles has seen dozens of people killed in accidents?
No, not at all. I'm just a regular middle aged driver with perhaps 300k miles under my belt and I've passed numerous fatal accidents on the roadway.
I just sat back and tried to estimate (since, of course, I don't have a CB in my car and can't follow traffic details like a trucker), and I'd guess I've driven past at least 5 fatalities; and that's just the fatalities I've noticed (body on stretcher, read about it in the news later, etc.)
Whenever I'm hauling >10K lbs with my pickup truck, I get most nervous going downhill through mountains. The only thing between me and hitting something are a thin set of brake pads on all four disc brakes and the extremely limited compression braking on a ~6L engine.
>I exercised my judgement and decided to phone the local Highway Patrol office.
That was rash. You called the police within an hour of reading this article? You didn't think it's possible the writer is embellishing or exaggerating the danger he was in here? As of right now, everything they've done has been done in good faith to try to point out the need for extra security.
Also, if they get arrested, even convicted of a crime, then what? You have two extremely angry researchers who know how to hack your car, and what, you're hoping some jail time might help them see the error of their ways and use more caution in the future? You can't see any potential problems if one of them feels vindictive about being jailed over your phone call when they weren't trying to do anything wrong in the first place?
While you may question the morality of notifying the Highway Patrol, your second paragraph doesn't address that issue at all. In fact, if you think that's how the security researchers think and behave, then I'd argue notifying the Highway Patrol was definitely the right choice.
That doesn't really mean anything. They "cut the transmission", which could mean they forced the vehicle into neutral. The accelerator may still be supplying gas to the engine, but if the engine is not supplying power to the wheels, the accelerator will have no bearing on whether you accelerate.
Making factual statements contrary to how a situation was reported by those involved is fraught with pitfalls you can't anticipate, from things you don't know about. Until there's careful investigation, or the people involved recant or make factually impossible statements, you should be careful about making assumptions.
In any case, their reported actions are what people are upset about. If someone makes false statements about illegal activity and it results in the police showing up, they have only themselves to blame.
The part before that showed the tach dropping to zero. The cutaway to the journalist talking about how he didn't have control then shows the tach at normal. It doesn't take a rocket scientist to tell that it's staged.
That's easily explained. If the car is indeed in neutral, taking your foot off the accelerator would result in an idle tach reading (say 600-900 RPM), and pressing the accelerator would result in a higher reading. This of course has no effect on the power to the wheels, exactly as if you were stopped, put your car into neutral, and did the same.
I understand it's tempting to see a single bit of evidence and want to use it to invalidate an entire narrative, but is it so hard to accept that while editing could have made a situation less dangerous seem more dangerous, the inverse could be true as well? We really have no authoritative source of what happened other than the story put forth. The story may indeed be fabricated or subject to hyperbole in parts, but unless you have a source beyond what's here, you are not qualified to make that assessment from the little evidence presented.
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior.
But putting many lives in danger is considered acceptable behavior by security researchers? Does it actually matter that it was security researchers? Do security researchers working on banking software need to steal a million dollars in order to prove that they've found an issue? Would calling the police be acceptable in that situation?
Actual outrage is acceptable when there's actual danger. Calling the cops on a loud neighbor might not be acceptable, but calling the cops on a neighbor firing a gun in the general direction of your house certainly would be.
> But putting many lives in danger is considered acceptable behavior by security researchers?
We don't know, for sure, what happened. There might be some creative license in the journalism. There might be some omission of them talking to authorities (even if someone called the highway patrol and they said "oh, we don't know about this," all it proves is that there's bureaucracy at the highway patrol), etc.
Calling the cops is invoking a powerful and hard-to-control force. Unless you think that the cops and the legal system are capable of talking to people fairly and understanding security research and making sense of tech journalism and reaching a reliably just outcome, there are better ways to exert your outrage.
For instance, the researchers are very well known in the security community. They're not rogues or mobsters or fugitives. If they recklessly endangered lives, we can pressure them to turn themselves in to the legal system.
I happen to have more faith in the ability of the hacker community to fairly figure out what happened than the legal system to fairly figure out what happened. If I shouldn't, then we have a serious problem as a community.
Not saying I agree or disagree, but there are a LOT of people who would really disagree with this, and a lot more who would think that someone saying "why involve the actual people the public has chosen to deal with this, we can deal with it ourselves" would be very wrong.
It's hard to imagine how anyone familiar with any part of the police/legal system's pattern of clueless, ham-handed, hierarchy-ridden interaction with technology over the last 30 years could find justification to continue extending them trust.
So what are we supposed to do instead? Rely on self-policing by individuals or the industry? If the police or judicial system is off track, you attempt to correct it, not ignore it and route around it. That may unfortunately mean injustice for some while the correction is ongoing, but that's the normal state: there's always correction that needs to happen, and there's always injustice. The point is to minimize the injustice as much as possible. Ignoring the built-in mechanisms for steering the government in the right direction doesn't yield a better situation in the end.
"So what are we supposed to do instead" and "Do I trust the police right now" are different questions.
I don't like the fact that I trust self-policing by any community (even my own) more than the institution that is supposed to be doing policing. But whether I like it doesn't affect things.
I agree that it is important for us, as a free society, to fix policing. In the meantime, the best way to minimize injustice is not to invoke police when there is no immediate threat that can't be otherwise solved.
This is exactly why I don't think self-policing is suitable. If someone broke the law and endangered other people, you or I as community members should not get to decide that no police action is warranted when others were the ones actually endangered. Do the people on the road during this situation not have a voice? We are not incentivized correctly to handle this situation suitably.
The police and judicial system sometimes have conflicting incentives as well, but at least they are aligned more with the public good than ours are. There are laws and they are, for the most part, rewarded for enforcing them.
>Do the people on the road during this situation not have a voice?
Of course they do. They can call the highway patrol or 911 and report a disabled vehicle. Let's imagine how that call would go.
911: 911, What's your emergency?
Driver: Hi, there was a vehicle travelling slowly on the freeway with its emergency flashers on, I had to switch lanes.
Pause...
Driver: Hello?
911: Sorry, I was waiting for you to finish. Was there any other information? Did the driver or occupants appear to be in distress?
Driver: I don't think so, he was alone and appeared to be talking to someone, perhaps on hands-free, or maybe On-Star?
911: 911 is for emergencies only, in the future please report events of this type to local authorities' non-emergency number accessible via 411. Goodbye.
Exactly. In this situation, the people on the road have less information than we do after the fact. People deserve to know if they were put in danger and why. Since this was done on a public highway, anyone who uses public highways has a right to feel upset about the behavior.
I would like to think that a call to 911 with more information (which of course a fellow driver wouldn't have) would be handled differently:
911: 911, What's your emergency?
Driver: Hi, someone on the freeway has purposefully disabled their vehicle in a location without shoulders, and is slowing while driving, impeding traffic. I'm not sure if the power brakes or steering are functioning, but the driver is definitely not in full control of the vehicle.
911: We've dispatched an officer to your location. Has there been an accident yet? Has the driver recovered control of the vehicle?
With other recent news reports about insiders in internal affairs departments being fired for findings against the police, I believe we must assume that the police are merely self-policing in their own right.
If an angry bear is terrorizing your campground, then yes, call fish & wildlife so they can shoot it with a tranq dart and haul it off somewhere safe. In the meantime, though, do you go about your life as though nothing is wrong? Hell no, you get the fuck away from the angry bear!
And tossing chocolate bars into your neighbor's campsite in hopes that the angry bear will wreck their stuff is just not cool.
I agree, but I fail to see how that relates to the current context.
Unless the unruly bear is these security researchers, the fleeing campers are other security researchers in the same field, and their fleeing is them correctly assessing that some LEA is going to be taking down any bears nearby that even twitch wrong after this.
It is true that bad actors ruin it for everyone, and that's why it does not make sense to interact with cops if you have any way of avoiding them. You have no way of knowing which ones are the bad actors until it's too late to do anything about it, and you have no recourse once they have decided to mess with you. Furthermore, they have effectively unlimited resources when it comes to making your life difficult.
You seem to think that because the police are theoretically under democratic oversight, that one can safely interact with cops as though the nominal rules of engagement will restrict them, but even if - in the long run - it is possible to rein them in, the law enforcement system we actually have right now is unpredictable, unjust, and unsafe.
If you don't want to interact with the police, you shouldn't do illegal things, or present your actions as possibly illegal.
If you distrust the police to the degree that you think even if your actions weren't illegal you will still have negative consequences from interacting with them, definitely don't do the above.
When someone's actions extend to endangering the public to the degree we see here (which I think is obvious once you've watched the video), they are past any good will I would have extended them in not contacting the police for fear of an overreaction. Their clear disregard for public safety is reason enough for me.
Additionally, on the chance that it was entirely intentional and they are counting on the media and possibly even law enforcement response to help make this an issue, then they definitely don't need our restraint, nor do they want it.
Dude, have you not been paying any attention to the War on Drugs, or the Ferguson thing or really any of the Black Lives Matter stuff? The cops will fuck with you if they want to fuck with you, and they will write up whatever paperwork they need to write up to justify it afterward. The courts will believe their testimony by default. The only way to get around this is to release video afterward showing that the cop lied on the stand, and even then the best you can hope for is an overturned conviction; the officer is extremely unlikely to face any consequences. This is the system we have.
It does not matter that we theoretically have democratic oversight. In practice, what we have is a system where cops can do whatever they think fit and expect to get away with it. They are armed and dangerous; it is not safe to interact with them. It is not a good idea to call them, or to talk with them if someone else calls them, because they - the cops - have a clear disregard for public safety when it is counter to their own interests.
You see, the thing is, some of us still believe that the police are staffed by people, not some faceless conglomeration of drones all exhibiting the same horrible behavior; that while there are some, probably many, bad police officers and many systemic problems, they still serve a purpose; and that life without any form of law enforcement would be a big step back in many, many ways. The amount the media reports on something often has no bearing on how common it is.
If I was robbed, I would call the police. If I saw someone waving a gun around in public, I would call the police. If I saw someone using a car as a weapon, I would call the police. If I see a situation where people are endangering the public and someone might get hurt, I would call the police. Not doing so when I clearly knew I should would make me feel somewhat responsible for any negative outcomes otherwise.
I'm not really interested in continuing a discussion where the other side's position seems to be "the police are racist scumbags and they will ruin your life with the slightest contact, so don't call them on criminals." You might find that characterization unfair, but then again, you're the one over-generalizing using large media events as evidence instead of statistics.
Edit: Removed reference to ad-hominem, which wasn't factually correct.
> you're the one pulling an ad-hominem on the police
While I agree with much of the rest of what you write in that comment, this is not accurate: overgeneralizing a negative stereotype of someone other than the other party in a debate isn't "pulling an ad hominem".
Have you seen the damage being caused by "hackers" lately? I don't trust the police, but I don't trust my fellow hackers any more, especially when you consider their clueless, ham-handed interaction with the general public over the last 30 years.
Hackers can steal your money. You may be able to get reimbursed, depending on how they did it. The novel thing about this Jeep story is precisely the fact that hackers are demonstrating an ability to do something worse than stealing money.
The cops can wreck your house, break your stuff, take your money, shoot your dog, deprive you of your liberty, and - if they think they can get away with describing you as a threat - shoot you dead. Even if you spend the time and money it would take to prove in court that all of this activity was illegal and unjustified, most of the time you'll fail, and even if you succeed you'll never get anything back.
I don't trust hackers or cops, but I can sure as hell see which one is the bigger threat.
It's not really novel because hackers are having a physical presence, it's novel because hackers are attacking cars. Even then, it's novel because hackers are remotely controlling cars. They've been able to unlock your car and drive away with nothing but their Android phone for years.
Meanwhile hackers can break into the control systems for the power grid and shut down electricity, causing major damage. They can open or close hydroelectric dams, causing flooding and death. They can control hospital systems and kill or injure patients. They can control the airplane you're riding on while sitting in their seat. And all of these have been demoed at security conferences I've been to. I've seen these all in person.
There's a lot of physical harm that hackers can do. Sure, with a cop it's more personal since they're standing there in front of you pulling the trigger and a hacker doesn't even have to see your face.
Love them or hate them, both hackers and cops exist for a reason and are not going anywhere any time soon. One of the reasons hackers exist is to point out dangerous security flaws like this. One of the reasons cops exist is that sometimes hackers are just as dangerous as the actions they're trying to draw attention to.
If there's been journalistic licence employed then they should be able to demonstrate that to the authorities since the entire footage was recorded. So there isn't an issue.
It also should be noted (since nobody else has raised this point) that some people within the police force likely read Wired anyway. So complaining about the police involvement for a piece posted to a popular news site is like moaning that your boss reads your Twitter feed.
If only there was some agency or agencies responsible for investigating possible illegal activity, we could have contacted them instead, and then they could investigate the matter with the input of the judicial system, interview the people involved, and come to the appropriate response.
Slowing down and eventually driving off onto a grass shoulder wouldn't even crack the bottom 1% of crazy shit I've seen people do on highways, on purpose.
IMO, the danger to the public caused by the researchers' somewhat-controlled exploit is utterly dwarfed by the danger Jeep/uConnect is causing by directly connecting its cars to the Internet. If the researchers are successful in getting car manufacturers to remove features like this entirely, the net result will ultimately save lives.
> Slowing down and eventually driving off onto a grass shoulder wouldn't even crack the bottom 1% of crazy shit I've seen people do on highways, on purpose.
The article claims that the transmission was cut on a section of the freeway with no shoulder, so I'm curious how being stuck in the middle of the freeway translates to "slowing down and eventually driving off onto a grass shoulder." (And just because something "isn't the craziest thing I've seen" doesn't mean it isn't dangerous)
With or without shoulder, a stalled automobile is an everyday occurrence that drivers must absolutely watch and be prepared for. It wasn't the safest thing to do, but it isn't outside the normal range of "dangerous" events that one will experience on their commute daily, often more than once daily. IMO it doesn't increase the danger nearly as much as traffic patrol conducting a routine traffic stop on the freeway. If we're prepared to accept traffic patrol on busy freeways, then I don't think it's justified to treat a rare, even if foolish demonstration such as this one as anything more than a nuisance.
I agree that it probably presented some level of danger to the public, but I maintain that (i) the added danger was small relative to the normal everyday danger of driving with humans; and (ii) the media exposure they've achieved by doing this on a public road has the potential to pressure Chrysler to remove tens of thousands of hazards (read: vulnerable Jeep Cherokees) from the road, which could ultimately reduce danger and save lives. It's not clear-cut when you're dealing with a vendor who chooses to ignore and/or litigate upon an initial disclosure instead of fix their product.
I think the point is that you, and the researchers, have no right to make that decision for other people. It's illegal to purposefully stop in the middle of the highway. It's illegal to cause someone else to do so as well.
People who routinely test things that have the capability for real harm learn to take precautions for as many of the things you don't think of as they can. For example, the Mythbusters routinely test things with cars and other bits of machinery that can cause physical harm if something goes wrong. They also routinely retire to abandoned air force bases, remote locations, and the salt flats to test things.
Point taken. Maybe I'm just being horribly jaded and/or emotionally invested from too much driving, but after a while it gets hard to discern malice from incompetence, and you start (wisely or unwisely) worrying more about actual outcomes than intent. If I'm hit by a security researcher or a drunk driver or a million miler who had one black swan of a bad day, the result is the same to me: I'm hurt or dead.
But perhaps a little more objectively, there's certainly a moral hazard here; if every security researcher did this on public roads it'd likely be chaos. An appropriate response, IMO, would be for the police to make a phone call and tell them not to do it again.
I agree, I don't necessarily want them to go to jail, but I would be happy if they were suitably scared shitless for a while as the enormity of how bad they fucked up (if the facts are as they seem) hits them. Part of the benefit of the extreme reactions from the people here is that future security researchers working on interactions between software and hardware appliances that may pose a physical threat will have an example of exactly why you should show your proof in a controlled environment.
It's actually not that different than pure software security research. You don't show a POC for your new DNS exploit by doing it against Comcast or AT&T public DNS servers without expecting some blowback. You set up a test environment.
> the danger was small relative to the normal everyday danger of driving with humans
The danger of a car having its transmission crap out on the freeway is non-zero, yes. On the other hand, they explicitly cut a vehicle's transmission on the freeway. The danger for the people in the immediate area of this vehicle was increased because the situation (a cut transmission) went from "maybe it will happen" to "it is definitely happening." At that point, whether or not an accident happened depended entirely on the skill and attention of the drivers around the vehicle, something completely out of the researchers' control.
That photo is next to the parking lot where he "found an empty lot where [he] could safely continue the experiment" and then "they cut the Jeep’s brakes, leaving [him] frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch"
FYI, here's a link to a video[1] of the situation (which bengali3 posted up-thread). The grass shoulder was later, cutting the power was actually done on a fairly busy stretch of highway with no shoulder. The reporter's words during that incident are particularly telling.
Demoing it on a test track with no other vehicles and a volunteer driver with helmet and roll cage -- that'd be acceptable, maybe, with suitable safeguards.
But doing it on the open highway with unaware third parties driving past, merely telling the test guinea pig "not to lose control" while being blasted with cold air and loud noise, having the controls disabled, and visibility impaired? That's gross recklessness with public safety. (Here's a clue: you could have put me in that car and given me all the warning in the world and I could not guarantee maintaining control or not causing a potentially fatal accident at 70mph under those conditions.)
You shouldn't run experiments on big powerful machines in public places where you can't keep by-standers out. Gross ethical breach. I just hope the journalist is exaggerating or making things up.
To me it seems like it is gross recklessness with public safety from car manufacturers. The car manufacturers are risking lives of all these people by not keeping the air gap between CAN and Internet...
It can be both; security researchers don't get a free pass just because they are exposing a wrong.
Had someone died, you might (in countries that have it) see a corporate manslaughter charge against a company that ignored security warnings. You absolutely would see charges against the researchers and the journalist for their reckless disregard for the lives of others.
Yes, researchers don't get a free pass. Nothing is free. They've risked lives and their reputations to save lives. It has happened before in history, and hopefully it will happen again. Sometimes it is worth it.
(*) Without risking lives, there wouldn't have been a video documenting these life-threatening vulnerabilities in the cars.
Rubbish - there would be a video of them on a test track. Much like all the videos proving quite how fast the cars can go. You don't drive a McLaren F1 at 240mph on the Interstate and post it to Wired expecting to get away with it...
They did not only risk their own lives. They put other people around them at increased risk when it was unnecessary. That is the argument: it was not necessary. This same demonstration could have been done on a track or in another controlled environment where the public was not in danger.
Had my car stall on the highway once. Pretty scary, because you lose power brakes and power steering as you're trying to pull over.
Was it a hacker? Nope, just a dumb mechanic that got trash deep into the air intake during a routine oil change.
How many (dumb mechanics)*(routine oil changes) are there in this country? Five or six orders of magnitude more than auto hackers, which is why I don't see any harm in one more (where the driver knew ahead of time it was going to happen).
The difference is your 'dumb mechanic' made an unintentional mistake.
These people, on the other hand, knowingly and deliberately disabled a car on a highway. Yes, they had a plan, but they are still running their little experiment on other unwitting drivers on the highway. I don't consider the manner in which they ran their test to be ethical; it should have been performed on a closed track.
I will point out that the car didn't stall. Power brakes and power steering weren't affected. The transmission was forced into neutral, which did affect his ability to accelerate. Still reckless and stupid. And probably worthy of a call to the police -- though debatable. I don't understand why they didn't just test this in the driveway or on jackstands or at a track or on a dyno. Driving the car on a public street was not needed.
Your recklessness is judged by the extra harm risked.
You can't just wave it away because other risk is more dangerous across the entire nation. Under that standard, one little murder is a rounding error compared to the 2.5 million who die each year.
The fact that the driver knew ahead of time does mitigate the danger but does not excuse the remaining danger. The fact that there are already lots of dangerous stalls seems completely beside the point to me.
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior. A much less aggressive (and thoughtful) move would be to contact the researchers directly. Wow.
What would contacting the researchers achieve? They arbitrarily did the experiment on a public highway "to make the headline more shocking".
As much as I support any kind of security research, and as much as I support getting the attention of people to raise awareness, contacting the authorities was the correct move; a public highway is not a research lab without the proper permission from authorities.
>Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior.
There is an even bigger problem. These researchers, even if they were negligent, are far more at risk of legal punishment for creating a small risk for the sake of raising safety standards overall than the people who choose to cut security funding and put orders of magnitude more people at risk for the sake of making more money.
Isn't it odd that we have a legal system where those attempting to expose and fix the problem for the sake of safety face far greater legal trouble than those who knowingly allowed the problem to occur in the first place in order to increase profits?
I think researchers should have complete, 100% legal cover if they test on private vehicles and private roads.
But as someone who says the car manufacturer ought to face legal consequences for failing to fix a remotely exploitable stall-out in a timely manner (even without demonstration of anyone being harmed), I also say that people who fuck with moving cars on the road are a menace as well.
Unfair but also of great benefit to the corporations creating products with security holes. It reminds me of 'identity theft' where the average citizen bears the risk due to the poor verification practices of loaning institutions instead of the institutions who create the risk as a byproduct of profiting off making the loans. If anyone thinks this is accidental, you should contact me as I have a great deal on this new bridge that is about to hit the market.
A small risk? Disabling a car on a busy highway is not a small risk.
What about this "experiment" could not be done in a controlled environment on a track… or a country road… or an empty parking lot?
I suppose we could just have infectious disease researchers set up shop on a street corner by this logic. Whatever! It's just a small risk! They're doing it for the sake of increasing safety standards!
Small compared to the risk of under-funding security research as a cost-cutting measure, knowing that weakened security will allow these exploits to occur.
Using violent methods (such as intentionally sabotaging a car on a busy freeway with someone in it) to get media attention in order to further a political goal sounds a lot like the definition of terrorism.
Only if your sense of scale has stopped functioning. It is a dangerous journalistic prank that probably does deserve a telling off from traffic cops, to much the same level as someone who is drunk driving. But I think trying to classify it as terrorism is not helpful or particularly sane.
Seeing as how drunk driving kills a very large number of people every year and is now punishable by imprisonment and extremely steep fines, you might be onto something here.
If people were screwing with cars like this as often as drunks were driving, I think you would end up with mortality figures that were at least in the same ballpark.
To my mind, it would have to be some form of attack: if untargeted, against at least hundreds of cars; if small, it would have to be targeted and strongly political. Dangerous stupidity in a single instance, for the purposes of having a good press story, doesn't qualify as either causing terror or intending to, notwithstanding the broad legal definition that has been adopted over the past 15 years.
OK, just making sure I follow: They should exploit security holes and put people at risk to ensure that security research is not underfunded, which could lead to someone exploiting security holes, which would put people at risk.
Your argument would make sense if all exploits were equal. Think of it more like infecting people with weakened/dead forms of potentially deadly diseases so they will be better protected against that disease. The weakened form, while it may not be risk free, is not equal to the harm of a full-blown infection.
> Think of it more like infecting people with weakened/dead forms of potentially deadly diseases so they will be better protected against that disease.
If these guys want to be regarded as researchers, they need to act like them and be accountable like them. No ethics committee would ever approve a test like this.
The IRB as it currently stands is too strict with its regulations. Also, why should the researchers be regulated when the ones producing the things that put people in danger in the first place are not regulated (or are regulated by bureaucrats who couldn't tell you the difference between a buffer overflow and a SQL injection)?
So they should get away with it to ensure that their funding isn't cut? I'm intrigued what you'd consider a "big enough" risk that they should face punitive measures.
It doesn't strike me as so odd, and your framing of the situations doesn't strike me as particularly conducive to honest discussion.
The people who "choose to cut security funding" (I'm assuming you mean congress?) acted within the bounds of the law and within their power as elected officials. They broke no laws and your disagreement with the results does not make them criminals.
I don't know if these researchers broke the law. All that's happening now is they are being investigated. If they did break the law, then it seems to me perfectly logical that they would be in "far greater legal trouble" than someone who didn't.
TL;DR it's not odd that someone who broke the law is in more legal trouble than someone who didn't
I understand why they did it this way, and I'm glad that this issue is getting more publicity. But that does not mean that the means they chose are justified by the end of greater publicity. They simply did not have the right to endanger the other people on that road without their consent.
Test pilots work from very careful plans in order to gradually test the envelope of huge powerful machines filled with explosive fuels.
Their test-engineers don't say "hey, we're going to do some stuff, but try not to kill anyone".
Gross negligence.
Perhaps you are simply unaware of how dangerous the situation was? Several experts (ex-truckers) have described how they have seen people killed in circumstances like this. If you're being intellectually honest, that should inform your responses.
What would be more thoughtful is for the researchers to plan this better. I think calling the police was the right action in this case. Why do researchers get a free pass? If it were some pranksters publishing the exact same thing on YouTube, would you still consider calling the police unacceptable? No, what I find unacceptable is your hypocrisy in trying to shame the guy who did the right thing.
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior. A much less aggressive (and thoughtful) move would be to contact the researchers directly. Wow.
Reminds me of people who will call the police on a loud neighbor instead of just, you know, talking to them first.
No, these are knowledgeable security researchers doing serious work who are probably amenable to discussing their research methods with a concerned party via email or phone, instead of the concerned party immediately phoning the police.
I don't agree. They breached their own covenant, which doesn't afford them the benefit of the doubt you might assign to 'knowledgeable researchers doing serious work': "We won't do anything life threatening." Then they did precisely that: they disabled the transmission on an uphill freeway ramp with no shoulder, with a very large truck bearing down on the Jeep. Trucks cannot stop quickly. Substantial danger for the driver, the trucker, and everyone around them.
Doesn't matter who they are. They have a loaded gun in their hands. Use it somewhere private or not at all. Anything else is unacceptable and extremely dangerous.
Well, let's just have these guys arrested for marauding around the freeways with a "loaded gun" and we'll let the Russian hackers figure out the Cherokee's vulnerabilities.
You honestly don't think these "security researchers" did anything wrong? They could have easily demonstrated the same vulnerability on a private course and made the same point, no need to recklessly endanger everyday people.
You have not met our neighbours! Drug dealers, they run a vehicle repair 'service' from their garden despite local council enforcement notices. They regularly have fights in the street, my wife has been verbally abused and followed on numerous occasions, and there have been two police 'drugs raids' that have resulted in absolutely nothing useful happening. I could go on, but it's not relevant.
This has no bearing on the original issue ('Calling police on security researchers'), but I'm just saying that you can't debate nicely with everyone.
You're certainly right, you can't debate nicely with everyone. Is there evidence here that the researchers in question aren't open to critique or aren't willing to discuss the safety issues involved with their research methods?
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior. A much less aggressive (and thoughtful) move would be to contact the researchers directly. Wow.
So, you don't care about the fact that this experiment on public roads could have killed people? Just because it's for security research, it's OK to recklessly endanger lives? What wows me is your cavalier attitude. I'm glad he informed the police and I hope they face repercussions. What they did needlessly endangered people's lives and public safety to add a sensational bit to a story; I find that way more "aggressive" than informing the proper authorities of those actions.
This is like testing the new trigger safety on a gun by firing into a crowd. It's incredibly negligent and unethical. That section of I-64 is very busy and the police should get involved.
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior
> they cut the transmission. [...] Immediately my accelerator stopped working. As I frantically pressed the pedal and watched the RPMs climb, the Jeep lost half its speed
In what world do we not call the police on this kind of behavior?
>unlike the typical TPMs that only allow vendor software to be authenticated, these TPMs would allow the user to directly authenticate the firmware. If you know the firmware is good, then each layer can validate the next layer up all the way to the OS.
Nothing novel there in terms of needing some "new" TPM; OEMs simply choose to lock down their boot chain. Most secure-boot implementations are probably minimal, supporting only the use case of secure/trusted boot (device/chip/OEM key) xor untrusted boot (no key).
If both are supported, whatever functionality relies on OEM firmware or the chain of trust gets disabled in an untrusted-boot situation (like fastboot oem unlock on some Android devices).
It may be tricky to enable certain desirable/required features if the user wants to run their own firmware.
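For what "each layer validates the next" can look like, here's a toy sketch of a user-anchored chain of trust. Real hardware verifies signatures in ROM against an enrolled key; the pinned SHA-256 hashes below are only a stand-in for that, and every image and name here is invented for illustration:

    import hashlib

    def digest(blob: bytes) -> str:
        # Measure a firmware image; real systems would verify a signature
        # rather than compare hashes.
        return hashlib.sha256(blob).hexdigest()

    # Stand-in firmware images for each boot stage.
    bootloader = b"bootloader v1"
    kernel = b"kernel v1"
    os_image = b"os v1"

    # The user directly pins the hash of the first stage; each verified
    # stage then vouches for the expected hash of the stage above it.
    user_pinned_root = digest(bootloader)
    next_expected = {
        digest(bootloader): digest(kernel),
        digest(kernel): digest(os_image),
    }

    def boot(stages):
        current = digest(stages[0])
        if current != user_pinned_root:
            raise RuntimeError("bootloader does not match user-pinned hash")
        for stage in stages[1:]:
            if digest(stage) != next_expected.get(current):
                raise RuntimeError("chain of trust broken")
            current = digest(stage)
        print("every stage verified, up through the OS")

    boot([bootloader, kernel, os_image])  # verifies
    # boot([b"evil loader", kernel, os_image]) would fail at the root check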
>I have yet to hear of a system that allows the user to directly authenticate software/firmware at the hardware level. Is anybody working on research of this nature? Or are there insurmountable problems with this approach?
I think chromebooks/chromeOS folks have been looking at this. Not sure of the current state of things.
p.s. TPMs kind of suck if they are not able to be updated OTA.
The security researchers behaved irresponsibly by performing this demo on a public highway, especially one with no shoulder. They could have driven around a private track just as easily. Performing the test on a public road endangers the journalist, other drivers, emergency assistance personnel, etc.
I'm not for agitating the authorities on this one. Sometimes security research like this requires a little performance art to get the message across. Guys like Elon Musk just need a proof of concept to modify their designs; incumbents like Fiat Chrysler need exactly this. Remember Toyota?
> Calling the police on security researchers...I honestly cannot believe this is considered acceptable behavior.
Who cares what their job title is? They deliberately blocked visibility and then cut the transmission of a vehicle being driven on a public road in traffic. That is well into the territory of criminal negligence.
Calling the police was completely inappropriate, but downvoting the comment as a way to signal your disapproval with his action in the real world isn't helpful. The comment itself is well written, on topic, and leading to good discussion.
I agree with what others have already said: Since nobody was actually hurt he should have contacted the researchers to make his point.
Right, yesterday my neighbor shot a gun at me several times but since he missed and no one was hurt, it would have been utterly inappropriate for me to contact the authorities.
One problem with this reasoning is that the researchers really didn't know what they were doing with 100% certainty. Their code could have accidentally affected the stability control subsystem that most cars have nowadays -- the one that's designed to apply full braking to a single wheel to recover from a skid. In fact, just corrupting the data from the steering-wheel angle sensor could have had that effect (which I personally find rather terrifying in itself.) Good job, guys, now you've caused a 70 MPH rollover in traffic.
The right way to do this would have been for the researchers to call the police up front and arrange a demonstration on a closed road with police escort. That would have lent the video more credibility and shielded the researchers from liability, while addressing any concerns about safety or ethics.
Firing a gun at someone is dangerous. A car slowing down and/or with reduced visibility on a busy highway is dangerous. When either of these situations happens by accident, we understand that there's not a lot to do about them because there was no intentional behavior that needs correction. When they are purposefully done, that's endangering people, and is unacceptable behavior that needs correcting. In this respect, they are no different.
It is a never-ending discussion around here, but my takeaway is that votes express opinion as well as the quality of the posting.
This is not an intellectual debate club where people take pro and contra sides and points are awarded based on the rigor of the argument; discussions around here are about real-world problems.
And somebody calling the cops on security researchers just because he read an article on the internet is, in my view, highly questionable behavior for somebody familiar with the tech community.
I was thinking about how dangerous it was while I was reading it too, but I came away far less concerned than you I guess. The deceleration on the highway was the most worrisome, but it's not even in the ballpark of common driving hazards like distracted folks on cellphones or flying debris. A crash from such a thing is unlikely and the inconvenience is pretty minimal.
Even you, the busybody who called the cops because you read an article, said "What was the plan if the trucker approaching at 70mph hadn't seen the Jeep stalled early..." which implies that the trucker would have been following too closely or not paying attention (or both).
It's worth pointing out that the driver was aware of the situation and they didn't do anything dramatic like lock the brakes or throw the car in reverse. They chose a gentle deceleration in a stretch of road that had no shoulder to make it feel dangerous, but, on the spectrum of hazards that most drivers face every time they take the car out of the garage, this is pretty tame.
The fact is, had something happened, it wouldn't have been the disabled car that was at fault.
I think the researchers are in the clear, and for you to have read the article and been bothered enough to call the cops (and post the number for, presumably, the convenience of other hyper-sensitive folk who might otherwise just go back to staring at the neighbor kids from their bedroom window with their phones in their hands and 911 on their speed dial) is nuts.
> "What was the plan if the trucker approaching at 70mph hadn't seen the Jeep stalled early..." which implies that the trucker would have been following too closely or not paying attention (or both).
Say there was a person working at a grown-up lab that deals with traffic safety. Like the University of Michigan Transportation Research Institute http://www.umtri.umich.edu/ .
The person wants to know what happens when someone slams on their brakes on a 70mph road. He says "don't worry, if anyone hits me, it will be their fault, because they were following too close."
They intentionally disabled a car in an area with no shoulder. They intentionally introduced a large hazard on the highway. If something happened, they would have been partly responsible. You do not stop on the highway just because you feel like it.
They intentionally put people at risk when there were better, legal alternatives. Responsible researchers do not do tests that subject the general public to risk.
I do think that educating them with regard to better choices would be helpful, but they appear to have committed an offense and documented it on camera in the news. I think they were going to end up in trouble one way or another here.
When it comes to ethics and moral responsibility, intent and agency are everything! For ethical purposes, it is similar to injecting a person with a flu virus to test whether their acai-berry diet has improved their immunity. Yes, they could have caught the flu even without your intervention, and also spread it to others, but as an agent you have increased the probability of flu occurring and spreading in the community to close to 100% when it could have been very close to zero.
Similarly, the researchers have increased the probability of a crash from the near-zero probabilities that are typical of actuarial tables to close to 100%.
It doesn't appear that the researchers have access to the car's firmware, so how can they guarantee their code will not have other unpredictable effects? Automotive parts and firmware are put through endless testing before being allowed onto public roads. Why should I have to be at an unnecessarily increased risk of an accident when this could've been done on a track?
They did NOT "slam on their brakes" and it was NOT a large hazard. I understand that, if something had happened, they would have FELT partly responsible. But it's the kind of responsible that people with a lot of bumper stickers experience when someone wrecks because someone was paying too much attention to the stickers and not enough attention to the road.
Yes, they created conditions that might have made it possible for a lousy driver to wreck a car, but, no, they did not do anything inherently dangerous. A driver--ANY driver--is expected to be able to handle gently decelerating cars on the highway. They should also be able to pay attention despite big billboards, confusing traffic signs, and attractive people gallivanting on the sidewalks.
The average traffic jam is much more likely to cause an accident, but it typically doesn't and, when it does, we blame the driver that rear ends someone, not the masses of people who have actually stopped on the highway, often NOT gently.
Wow, I'm shocked at how contentious this comment is. Count me in the "thanks for being a responsible citizen" column.
I feel like there's a lot of cargo cult thinking going on here. The situation is _almost_, but not quite, like a lot of other ones where the security researcher is unreasonably blamed. For example, I could easily see some people being up in arms about announcing this exploit at Black Hat.
But that's not the case here. I have a healthy fear and respect of a ton of metal flying down the road at 70mph. And this stunt, done just to generate headlines, was needlessly reckless. It could have just as easily been demoed in a private lot or something.
> It could have just as easily been demoed in a private lot or something.
It was previously demoed in parking lots and other controlled environments by these researchers, according to the article. Said demonstrations were ignored by the auto manufacturers, with some manufacturers - like Toyota - trying to claim that their systems were still "secure".
The public and the manufacturers need a proper wakeup call. My fear is that even a "reckless" test like this one isn't enough of a wakeup call.
Life is hard. Sometimes people don't pay attention. Pulling irresponsible stunts isn't an appropriate response "to make people pay the proper amount of attention."
If someone had died from this stunt, the total number of deaths from remote hacking of cars would be 1.
NB: I highly favor a bounty system where someone who can demonstrate the ability to take over a car without touching it gets paid lots of money, and if the company fails to fix it they get fined even more money. But "someone else is doing something bad, too" is never a good justification.
> If someone had died from this stunt, the total number of deaths from remote hacking of cars would be 1.
If this stunt had never happened, we'd be in a position where some less-scrupulous actor would demonstrate such exploits on a much bigger scale. I can guarantee you that the total number of deaths from remote hacking of cars would be far greater than 1.
If we're going to play the "OH NO THINK OF THE CHILDREN^H^H^H^H^H^H^H^HHYPOTHETICAL DEATHS" game, then let's put this into some goddamn perspective, eh? 1 v. hundreds of thousands (if not millions) that are currently vulnerable to remote hacking right this very instant.
In all actuality, of course, that "1" death was highly unlikely; at most, we'd probably see a few dented bumpers and a couple grand in car repairs. Maybe somebody with whiplash.
But you're ignoring the fact that this exploit could have been demonstrated in a safe manner on a racetrack or similar with just as much effectiveness.
It could have been demonstrated, yes. It's the effectiveness that's in question, seeing as similar demonstrations weren't particularly effective.
And yes, they could've easily done this demonstration with better safety constraints (particularly regarding communication between the researchers and the driver; said communication was seriously impaired), but the implication is that the researchers believed a "live" test to be necessary to actually get that attention. The point is less "this is what happens to your car" than "this is the sort of danger your car poses to the general public".
My fear, of course, is that even this won't be effective. Hopefully proper basic security measures (like, say, not connecting the transmission, brakes, and steering to the bloody Internet) will be taken seriously before some multi-fatality catastrophe happens because of such security flaws.
Presumably you are leaning on this paragraph when you say that their earlier attacks were ignored?
When they demonstrated a wired-in attack on those vehicles at the DefCon hacker conference in 2013, though, Toyota, Ford, and others in the automotive industry downplayed the significance of their work, pointing out that the hack had required physical access to the vehicles. Toyota, in particular, argued that its systems were “robust and secure” against wireless attacks. “We didn’t have the impact with the manufacturers that we wanted,” Miller says. To get their attention, they’d need to find a way to hack a vehicle remotely.
But you are apparently ignoring this paragraph, which discusses Chrysler responding to the hack, as I read it, prior to the events in the article:
Second, Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference. On July 16, owners of vehicles with the Uconnect feature were notified of the patch in a post on Chrysler’s website that didn’t offer any details or acknowledge Miller and Valasek’s research. “[Fiat Chrysler Automobiles] has a program in place to continuously test vehicles systems to identify vulnerabilities and develop solutions,” reads a statement a Chrysler spokesperson sent to WIRED. “FCA is committed to providing customers with the latest software updates to secure vehicles against any potential vulnerability.”
The way I put the information in those two paragraphs together, it's the fact that the attack can be done without physical access to the car that got the attention of Chrysler, not the publication of a stunt in some web rag.
Even Chrysler is ignoring the root problem that was demonstrated even with wired access: that should the outermost layer of security be compromised in a modern car, the whole car is likely compromised due to a lack of separation between the car's inner workings and the numerous attack surfaces. That's why the paragraph about Ford and Toyota is very relevant here; once that wireless exploit is found (and believe me, it will be found; this is a question of when, not if), drivers of Toyotas and Fords are hosed. Being anywhere on that list of "hackable" cars [0] should be recognized as a significant problem, but manufacturers are continuing to blow off the core problem and only react to specific breaches.
Basically, folks like Chrysler, Ford, and Toyota (and other mentioned manufacturers, too, like Cadillac) are relying on white hats and grey hats to be the ones finding the zero-day exploits in their wireless systems. And even when those exploits are found, they're being "addressed" with half-assed solutions like requiring an upgrade via USB (never mind that if a remote attacker can hijack the brakes and transmission, of all things, an OTA upgrade should at least be possible).
In other words, I'm not ignoring Chrysler's "response" at all. Rather, I'm noting that their response isn't actually indicative of the attitude shift that's actually necessary to prevent death and maiming of drivers.
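To make the "separation" point concrete, here's a toy sketch of the kind of gateway being argued for: one that forwards only an allowlisted set of CAN arbitration IDs from the internet-facing infotainment bus to the safety-critical powertrain bus. The IDs and frames are invented for illustration, and real gateways are far more involved:

    ALLOWED_TO_POWERTRAIN = {0x1A0}  # e.g. a read-only status query

    def gateway_forward(frame):
        can_id, payload = frame
        if can_id not in ALLOWED_TO_POWERTRAIN:
            # Drop everything else: a compromised head unit should not be
            # able to address the brakes or transmission at all.
            return None
        return frame

    print(gateway_forward((0x1A0, b"\x01")))  # forwarded
    print(gateway_forward((0x2B0, b"\xff")))  # dropped: not on the allowlist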
You called the cops on two security researchers and a journalist, because you disagreed with their methods and weren't sure what their plans were and what authorities they'd talked to? (And not just any cops, the cops in St. Louis, for bonus points.)
Are we still on Hacker News, or is the transformation to Enablers of Traditional American Power Structure News complete?
Are you not supposed to report dangerous, and possibly criminal, situations to the authorities? A witness to such events cannot know whether it has already been reported or is known about; are they supposed to just go on with their day? Yep, just drive by that car accident with possible injuries without calling it in, because I'm sure someone else has already taken care of it. What's the worst that could happen? The authorities tell you they are already aware of the situation, thank you? Are we now people who should no longer care what's going on around us, careless souls too worried about ourselves to care about fellow human beings?
Sorry, but the cops lost that trust from me when they started sending SWAT teams and abusing power way too much. Since then, calling the cops has become a last resort for me. I don't trust ANY of them because of the few aholes who abuse their power, mainly because of their policies of shutting up and protecting each other. Until they fix this, I will not trust ANY cop again.
In my life I've experienced 2 burglaries, 3 vehicle vandalisms, and one time I was seriously assaulted in a beach town by 10 guys with a gun. In every one of those experiences, the police were unprofessional and completely useless.
I will still call the police in the future, but just so they can fill out a police report for insurance claims or potential lawsuits. Other than that, I don't expect the police to do anything unless they were on the scene and saw somebody break the law.
And even then, one of the times when my car was vandalized, the cops were there and they filed a police report and told me to try to figure out who will pay between the two dudes that jumped on my car and if that doesn't work, give the officer a call and he will help me with the next steps. Obviously those guys didn't wanna pay so I tried to get in touch with that cop and he was avoiding my calls. I called over 10 times over the course of a couple weeks and he was never there and never returned my calls.
Another time my neighbor was throwing eggs at my motorcycle for 3 nights in a row and I got him on video, call the cops and they come by about 12 hours later and just laughed at the video, and then all of a sudden got a call to something more important and bounced. It may seem funny, but me having to pay someone $300 to clean egg out of all of the fairings and tubing is not funny. At the very least, do your job and file a fucking police report.
I'm not sure how this would make calgoo change his mind.
* Cops are under no obligation to go into harms way to protect the public.
* Cops primarily exist to collect evidence for prosecution after the fact.
* Just because criminals and crimes exist doesn't take away from cops' bad behavior.
* If calgoo becomes the victim of a crime, cops are unlikely to be able to make him whole again - for bodily injury, prosecution of the perpetrator can't restore his body or life - for property damage or theft, police usually can't be bothered with the small stuff.
There are problems with power and corruption... but
Have you seen what happens to communities when police withdraw? They become overrun by gangs and other less accountable organizations.
Even favelas where cops act paramilitarily, say in Caracas, people still want cops because no cops is usually worse, unless you get a private version of cops, which is essentially cops by another name.
Not to disagree with you, but to add a data point (or rather an anecdote) - I heard a few people from Moscow claiming that they're more afraid of cops than criminals.
(Of course, even if it's not an exaggeration and even if it reflects the actual probabilities of getting hurt by cops and criminals there, it does not follow that things wouldn't be even worse without a police force.)
I fail to see how that bears any relevance to anything I stated. Not that I necessarily fully disagree with your sentiment, but you're presenting a related yet different topic.
There are plenty of safe ways to accomplish this kind of demonstration. The fact that they chose to do it in a way that endangered the public is, in fact, criminal.
Being a security researcher or journalist doesn't give you a license to put the public in physical danger.
I'm sure the police are even more ill-equipped to understand the ramifications of this demonstration and will over-react and start jailing anyone with a laptop and suspicious intent.
I can't wait for the pathetic outrage when "racial profiling" now means harassing white kids with laptops that fit the profile of hacker.
This is a matter for a company like Google to take on politically, not some beat cop in St. Louis of all places.
I totally agree that there's a problem here, but I strongly disagree with the method of action.
Why not engage the FBI? This is not an issue specific to St. Louis. Throwing some researchers in jail solves nothing. This is a way bigger deal than some local offense.
You need an agency with the ability to see the bigger picture.
How is it not given they could have disabled any vehicle with that vulnerability?
This basically suggests thousands of cars could be driven off the road and deliberately crashed right now. I'd say that's a threat that they need to deal with at the national level on an immediate basis.
Would you rather wait for a malicious actor like North Korea to get involved before the FBI makes a move?
What I'm trying to say is I'd rather the FBI gets involved and works with these researchers to develop an expedient fix for this problem than some beat cop in St. Louis to bust them and throw them in jail where they help nobody and the threat remains extremely grave.
As much as it seems over the top, those researchers could have hurt people.
Calling the police will not have them go to jail or have their data deleted. It might (rightfully) get them a fine. It will however ensure that their next experiments are done in a safer, more legal way.
Calling the police isn't all about emergency. You can call them to talk about issues that worry you, such as this one. They will take care of bringing the issue to the right entities; it's their job.
> Calling the police will not have them go to jail or have their data deleted.
Do we read the same Internet news? Having seen the way the law enforcement + prosecution machine works in cases like Aaron Swartz, I would be surprised if these researchers did not spend time in jail, and didn't at least face charges of some Serious Nature.
If there's a case to be made, the police will build it. If they build it, the DA will prosecute it, and there could be (is going to be?) things like charges under the CFAA, since they could certainly try the perspective that the access needed to be authorized by the auto manufacturer, rather than the owner of the car.
I could totally see how invoking the power of the police on these researchers could, through the kind of progression we've seen many times before, destroy their lives. I really hope that's not the case -- I'd much rather they got some kind of warning like, "Don't you ever do this on a road with other people on it again". Even so, I agree with many others that their actions were pretty reckless, more so than I realized when I first read the article. This is the kind of thing that should have been done on a private test track, and doing it around others was reckless and negligent.
> Having seen the way the law enforcement + prosecution machine works in cases like Aaron Swartz, I would be surprised if these researchers did not spend time in jail, and didn't at least face charges of some Serious Nature.
Having considered how dangerous their little stunt was, I'd almost expect them to be sentenced to some gaol time. What they did was pretty darn Serious!
If prior HN articles are anything to go by, it's a matter of time before SWAT kicks down their doors, beats them up a bit, and maybe even a few officers "fearing for their own lives" (yeah right) take a couple of shots in "self defense" against unarmed nerds.
You're delusional if you trust in a law enforcement agency to take reasoned and measured action in any situation.
> Calling the police will not have them go to jail or have their data deleted. It might (rightfully) get them a fine. It will however ensure that their next experiments are done in a safer, more legal way.
Or not done at all.
I may not fully agree with their methodology but I'm thankful this work is being done by people with good intentions instead of having these issues come to light when people with malicious intent find the vulnerability and kill or maim countless people.
Calling the police is just going to discourage more researchers from even attempting "safe" experiments. Calling the Highway patrol is like trying to open an egg with a sledgehammer.
Can you expand a little on how calling the police about something is more or less self righteous than doing the (at least somewhat dangerous) experiment on a public highway?
It's definitely elementary-school stuff to snitch and play the informant on your colleagues. The worst part is that this was coming from a stuck-up old fart who thinks he can police everything and everyone around him and act more royal than the king himself.
This isn't really engaging with his action.
Specifically, he called the cops because he believed that their methods put people in danger of physical harm. This objection isn't coherent without an argument either that:
1) He was unreasonable in his belief that they'd put people in harm's way.
or
2) It is not appropriate to contact law enforcement as a result of observing one person put another in harm's way.
I'm guessing you're arguing both, correct?
aside: He contacted the state highway patrol, not the local St. Louis police.
aside2: Hi Geofft! How are things going?
If he was actually concerned, and not just outraged, he would have called the police and reported Chrysler for endangering thousands of people's lives.
My argument is roughly that we don't know for sure that people were put in harm's way, and we have good reason, as hackers, not to trust the legal system to reliably figure these sorts of things out (and they generally fail in the direction of being worse for both individual researchers and society as a whole). If we did empirically find the legal system reliable and fair, I'd be more convinced that the threshold for objecting should be "unreasonable".
The action is also over right now, and I see no indication that they intend to do so again. So this is either about punishment-as-retribution, or about dissuading future researchers from doing similar things. I don't think retribution is particularly justifiable, and I think there are better ways to dissuade future researchers, like having a conversation about it without the police involved. So I guess I'm arguing the specific sub-case of 2: that if they don't intend to put someone in harm's way again, law enforcement isn't necessarily right.
(You're right about the highway patrol thing, btw. I think I had it confused in my head with some other recent public/police conflict where the highway patrol was worse than the local police, but it looks like the opposite was true of the Ferguson protests.)
The researchers could have achieved the exact same results (albeit with fewer clicks) by conducting this experiment in a remote parking lot or a private road. Heck, if the writer had contacted the cops, they could have given him an escort to make sure nothing bad happens.
If you ask me, it is this kind of behavior that makes the work of real researchers harder, as the media is quick to paint all security researchers as clueless nerds who will put people at risk.
> The researchers could have achieved the exact same results (albeit with fewer clicks) by conducting this experiment in a remote parking lot or a private road.
According to the article, the researchers already did as early as 2013. Auto manufacturers ignored the reports while continuing to pretend that their vehicles are secure.
It still demonstrated the same root problem: that the computerized systems on cars today have very little in the way of basic safeguards. And there was indeed quite a bit of cracking UConnect and wirelessly spying on Dodges and Chryslers throughout the country before the experiment.
If I were an auto manufacturer, I wouldn't wait until someone finds a wireless exploit (at which point it's too late to do anything about it before people die or are maimed, unless I'm lucky enough for the zero-day to be found by a white-hat or grey-hat). I'd see those earlier reports, say "holy shit, if we have one wireless bug, the whole car could be pwned", and start working on better isolation of critical systems from internet-connected systems immediately.
You don't think there's a difference between exploiting physical access, and remote network exploits? Given physical access to a computer, you can break into it almost trivially; but you don't see people sweating about that.
> Given physical access to a computer, you can break into it almost trivially; but you don't see people sweating about that.
Sure you do. This is why large businesses (smart ones, anyway) require employees' smartphones to be locked with a password or PIN. This is why standards like HIPAA require secure data to be encrypted at rest. This is why laptops being stolen from government agencies leads to things like millions of confidential records disclosed (true story).
And you're still missing my point: that the likes of Toyota and Ford are relying on their wireless systems being secure. That's reckless, since now their wireless systems are the single point of security failure. The lack of even basic safeguards, access levels, etc. should a breach occur is the point of this article, more so than the specific UConnect breach. Having only one layer between "secure" and "pwned" is by no measure a good idea.
The fact of the matter is that this is Startup News, not Hacker News. Hardly anybody on this website is a hacker. Most are people who code HTML and PHP in their day job and go home and do normal shit. These are people who complain about how the industry "pressures" them into coding in their free time.
> You called the cops on two security researchers and a journalist, because you disagreed with their methods...
Calling it a disagreement over methods glosses over the real issue, which is that it was a dangerous exercise and its perpetrators apparently don't have sufficiently good judgement to be left to their own devices.
This was my first thought. To repeat what others have said: why on earth did they do this on open roads and at high speeds? I can only assume it was for the additional 'shock impact' of the story.
Reckless in many, many ways, no matter how interesting the story actually is. In fact it's so reckless that it actually devalues the interesting and important core of the story itself.
> To repeat what others have said: why on earth did they do this on open roads and at high speeds?
Because - according to the article, at least - they'd already demonstrated similar exploits in more controlled environments, and said demonstrations were handwaved and dismissed by the auto manufacturers.
I don't disagree with you; the researchers could have taken better safety measures (most notably, better communication between themselves and the reporter would have eliminated most of the risk by allowing the reporter to cut the experiment early), and had they done so, they would have been more clearly in the right.
However, there's some usefulness to the higher speed, since it indicates that the car can be isolated among highway traffic even at high speed. The researchers were also smart to not slam brakes (which would have turned the minimal danger from unpowered coasting into the maximal danger of sudden stops).
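To put rough numbers on that last point (all figures below are my own assumptions, not from the article), a car coasting in neutral sheds speed far more gently than one under hard braking:

    # Back-of-envelope: time and distance to a full stop from 70 mph under
    # two assumed constant decelerations.
    v0 = 70 * 0.44704  # 70 mph in m/s
    scenarios = {
        "coasting in neutral": 0.4,  # m/s^2, rough drag + rolling resistance
        "hard braking": 8.0,         # m/s^2, rough dry-asphalt maximum
    }
    for label, a in scenarios.items():
        t = v0 / a             # seconds to a full stop
        d = v0 ** 2 / (2 * a)  # meters covered while stopping
        print(f"{label}: ~{t:.0f} s over ~{d:.0f} m")

Under those assumptions, a cut transmission leaves over a minute of gradual slowdown for surrounding traffic to react to; slammed brakes would give a few seconds.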
The security industry has unanimously voiced its concern that remotely controllable cars and kill switches are among the worst ideas possible and will be exploited at the cost of human lives. Nothing has yet come of that warning.
So what should researchers do? Do nothing and keep their hands clean while waiting for the train wreck to happen? Continue a fruitless effort to warn people in papers only other researchers read, knowing from historical evidence that no change will come of it? Ask the government for permission to do a live test, knowing that it would never be granted? Do a demo test on a demo road, knowing that neither government, industry, nor the public would care?
I don't like this either, but it seems to me as a society we only really give researchers one option and that is to do nothing and wait for the bodies to pile up.
There is a big world between "do nothing" and "put third-parties at risk by stalling a car on a three-lane highway with concrete barriers."
Doing the right thing is often boring and takes lots of work. That's why it's called "doing the right thing" and not "doing the splashy thing" or "doing the easy thing."
They already had the attention of the media. Keep on working with the media to get more and more attention. Is it hard? Then do it some more.
Thank you. This was very irresponsible. Ignore the vigilantes saying this doesn't cause harm. My family had a close friend killed on the freeway when she ran into a stalled tractor trailer and it decapitated her.
We've had 2 MAJOR accidents just recently by my house (I-85 near Atlanta) due to foreign objects and/or stalled vehicles.
Anyone who thinks this isn't unsafe is absolutely delusional. Any unexpected failure is hazardous on the interstate, especially failures of the drivetrain, suspension, steering, or braking system. Beyond that, I hope everyone can agree that spraying the windshield with washer fluid and thereby completely obscuring the vision of the driver while he was traveling at 70mph is absolutely a hazard (what if the car in front had stopped for some reason?).
I agree: it was very poor judgment to demonstrate this on a public highway.
But when I read that you actually called the authorities and encouraged others to do the same by posting the number, a certain somewhat Tao-istic scene in The Big Lebowski [1] came to mind.
Performing this test on an open highway is incredibly irresponsible behavior on all parties. This is why they have test tracks (or even sandlots). Cutting the transmission to a vehicle going 70 miles an hour on an open highway is reckless endangerment -- even if the person behind the wheel knows it is going to happen.
You're not gonna make the news unless the media can spin up a headline that scares people
People won't pay attention until they're scared
People won't demand action if they're not paying attention
Nothing will happen if people don't demand action.
If nothing happens, the status quo (vulnerable systems) will remain, until some bad actor (I'm sure several nation-states would love that capability) gets into OnStar and turns every connected vehicle (every GM made in the last 8 years or so) into a brick at an inconvenient time (rush hour on a Monday).
>I've just phoned 'Troop C' of the Highway Patrol at their main number, +1-636-300-2800 and they seemed pretty keen to follow up. The fact that the vehicle was disabled where there was no shoulder, was impeding traffic, and the demo not cleared with them in advance has them concerned. I'm all for testing exploits and security research, but this isn't the right way to do it. And to film it and post it to a high traffic site is nuts.
I'm not sure if you're actually this dense or just trolling. What good can involving the police, after the fact, in a situation where nobody was harmed do?
To clarify: if a story involving events of questionable legality, no matter how small, were to hit the news, the police would be obligated to investigate on some level. Think about the kind of message that "we saw it on the news but we don't think it's worth investigating" would send. By informing them before it hits the general news, one enables the "swat teams and more" knee-jerk response that the police love (if I had cool toys I'd want to play with them too) but without any media scrutiny. For example, law enforcement was plenty eager to screw the guy that "hacked an airplane" (through similar means, I might add) until the story became more widespread and they had to use their discretion to act in a manner that would not reflect poorly on them.
By alerting the State Police in advance, one invites exactly that kind of unscrutinized response.
I don't expect this to hit the news. University of IIRC Michigan (something with an M) was doing similar things at closer range (bluetooth) on a test track back in 09(?) and nobody cared.
And for all the people saying they were "reckless and dangerous, etc, etc," sure, yeah, to a small extent. If they wanted to be reckless they'd have made the car go instead of stop, swapped left and right on the electronic power steering, disabled the brakes on one side or end of the car, etc, etc.
>I'm not sure if you're actually this dense or just trolling. What good can involving the police, after the fact, in a situation where nobody was harmed do?
I don't know, maybe if they get in trouble the next researcher who wants to do a test by disabling a car doing 70mph on a public road will maybe just alert a few people and make sure that it would be impossible for someone innocent to die during their testing.
I was with your comment until you called the GP dense or a troll. Because to follow your logic, to get action, they should've just actually killed a random person. Then you'd be right, we would get some changes, pretty quick.
Who do you think should be the random person to get killed for change?
I agree they may not make news if they did this in a safe manner.
However, the goal of people researching security shouldn't be to make news. And these people, while admittedly working with Chrysler to see it fixed, seem to be forgetting that. Especially since they plan to release their code, despite the fact that Chrysler has to get people to manually update their cars.
"The two researchers say that even if their code makes it easier for malicious hackers to attack unpatched Jeeps, the release is nonetheless warranted because it allows their work to be proven through peer review."
As someone who works in peer-reviewed industries, I find their justification for releasing their code weak; they are clearly prioritizing attention over security at this point.
I saw a presentation at a departmental colloquium 3 years ago which demonstrated similar capabilities. The point is, car companies are not responding well to this threat even though it is well known to them. In such situations it is in the public's best interest that information about the vulnerabilities be widely disseminated in order to keep the general public safe. Those with the know-how can already exploit these flaws and likely have been for years. The car companies need to act to secure their customers' systems.
> In such situations it is in the public's best interest that information about the vulnerabilities be widely disseminated
This assumes many facts not in evidence.
It may, in fact, be the best thing. But security people, as a rule, are strongly biased to love things that increase the social standing of security researchers, and chaos does that.
There are other ways of pressuring the car companies. I'd like to see companies failing to fix disclosed security holes in safety critical applications in a certain period of time face monetary damages, even without need to show harm was caused.
But lobbying is boring and getting on the top of HN is fun.
>> The point is, car companies are not responding well to this threat even though it is well known to them.
I think the problem is related to core competencies (sorry to throw in the MBA speak).
The old-school car companies are good at making cars, and not secure computer systems.
You can likely say the same about the skill sets of the decision-makers running these companies. Many of them just can't wrap their head around security implications, because they don't fully understand them.
Car companies, possibly more than anyone else in the world, are the home to people who understand how mechanical failure affects lives.
The car companies' failure to patch defects ought to have them facing severe fines. In fact, I would support a bounty system of millions of dollars for researchers who can demonstrate 1) finding a flaw, 2) telling the company, and 3) the company not fixing it in X months. All this financed by fines on the car companies.
The above facts don't mean that what these guys did was okay.
>> Car companies, possibly more than anyone else in the world, are the home to people who understand how mechanical failure affects lives.
You're completely right, but the key phrase in your sentence is "mechanical failure".
I've worked on analytics projects in the automotive industry for analyzing defects before they get into the "campaign" (aka recall) stage. They are incredibly good at that type of analysis. Most mechanical parts "make sense", since they're designed for only a few functions.
An Internet connected computer and software, on the other hand, doesn't always make sense to auto execs because they are significantly more complex.
As it relates to the article, I wouldn't be surprised if the car's computer system was perceived more as just a part having a particular set of features by Chrysler's top executives than as a computer system requiring the same types of security controls as, say, an ATM would.
They could still have made the news if they had taken a few extra precautions to reduce the risk of an accident.
However, they do need to make the news. Making the news makes it easier and more likely that politicians will prioritize spending political capital on solving this over the lobbyists from the automotive industry.
If Chrysler and other car manufacturers were taking this sufficiently seriously the releases might not be necessary. They gave Chrysler plenty of warning, Chrysler could have issued a recall (and still can), the consequences are on Chrysler, not on the security researchers.
You do realize a recall doesn't make all the cars come back on their own to get fixed, right? Hell, many consumers don't even realize there was a recall until their product fails for the reason it was recalled.
Chrysler seems to be taking this sufficiently seriously that releasing the code will do more harm than good. Could they take it more seriously? Well, everything can always be taken more seriously, and someone will always claim it should. So I will say that's a matter of opinion.
EDIT: If their plan to 'release their code' is nothing more than a bluff to raise awareness I would consider that a much more appropriate course of action.
>> I agree they may not make news if they did this in a safe manner.
Maybe, maybe not. All they need to get eyeballs is a linkbaity FUD headline with a few extra scary sentences thrown in.
It's not as if the TV news doesn't already do this with their teasers for "Is eating too much XYZ going to kill you? Find out after the commercial break" only for you to find out that the story about XYZ is overblown and poorly vetted.
They didn't have to do this on a public highway. A large parking lot would have been sufficient. Why should other motorists be subject to harm because a couple of hackers and a reporter want to make a story? They could've easily gone on one of the 24hr news channels to scare the masses.
Or heck, even a quiet public road. There are tons of interstate roads that see single-digit numbers of cars an hour. Just drive out into the desert and play around.
PS - Plus, due to the flatness of these roads, with large de facto shoulders you can pull off onto, they're much safer even ignoring traffic levels.
Yes, yes. We wouldn't be having this big thread about the safety of the experiment, and consideration for other motorists, if they hadn't done it this way.
If they had done this in a parking lot at 25 MPH with a couple cops present, the way Mythbusters does things, they would have ONLY had a story about hacking a Jeep to shut it down. And if they played their cards right, they might even be able to start some LEO contracts for car-disabling equipment.
> I'm not sure if you're actually this dense or just trolling. What good can involving the police, after the fact, in a situation where nobody was harmed do?
People who do one reckless thing such as this demo are likely to do others. Calling the police about this incident means that they'll have a record of the people doing this, and if it becomes a pattern, handle it considerably harsher than an isolated incident.
Absolutely the right thing to do. I don't care how technically gifted these people are. They are morons who deserve whatever legal consequences this might bring on.
This isn't about security researchers. There's a HUGE GAP between security research and setting up a situation that could kill someone's daughter, son, mom, or dad. That's incredibly stupid at the least and criminal at worst.
There are levels of this in tech all over. I don't know if it is about social isolation or something else. Things ranging from the kinds of privacy decisions made by people coding social networks to the totalitarian and inhumane approach seen in dealing with various large web players. It's almost like you are dealing with a non-human race (the Borg?) that is almost completely devoid of human feelings, emotion, consideration, respect, a sense of community and simply making decisions that are humane rather than cold and mechanistic.
The other one is morons flying multicopters above people, neighborhoods and around firefighting aircraft. How does a human being go there mentally? I don't know.
I applaud your actions.
What's worst is that it is likely this was not the first time they did this.
You better have the Highway Patrol investigate every single person who doesn't maintain their car properly and takes it on the highway because they're causing far more risk than this demo came close to creating, IMHO.
Was it a stunt? Yes. Was it life threatening? Hardly. The real risk is the early 90s Civic with a torn up clutch and bald tires swerving between lanes.
Uhh, what? It seems you cannot go a week without reading about a pile-up on a freeway. Just last week a big rig lost a wheel, it rolled into the oncoming lane, and drivers swerving and braking to avoid it actually caused a pile-up. Stopping even on the shoulder of a freeway is considered "risky" by most police officers, and many (like triple digits) have been killed while stopped on the shoulder due to vehicles drifting, failing to pay attention, or otherwise being distracted.
I cannot remotely begin to fathom how anyone can think a car going 0-10 MPH on a freeway ISN'T dangerous. And it is absolutely life threatening. If a car behind didn't notice the change in speed, panicked, and either hit you or the concrete barrier(s), that could very easily cost the driver their life. Or leave them with life-long disabilities. Bigger things like trucks and those "road trains" are even bigger liabilities.
Honestly, I'll defend security research strongly in almost all contexts, but when you put people's actual lives in danger you clearly cross a line. There are no shades of gray there; endangering people's lives and health to effectively show off is absolutely immoral and should be illegal (and likely is).
Saying "well, nobody got hurt" completely misses the point. It is the intent that is wrong, not the result. The result could have multiplied the wrongness of the intent, resulting in tens of years of jail time, but luckily for them their only "crime" this time was the intent of their dangerous actions.
And let's be frank here: luck is the only reason nobody got hurt. The only reason these two won't be in jail for many years.
Poorly maintained vehicles that break down while driving surprise the driver. This happens daily on public roads. Should we fine them for failing to maintain their vehicle to your standards?
There are autonomous vehicles being tested on our roads with a failure mode of "coast to a stop". They may not even have a human inside to react to things around them. Do the operators deserve to be jailed?
People modify their cars with various after-market upgrades and take them onto the highway. If the car fails, do they deserve to be imprisoned?
What a slippery slope!
Driving is a risk. The most deadly risk you will take each day. Drive defensively, don't be a statistic.
"O.C.G.A. 40-8-7 (2010)
40-8-7. Driving unsafe or improperly equipped vehicle; punishment for violations of chapter generally; vehicle inspection by law enforcement officer without warrant"
"(A) No person shall drive or move, or cause or knowingly permit to be driven or moved, on any highway any vehicle or combination of vehicles which is in such unsafe condition as to endanger any person."
"24002. (a) It is unlawful to operate any vehicle or combination of
vehicles which is in an unsafe condition, or which is not safely
loaded, and which presents an immediate safety hazard."
This research appears to have happened in Missouri, where it's harder to find the actual laws on the subject. That said, I did find this: https://www.mshp.dps.missouri.gov/MSHPWeb/PatrolDivisions/MV... which tends to imply that there are laws to this effect that I cannot easily locate via internet searches.
Because they had previously tested it and knew what each function was doing. This wasn't the Hackers movie, with them flying around a computer and poking and prodding random things.
They decelerated a car. The brakes weren't even applied. This happens all the time on highways. It is unfortunate that it happened where there was no shoulder on the road, but if an accident did happen then I'm not so sure the researchers or journalist would be at fault.
Here's a scenario:
Let's say a person is driving a car, when their car engine fails. There's no shoulder for them to drive onto, so they are just slowly decelerating when they are rear-ended by a vehicle behind them. Would you say that the car that had a mechanical failure is at fault, or the person behind them who wasn't paying attention is at fault?
> but if an accident did happen then I'm not so sure the researchers or journalist would be at fault.
So the people that purposely tried to cause the accident wouldn't be at fault for the accident if it occurred...?
I find it highly amusing that in your scenario you're using an unpredictable failure as an equal for an intentional act.
A better scenario would be:
I open your car bonnet while you go to the bathroom. I half-cut some cables, knowing that they will fail when you knock them a few more times. You come out, get in your car, and drive down the freeway. A few miles later your car stops suddenly in the fast lane, and a big rig going 70 MPH crashes into you while you sit there stopped, and you die. According to you, I am not at all responsible for your death.
Or better yet still:
You just stop on the freeway for fun, to see what would happen. Someone drives into the back of you at 70 MPH and THEY die. According to you, you aren't at all responsible for that.
There's a huge difference between stopping on the highway and decelerating due to lack of engine power. The driver knew what was happening, turned on his hazard lights, and didn't apply the brakes. Slowing down on the highway, although annoying, shouldn't be an unfamiliar or unsafe scenario (ex: construction, traffic backup, etc.)
This would be a completely different story if the researchers applied full force to the brakes or accelerator since those are unexpected (to other drivers), sudden, and difficult to react to behaviours.
Everything else aside, slowing without good reason is likely to be a traffic infraction (in Missouri, a misdemeanor punishable by 1 year in jail!).
It isn't that convoluted to hold the driver responsible for the vehicle; they knew, prior to driving into an area with a minimum speed, that there was some intent to tamper with it.
They decelerated a car enough for other drivers to honk. It was slowed to a crawl. States have adopted minimum highway speeds for 50 years for a reason.
What if their proof-of-concept didn't work as predicted and did slam the brakes? This is just a reverse-engineered hack that was unleashed on a highway while the radio was blasting too loud to hear each other on the call.
There's a lot of bad drivers out there that make a lot of bad decisions, but saying "most" of them are essentially not in control of their vehicles is frankly ridiculous.
I have a 26 mile daily drive, all high-traffic interstate freeways, and I can usually count at least two occurrences per day where I have to take evasive action to avoid a collision – people illegally on their (hand-held) phones, people blowing across three lanes at 75mph and not even bothering to check their mirrors, drivers leaning over in to the back seat on the freeway, people who drift into the wrong lane in a concurrent two-lane left turn, people who tailgate leaving mere inches between them and the car in front of them despite you having nowhere to go, people braking as if their car weighed half of what it actually weighs, et cetera.
I'm honestly not sure what the solution is, but it's (i) legitimately terrifying every single day, and (ii) hard to believe that any other kind of transportation modality would accept the kind of outcomes that humans driving on the US interstate highway system produces.
A self-driving car is still being driven, by a computer that has control, situational awareness, and the ability to recognize and avoid dangerous situations. This demonstration was specifically about removing those three factors.
So what happens when (not if, but when) said computer encounters a fatal error? What happens when future security researchers like the ones in this article manage to break into said computers and manipulate them?
If we're going to condemn researchers for potential danger, then we might as well extend the same courtesy to car-driving AI and the makers thereof.
I believe people will need to be killed, or get their cars destroyed, before the rest of the population takes enough of a stance against "neglecting" security.
Nowhere near as reckless as missing oncoming traffic by mere feet at speed differentials exceeding 100MPH. Happens billions of times a day without anyone expressing the slightest concern. People are regularly killed and cars destroyed; the rest of the population doesn't care.
You're officially wrong per Federal Certifications (some courtesy Jeep, some the dealer,) so that's the Safe way for researchers to have approached it; mountains and no easement would've brought it down to rules for scratch journalists (please try to recover my GoPro...) State (etc.) laws are 80% hate speech against cyclists. Not even tagged Florida; my car's entertainment system made me climb a tree and launch t-shirts at traffic, blister, etc. If they'd done it as a vetted demo in a lot that could have been a Federal lot...insert stdSecLetter, stdClearance, stdDeclarationOfInterest...meh.
I was wondering why they weren't in constant communication (didn't he say he had to grab his phone and ask them to stop?). Why the f*#@ would you test this at speed?
I agree with you. I hope the author was lying to make his story more interesting (how's that for a bad wish).
I completely agree with you; they seem to have a total disregard for anyone else's safety.
Let's not forget Wired's responsibility for this either. I wonder what editor Scott Dadich and owners Condé Nast have to say. OTOH, we don't know for certain that the tale of what really happened on the public highway didn't grow in the telling.
It's not staged, it's just a parking lot or something. It's mentioned directly in the article, which it seems most people have not even read:
"They demonstrated as much on the same day as my traumatic experience on I-64; After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.
Miller and Valasek’s full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch."
It certainly doesn't seem to be from the main transmission-shutdown incident, at least. I'm much less interested in the photo than in the fact that Condé Nast corporate thought it was a good idea to proudly Tweet this article to the world.
While I agree that what they did was dangerous, doing it that way will garner much more attention for the root cause of this problem: connected cars allow remote control of a car's most basic mechanical features, which they shouldn't. Hopefully, this will result in better safety measures in car systems in the long run.
Edit:
They could've done it in a parking lot and the article would be put in the pile of "some geeks doing some geeky stuff" and forgotten. 70 MPH on a public highway is like a billboard with ten-foot letters saying "PAY ATTENTION" in your face.
Those particular means are unjustified. What actually happened wasn't nearly as extreme as you're indicating, and given the previous behavior of auto manufacturers to security hole demonstrations in their cars, this sort of demonstration was viewed by the researchers as the next logical step.
I don't entirely agree with the methodology, but nobody was hurt, unlike what would likely be the case should even less ethically-grounded "researchers" demonstrate similar capability - probably on a larger and more dangerous scale, mind you.
It was a gradual slowdown. That "non-zero" has enough zeroes after the decimal point for Japan to send the number to Hawaii and have another go at Pearl Harbor.
Worst-case scenario, somebody might've been rear-ended. Maybe a bit of whiplash. That's not great, either, but seeing as more-controlled tests by these researchers were outright ignored by auto manufacturers, your priorities have to be incredibly out of whack to vilify the researchers over the auto manufacturers - who are willfully endangering hundreds of thousands, if not millions, of Americans every day - in this scenario.
Your estimates for both the "non-zero" probability of injury and the worst-case scenario are very far off from mine and from those of the thread-starter, who appears to have some expertise in traffic considerations, and the dangers of semi trucks in particular. I wonder if your opinions about this would be different if you believed this was as dangerous as many of us believe it was, rather than merely having an extremely low probability danger of a harmless fender-bender.
My estimates come from some personal and professional experience (including being a former employee of a state highway patrol, mostly tasked with - among other things - processing traffic collision reports and dealing with phone calls from those involved; not a fun job, that was). Admittedly, probably not as much as a semi truck driver, but contrary to popular belief I'm not entirely inexperienced here :)
The reporter mentions that this was uphill. Semis generally have a hard time going uphill at an appreciable speed (as I know full well being stuck behind them regularly on the mountain pass highways that connect my town to the rest of the world; lines and lines of trucks at less than 45 MPH with their flashers on); more weight leads to a harder time fighting against gravity. The uphill slope should make it easier for the truck to slow down.
If the reporter had made an abrupt stop (i.e. if the researchers slammed his brakes or something), then yeah, I'd be more concerned. That wasn't the case, though. Rather, it was a gradual deceleration according to the article. Cars can actually coast quite a distance, even uphill, when they start at 70MPH; I know this firsthand from my own SUV running out of gas once on a busy interstate, and on an uphill no less. Even with the uphill, there was enough momentum for me to put on my flashers, merge right from the fast lane, and eventually coast into the next offramp a quarter-mile away. No shoulder, either.
Now, this isn't to say that it couldn't've been safer, nor do I disagree that more safety precautions should've been implemented. For one, the researchers could've - at the very least - told the reporter "hey, if our attack comes at a really bad time and you feel like you're about to die, turn the car off and on again and you'll regain control". However, even with the described scenario as-is, risk of life is quite slim. We're not talking about a driver slamming his brakes and going from 70 to 0 in seconds; we're talking about the equivalent of an engine stall, and thus a rather gradual slowdown - graudal enough for even semis, let alone smaller vehicles, to react to.
> I wonder if your opinions about this would be different
They probably would, yes. Slightly, though; ultimately, one injurious pileup is a drop in the bucket compared to the hundreds of thousands that might actually be prevented by demonstrating precisely why proper security measures on Internet-connected heavy machinery are worth taking seriously. Not that I think the possibility of the former should be dismissed (indeed, I agree that the researchers could've done things more safely while still getting the attention of auto makers), but said possibility needs to be weighed against the possibility of the latter, with the recognition that any demonstration - ideally a totally safe one, but even one with some degree of risk - is necessary to push auto manufacturers toward taking security seriously.
And I wouldn't call this a fight. Just an ethical debate. One that'll probably be a bit heated, of course, given the circumstances, but it's one that needs to be had.
Feynman had a nice story where he figured out a way to crack many of the safes in Los Alamos, then dutifully reported his method to some bigshot general. The general said "hmm interesting, thank you very much", and banned Feynman from entering rooms with safes or something. The safes stayed as unsafe as ever.
You remind me of that general. You should be hanging out on Catch The Hacker News, not Hacker News.
There can be times when it is okay to test on uninformed humans.
For example, I have relatives who do fire safety. How people do (or don't!) evacuate from buildings when fire alarms go off is a big area of research.
The ideal way to test this is to set off the fire alarm in a building where people do not know it is happening, along with some smoke and pyrotechnics.
HOWEVER, there are ethical concerns, and a review board would ask questions like:
1. Has anyone else done this study before? If not, why not? How sure are you that no one has done it before?
2. What does the previous research with similar protocols say? What key question are we trying to answer?
3. What is the harm that will be present to people? Are we doing everything we can do to reduce that harm?
4. What more could we do to reduce harm but that might impact the reliability of the research?
5. Quantify how much of a benefit this research would be so we can compare to the risk you are presenting.
6. Demonstrate that you have done all the preliminary work that is necessary to achieve good results, so that we can make sure that the research is used. It would be foolish to put humans at risk and then be unable to use the research because we forgot something we could have taken care of upfront.
These researchers would bomb most of these questions.
The reason for an INDEPENDENT review board is that researchers tend to follow this flow chart:
Have idea. ----> Wait, should I do this? ----> Yes, of course!
There's a pretty significant difference in scale between two nerds putting maybe 2 or 3 vehicles in danger of a dented bumper v. the world's largest military conducting live-fire bioweapon testing.
So either the journalist/researchers did something highly dangerous to the reporter and others on the highway as a stunt, or the journalist has no qualms about making up details for shock value? Sounds ethical.
There was a "documentary" of these guys when they were testing via a hardwire to the car's computers. They were also in the car with the driver as well as in a parking lot and on some non-busy country road.
I wonder if the reporter just added in those details about the highway to make it seem like more of a real threat or if they actually did test on a busy public roadway.
I agree that they definitely took it too far and weren't being very safe, but calling the cops seems to be taking it a little too far, also. What laws were broken?
Edit: Actually I've thought it about it, and they could probably be charged with reckless endangerment.
Jesus, just google the similar research from 2011, which was done in a "safe environment" without naming manufacturers, etc., and then in 2015, after 4 years, we see that manufacturers said "meh, thank you but no, we ain't gonna do shit about it".
How about that - "Fiat Chrysler now says that 10 vehicles from its 2013, 2014 and 2015 model years are vulnerable to hacking, including five 2013-2014 Ram truck models, the 2014 Jeep Cherokee and Grand Cherokee, the 2014 Dodge Durango and 2014 Dodge Viper, and some 2015 Chrysler 200s."
and that - "Miller and his associate, Chris Valasek, director of vehicle security research at the consultancy IOActive, estimates that hundreds of thousands of Fiat Chrysler vehicles on the road today could be vulnerable. That’s unsettling." just read people! 2013 models. And now, this a hole calling police because this guys opened your eyes. Wouldn't it be better if they made "safe" test again, manufacturers ignore it AGAIN & then some sick bastard simply crashed thousand of those?
And yes, the main thing I like is - “Customers can either download and install this particular update themselves or, if preferred, their dealer can complete this one-time update at no cost to customers.” DOWNLOAD & INSTALL THEMSELVES? What? But yeah, right, blame the researchers, of course.
"In case any of you think this was cool or even remotely (no pun intended) ethical, I'd like to know if you have a problem with letting these two test this on a loved one's car. How about they remotely poke around your husband or wife's car and explore, as long as they promise not to intentionally trigger anything?"
I would certainly let these guys check my car and my wife's car, just to make sure that if it can be hacked, I can get rid of that crap and sue the a-holes who let me drive a car that can be controlled remotely. Because I would rather trust an ex-NSA researcher and the current director of vehicle security research at the consultancy IOActive than accept even a 0.00001% chance that some unknown hacker crew can end my life while sipping coffee in a Starbucks.
I would like to give some other perspective. FCA (parent co. of Jeep) have been slow about a number of safety recalls and are under increased scrutiny by NHTSA:
Here is a choice quote about the culture relating to safety at Fiat - Sergio Marchionne is CEO: >> Marchionne said in January that the auto industry may have “overreacted” to some safety issues, especially the massive air bag recalls, which may have been “overkill,” he said. << This is about the Takata recall in the news where the detonators can produce deadly shrapnel.
So yes, the demonstration described in this article was somewhat reckless, but the facts that FCA has not notified owners beyond an online posting about a firmware update (who checks that?), has tacitly condemned the security researchers' decision to publish some details in their communications with Wired, and all the while has stonewalled recalls - for example in Jeep vehicles that catch fire in rear-end collisions, killing occupants - upset me much more.
In my opinion, when a company is notified of a safety or security issue, they should do all that can be done as quickly as possible. Here, instead, FCA has once again done the minimum, plus has the gall to respond in writing, "We appreciate the contributions of cybersecurity advocates to augment the industry’s understanding of potential vulnerabilities. However, we caution advocates that in the pursuit of improved public safety they not, in fact, compromise public safety." I guess I embrace the hacker spirit more than anything else; that's what it boils down to for me.
So I would have written to the NHTSA trying to make this yet another recall if I thought it would have done any good, but in that culture of 21%-compliance-is-acceptable, I don't think it would do a lick of good, so I won't bother.
Also, I accidentally clicked on "flag" above when I wanted to click on "parent." I am sorry, that was not my intention, I just wanted to refer back to the Wired article as I was responding, and they are small and right next to each other. Ah, I notice when I refresh there is an unflag option, I have just taken that action, again sorry.
>Also, I accidentally clicked on "flag" above when I wanted to click on "parent." I am sorry, that was not my intention, I just wanted to refer back to the Wired article as I was responding, and they are small and right next to each other.
There should be an "unflag" where "flag" used to be.
Agreed, and now the headline reads more like "Hackers endanger people on public highways" instead of the more interesting (in its consequences) "Jeep cars can be taken over almost completely over an Internet connection while they are running". I'm sure this generates tons of traffic for Wired but this does not bring the necessary focus on the security issue.
A real-world scenario was used to gain more publicity and to get the attention of mainstream media sources. People are attracted to catchy titles and can relate the incident to themselves because it happened on a highway. Nobody would have batted an eye if the test had been done in a parking lot.
> If I ever learned this had been tested on a vehicle I was in, I'd make sure this cost the researchers dearly.
That's not how civil court works, as I'm sure you're referring to filing a suit against them, for... some nebulous thing? You have to prove damages to be awarded anything in a civil court.
Theoretically let's say that they tested some remote tracking on your vehicle without your consent. What then? If you can prove there was some damage to your vehicle, great, you'll be reimbursed for it. Otherwise?
Content aside, the self-satisfaction and smug attitude of this comment is disgusting.
I can't believe that you snitched on these guys and you're proud to share it with us! Wow!
The guy consented to their experiment and he voluntarily engaged with them. It is not like they set him up for this.
Maybe you could argue that they could have jeopardized the lives of people on the highway with their reckless behavior, especially the engine-shutdown stunt, and I believe they didn't exercise wise judgement in doing so, but didn't they instruct the driver to switch the car off and back on to regain control of his vehicle and move ahead?
You also claimed that they're boasting of their act by publishing this video, when it was Wired, not them, that produced the whole report and experiment. The reporter himself, the subject of this experiment, didn't file any report with the authorities, yet you come along and act more royal than the king!
What a mess!
What was that snitching for? This is completely uncalled for.
This is a knee-jerk reaction from you and a testament to your true character.
You should be ashamed of yourself, snitching on your colleagues like this, and your phony outrage at this act is not fooling anyone.
It's not that your points are irrelevant, it's that they needlessly draw attention from the issue at hand: that the car manufacturers are being criminally negligent.
As opposed to the alternative of someone specifically leveraging this with the intent to do real harm? They could have gone about this in a safer manner, but the person you should be upset with is the manufacturer, not the person pointing out a large gaping problem.
This is the kind of completely uninteresting side discussion that people on internet forums love to get into. "Somebody did something dangerous on a road" is not a relevant topic on HN.
Blame the messenger, eh? How about calling the cops on the company that actually put that crap tech in your car instead?
To the OP: You don't have to scare me twice. I'm sporting pre 9/11 PC-hardware and now I'll be driving a pre 9/11 car.
If you tell me that someone can remote control my underwear then maybe I'll have to draw the line there because I'm not buying pre 9/11 undies dammit....not yet anyway.
> I've just phoned 'Troop C' of the Highway Patrol at their main number, +1-636-300-2800 and they seemed pretty keen to follow up. The fact that the vehicle was disabled where there was no shoulder, was impeding traffic, and the demo not cleared with them in advance has them concerned. I'm all for testing exploits and security research, but this isn't the right way to do it. And to film it and post it to a high traffic site is nuts.
Can you clarify exactly where the line is between what you've done here today, and an outright swatting?
I'm not trying to equate the two, but it would seem they both exist somewhere on the same continuum of personal information and police involvement.
How much do you think your decision to make this call was influenced by your perception of what law enforcement does in Europe vs. what law enforcement does here in the United States?
Outright swatting: "911, my name is <researcher name>, I live at <researcher home address>, and I'm currently holding my girlfriend hostage with a shotgun and plan on killing us both in 30 minutes."
What tombrossman did: "911, I saw a video of researchers shutting down a car on a freeway in the middle of the day with little concern to public safety, can you guys investigate and make sure that nobody's life was in danger for this experiment?"
I don't think there's even a tangential comparison between the two.
Are the police not capable of reading Wired on their own? The article is quite public. I can't imagine this could be published without the relevant law enforcement agencies being aware of it on their own.
In general, I would feel uncomfortable alerting a local law enforcement agency in _another country_ about something I saw on the internet, both because the premise is silly, and because not all cops are loyal public servants dedicated to protecting people. Many just like the power trip they get from having a badge and a gun to wave at us plebs.
- Man drives car on public highway @ speeds of up to 70mph
- Hackers turn on windshield wipers and fluid to blur view
- Hackers Blare music and obscure any comms link to driver
- Hackers disable vehicle on Highway at location with no shoulder
And there are people who are not only OK with this type of experiment but think there should be more of it.
I understand that these exploits need to get attention... but I really can't stop thinking about my wife and kids being behind this guy while he shows how dangerous this can be.
No, I think notifying the police, who then notify the IRB is how normal people would deal with this. I don't expect the normal person knows about the IRB (I'm not surprised it exists, but I was unaware of it).
In a similar vein, if you notified your local police about a kidnapping they would notify the FBI, because kidnapping is the FBI's jurisdiction.
The two options here are not "test on highway with other drivers" and "let flaw exist with no testing and no exposure". There are many ways to responsibly test this while not endangering others on a public road. For example, using a private road, a large empty parking lot, an abandoned airforce base, the salt flats, etc.
The Mythbusters test stuff like this all the time. What do they do? Use an abandoned airforce base or the Utah salt flats. Rule number one, don't endanger the public.
If you increase the danger of a situation to increase its exposure, you can't be surprised when that causes repercussions. I doubt they will go to jail, but I do think alerting the authorities was the right call.
In fact, they may have been counting on that. If they really want to increase the exposure of a story, start a public debate. The easiest way to do that? Get some public outrage going. They called this all out, they'll need to deal with the consequences.
I don't disagree with you on that. The researchers (and WiReD) should certainly be aware of the risks here, and be prepared to accept the repercussions thereof.
On the other hand, while two wrongs don't make a right, I'm glad that the researchers made that choice, so long as said choice results in manufacturers actually taking car security seriously for once.
I didn't watch the full video, but at 6:45 it states
"Unlike most previously recorded car hacks, this demonstration was performed wirelessly over cellular networks. The vehicle shown is modified with third-party hardware for demo purposes. In stock form it is not vulnerable to these attacks"
which is quite different from the level of vulnerability showcased in the Wired article.
> What about all those wives and kids that would have been endangered if the flaw had continued to go unfixed and exploited in a more malicious manner?
Because he said anything like that, right? What a gratuitous use of a strawman.
It's not appealing to emotions, it's pretty rational to think this experiment could have easily caused an accident and hurt people. We all rationally know that driving is one of the most dangerous forms of travel. Seriously, a car is a dangerous, fast, multi-ton piece of metal, it's not a toy to experiment on when other people's safety is at stake. Respect the vehicle and the damage it could cause.
The exploits can and will be published and reported on quite easily without these reckless theatrics.
This comment is shockingly naive. Can you really not think of any other way to demonstrate the vulnerability they found without taking the whole thing onto open public roadways? Safety researchers who work for auto manufacturers don't go out on the highway to do crash test exams and we trust their outcomes.
Obviously they don't. But Chrysler was informed and created a patch. What they didn't do is ensure that it got rolled out to everyone vulnerable.
By saying this you are basically saying there was no reasonable alternative method of exposing this.
I am unwilling to say that. This argument that somehow the ends justify the means when there was a clearly more safe means has to stop. It's just ignorant.
Now imagine the exploit being used by a blackhat. The hackers aren't the problem here. The fact that somebody can even control cars over the Internet at all is.
Because a dangerous threat exists does not give a researcher license to endanger the public to prove it. This is especially the case when a safer alternative to demonstrate this exploit easily exists.
Robbers could enter your home and hold your family at gunpoint AT ANY TIME. That does not give me the right to prove to you how easy it is by entering your home and scaring the crap out of your family.
> This is especially the case when a safer alternative to demonstrate this exploit easily exists.
If you read the article, you'd know that said safer alternative was already attempted and presented to auto manufacturers, only to be met with dismissal.
"Second, Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference."
"WIRED has learned that senators Ed Markey and Richard Blumenthal plan to introduce an automotive security bill today to set new digital security standards for cars and trucks, first sparked when Markey took note of Miller and Valasek’s work in 2013."
> "Second, Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference."
I did admittedly miss the "nine months" portion of that, but that's still only one company out of many.
> "WIRED has learned that senators Ed Markey and Richard Blumenthal plan to introduce an automotive security bill today to set new digital security standards for cars and trucks, first sparked when Markey took note of Miller and Valasek’s work in 2013."
If you read further, you'll see the paragraphs on Markey's letters to auto makers regarding the 2013 findings; Markey's own findings only reinforce my point further.
Also, note that my point - that auto makers mostly ignored Miller and Valasek, according to the article - would not include senators (unless said senators build cars, of course).
> I did admittedly miss the "nine months" portion of that, but that's still only one company out of many.
Yes, it's the company that owns Jeep. The company with the demonstrated security flaw. How different automakers responded to different security issues isn't related to this article or discussion.
> Also, note that my point - that auto makers mostly ignored Miller and Valasek, according to the article - would not include senators (unless said senators build cars, of course).
Senators may not build cars, but they can (and are trying to) force auto makers to take security seriously.
The argument in this comment chain has been whether this problem could get the attention it needed without such a dangerous publicity stunt. The fact that automaker and lawmakers were convinced to take action by less dangerous demonstrations shows that this stunt was not necessary.
> How different automakers responded to different security issues isn't related to this article or discussion.
It is related to the article when the article discusses those responses.
> The fact that automaker and lawmakers were convinced to take action by less dangerous demonstrations shows that this stunt was not necessary.
One automaker (even this is dubious; Chrysler seriously expects people to believe that the only way to patch a bug that allows total control over a car's transmission and brakes - let alone the rest of the car - is via a USB stick, and that over-the-air patching isn't an option? Please.) and two senators. There are dozens more automakers and 98 more senators to convince. Hopefully the demo helps make that a better situation.
Meanwhile, a bunch of Dodges and Chryslers are driving around America totally susceptible to UConnect bugs, and a very large number of new cars on the road don't even have the most basic safety precautions (like, you know, not connecting the brakes and transmission to the Internet willy-nilly).
The convincing so far has been negligible. Hopefully that'll change soon, before someone with less-benevolent motives follows in Miller's and Valasek's footsteps.
And yet somehow the sins of the auto manufacturers in no way excuse the reckless behavior demonstrated in this video. It is possible that more than one person/entity in this story is in the wrong. It's clear you've already made up your mind and have posted more than a dozen comments here defending the researchers, so I'm not sure what more can be said. The ends simply do not justify the means.
I agree that both sides are in the wrong, actually. I've even stated my disagreement with the methodology in several of those "dozen" comments.
Most of those comments, however, are only clearing up a specific misconception: the belief that the researchers jumped straight to a "live" test before trying tests in closed conditions. My idea of "right" v. "wrong" does not factor into doing my part to ensure that discussions on this matter are based on accurate information.
My "defense" of the researchers is more just identifying a lesser evil. Between the evil of a couple of nerds hacking one car and the evil of auto manufacturers willingly putting hudreds of thousands - if not millions - of innocent people in mortal danger, I'd sooner take the former (assuming that it actually makes a difference re: security priorities of auto manufacturers; in reality, even this particular demonstration is probably insufficient, though perhaps I'm just jaded).
First off, a "dangerous thing you can do" and "exploit" are not synonyms. So examples like anthrax attacks or home invasion are stupid and massively miss the point.
Secondly, nobody would give a fuck about this exploit if it was performed in controlled environment. The researchers knew it because they did this kind of stuff before. Guess what, the cars did not become any safer!
This much should be obvious to anyone with a hacking mindset. The comments in this thread read more like "Moms Against Drunk Driving Bulletin Board" than "Hacker News".
Actually home invasion is an "exploit" of the home security (i.e. locked doors / windows).
Just because they do it from behind a screen doesn't make it a less culpable crime. Computers don't insulate you from ethics...
We put locks on our doors to prevent people from entering. They have always been exploitable but we use threat of laws to prevent it. Now we put locks in our software to prevent people from entering it (encryption). Somehow this generation believes that these locks are exempt from decency and law. It's sad that people think exposing vulnerabilities at any cost is righteous. There's plenty of people researching security in responsible ways. These two are not in that camp.
Have fun... it's no different than kicking your neighbor's door down and telling him to pay you for exposing his security flaw. Still makes you a jerk.
Using your logic, the government doing controlled biological tests on an unaware population (e.g. to test spread patterns and inform their response to an actual event) is OK, because terrorists/foreign governments could do much worse with (e.g.) weaponized anthrax. I somehow doubt that you would be OK with such actions by the government (though I could be wrong).
It's not either/or. You're presenting a false dilemma. You can demonstrate the problem without doing it where you put real lives in danger. The researchers acted recklessly.
> They responded with a patch, but the researchers didn't like their response.
It was my understanding that the patch was released in response to the live highway test, not the prior tests in controlled environments.
> They could have let the "test dummy" in on what was going to happen, so they could give feedback as to when it was safe to do so.
The article makes it sound like they did.
> They could have ensured constant two-way communication.
Indeed they could've. I agree with you about the recklessness of this particular element of the test.
> They could have done it when nobody was on the road.
Perhaps, and I agree that maybe they should've coordinated with local authorities (if they didn't already). However, between "do the test with vehicles on the road" and "don't do the test at all", I'd certainly pick the former.
Not to mention that the urgency involved with other vehicles on the road factors into the effectiveness of the demonstration.
Regarding the patch timeline, the article makes it clear they had been working on the patch for months before this went public.
> Miller and Valasek have been sharing their research with Chrysler for nearly nine months, enabling the company to quietly release a patch ahead of the Black Hat conference.
With respect to letting the driver in on it, it's pretty clear they withheld most information:
> Miller and Valasek refused to tell me ahead of time what kinds of attacks they planned to launch
And with respect to this:
> However, between "do the test with vehicles on the road" and "don't do the test at all", I'd certainly pick the former.
Oh look, another false dilemma. Between those two, I'd pick neither, and do the test responsibly.
> With respect to letting the driver in on it, it's pretty clear they withheld most information:
The reporter knew there were going to be attacks in the first place. There was also plenty of reason to believe said attacks could severely impair safety.
> Oh look, another false dilemma.
It's a trilemma; the concept of "do the test 'responsibly'" was already implied, so I merely provided the other outcomes. There's "perfect execution of demonstration" and "no demonstration"; between that is a spectrum of perfection, on which this demonstration happens to lie somewhere near the lower-middle.
I don't disagree that the demo could've been done with more safety precautions, but the desire to do a "live" demonstration like this seems pretty reasonable, and even a demonstration lower on the perfection spectrum is preferable to the bottom end of "nothing at all".
I take it you also can't stop thinking about your wife and kids being behind someone whose engine stalls, or who slams one's brakes to avoid hitting a deer, or who runs out of gas on an interstate (which has happened to me, thanks to my car's faulty fuel gauge). You must have a lot on your mind.
I don't agree with the methodology, either, but based on the information in the article, it sounded like the researchers didn't have much of a choice, seeing as how prior demonstrations were casually dismissed by the automakers. The auto manufacturers and the public both need a wakeup call, and this is a much more sane wakeup call than the even worse alternative of outright-malicious crackers breaking into vehicles en masse and reducing highway traffic to shrapnel in ridiculously-large pileups.
"I didn't have a choice but to fire a gun in the middle of a crowd of people. It was just a wake up call!. You always have a choice. One of those choices is to do the demo, but with the proper safety authorities helping. One of those choices is to perform the demo in a place where it is safe to pull your car over. One of those choices is to do the demo somewhere where the speeds are a little lower.
All of those things you listed are actual emergency situations that are hard to control. This is one thing where they were in full control of, and still decided to do it anyway. They had a choice, and they chose to endanger the public.
There's a big difference between firing a gun and a car gradually slowing down.
Yes, the researchers could have made better choices. They could have made worse choices, too. The "danger" here is significantly exaggerated given the descriptions of the scenario in the article (gradual slowdowns, contrary to popular belief, aren't that hard to react to in a timely manner), and it certainly does not compare to firing a gun.
So, it's becoming abundantly clear that vehicle companies (autos, jets...) have approximately zero knowledge how to hire software engineers. Presumably they're somewhat more successful hiring mechanical engineers because that's always been their "thing".
It's all well and good for us to chuckle at the terrifying software/systems decisions being made by these teams, but how do we address the root of the problem? It's very clear that entire meta-categories of horrific errors are being made at a very fundamental level. Is this a problem of outsourcing? Of confusing "coders" with engineers?
And how do we solve it? Shame the software team such that they can never get hired in a serious role again anywhere? Professionalize the job into a strictly licensed regime like other branches of engineering?
Whenever I read these types of articles, my main thought has always been, "So who the hell wrote the code?" It'd be interesting to know their story.
The fact that a dashboard system that controls your radio or AC has access to cut your transmission is also a hardware configuration issue. Accessories should be physically secured from ignition and drive train. The internet connected features of the car, in turn, should be severed from both of these. It should not be physically possible to turn on the wipers from the embedded processor that receives packets on the IP address of the car.
Read up on CAN-BUS. The entire industry is moving to shared-bus protocols to reduce the labyrinthine copper network that prevailed in the past. If you can put your transmission's diagnostic information on the radio's nice big LCD, why wouldn't you?
Some would say "this, this is why", but those people are not responsible for selling and maintaining millions of vehicles.
> If you can put your transmission's diagnostic information on the radio's nice big LCD, why wouldn't you?
Surely there's a way to make this information read-only. I can see information about my engine on my dashboard via the speedometer and tachometer; it would be ludicrous if I could kill my engine by grabbing the little needles and cranking them down to zero.
> Surely there's a way to make this information read-only.
There absolutely is a way. Just off the top of my head you could relay the information from the high-sec CAN bus to a low-sec one with a micro-controller. So the low-sec bus can only receive messages from the high-sec one.
Not enabling firmware loading over CAN on the relay is a must as well, for obvious reasons, but the key is that the code on the relay microcontroller can be kept very simple (easier to audit/secure).
Isn't that what the hacked car is already doing? I don't know about the Jeep specifically, but most cars have several CAN buses and some microcontroller passing messages from the high-speed control network to the low-speed infotainment network.
The problem is, most automotive engineers are clueless about security and most "hackers" are clueless about automotive hardware, software, and protocols. There is no dialog.
I wish articles like these posted at least some specifics. A lot of these hacks in the past were completely impractical. Yes, yes, they had shown some interesting possibilities, but it was disingenuous to present them as real-life attacks (which many media outlets did).
For sure. The ECEs are very much complicit too. Perhaps this is an ECE<->software-team communication or power problem? The ECEs invent the systems architecture and some poor chap has to attempt to secure the thing? Or maybe the software guys have no avenue to push back against bad systems-level design.
And what are we going to do for self-driving cars? These are almost certainly going to rely heavily on internet access to perform basic driving functions. Figuring out how to make complex systems like these be secure in a trustworthy way is going to be a huge challenge as more and more critical devices are connected to the Internet.
An air gap is hardly a solution: for traffic information, the route planner HAS to talk to the computer responsible for getting from A to B. In the same way that firewalls are no longer particularly relevant, air gaps appear to be flawed now too; the only way to solve any security issue is better code quality.
"And what are we going to do for self-driving cars? These are almost certainly going to rely heavily on internet access to perform basic driving functions."
Actually, no. Google's Urmson has spoken at "connected vehicle" conferences and indicated they don't need car to car communication.
Agreed. The key word here is "basic". Maps have been and can be downloaded, and algorithms to get to point B from A have been embedded for a decade at least. It's only the newer devices (phones) that have managed to push that all server-side. Real-time data like traffic conditions or weather hazards can be added as supplementary data through secure channels, but the entire system does not require an internet connection to work.
Thinking that "basic driving functions" will need to rely _heavily_ on internet access is thinking wrongly.
I immediately thought of an unlocked firmware update (or boot over CAN) in some entertainment computer component that can then spoof other subsystems. The security model in modern cars is broken, esp with regard to control and data.
I don't think this necessarily has anything to do with engineering competence.
From a business perspective, security isn't a marketable feature until it becomes a problem—you don't install safety belts, or airbags, or protection against malware until after people start suffering from their absence in a vehicle.
Why? Because while you're busy building a well-secured system, your competitors are busy implementing new features that give them an actual advantage in the marketplace. As unfortunate as it might be, consumers tend to understand things like “remotely start your car with your phone” better than “your ability to brake won't be taken away from you while you're barrelling down the highway at 70 mph.”
It's sad and more than a little scary, but it's also nothing really new. Computer security, at least in the consumer sector, wasn't really a feature until viruses started showing up in the Eighties, and Internet security wasn't really a feature until the average Windows user's PC was getting taken over remotely the moment it was connected to the Net. Even Apple has only been able to tout security and privacy as a feature in its products by juxtaposing it to Google's business model—had the latter not existed and its data grab become part of public discourse, I doubt that Cupertino would have been able to make so much noise about it.
So, it's perfectly possible that every engineer and manager who worked on these systems is really quite competent and perfectly aware of the potential for security flaws (indeed, I doubt that they would have been able to make something so complex work otherwise), and still the sum of all the decisions made and market pressures applied caused the resulting product to be so vulnerable despite everyone's best intentions. It's not because people don't care or don't know, but rather because there are only so many resources available, and the market has pushed them all in a specific direction that happens to be away from security.
But this is also why we need this kind of research. Now that these problems are out in the open, and politicians are starting to take notice, security will become a feature that the public will care about, and, hopefully, car manufacturers will start adopting (or be forced to adopt) better standards.
Even if you disagree, preventing corporate liability is a component of competence in the law's opinion. That is, if the company is found liable, that's saying the employees responsible did something wrong, even if it's not holding them individually accountable.
IANAL, but from previous fallout over security issues, I'd assume legal liability stops well short of requiring actual competency.
Parent is 100% correct. It's market-adaptation. Same reason Samsung ships known-vulnerable extensions to Android: features >> security.
> So, it's perfectly possible that every engineer and manager who worked on these systems is really quite competent and perfectly aware of the potential for security flaws [...], and still the sum of all the decisions made and market pressures applied caused the resulting product to be so vulnerable despite everyone's best intentions.
I think this is key. Although I'd lump it more on management given that they allocate technical resources. When you have a lack of technical knowledge in management, you lose the ability to make technically informed decisions.
Sometimes the nuances of a situation can't be summed up in a PowerPoint slide. Especially when it's a slide that someone created to summarize a slide deck from an engineer that they saw.
You think at least some of the OPM vulnerabilities were internally unknown? Even with incompetence, you had to have actual engineers who looked at settings and/or lack of feedback and went "Hunh..."
In addition to your third paragraph, automakers are disincentivized from adding security features because they CAN'T advertise them. If Toyota started advertising the all-new Prius with the feature "your ability to brake won't be taken away from you while you're barrelling down the highway at 70 mph", consumers wouldn't just not care, they would question why the fuck that wasn't there in the first place. Consumers expect this sort of stuff to have been there from the start, so at this point it's nothing but a cost sink to add it.
This will change as awareness of these attacks reaches the general public.
According to the article, US politicians are looking at introducing legislation to enforce cybersecurity measures. At that point, it will just be another safety rating that manufacturers can and do use to promote their vehicles.
For the same reason I am skeptical of the "Internet of Things". Do we really want every device in our home to be exploitable from someone across the globe? Continually applying firmware updates to appliances to close security holes? Something fundamental has to change about how we handle device security before I want to live in that world.
What? Internet of Things devices are equally exploitable whether they're connected wirelessly or over ethernet (or ethernet over powerlines). The key factor is whether or not they're connected to the Internet.
People, and businesses, respond to incentives. The company probably did the economically rational thing here - the money they make from their remote-access features is more than the money they will lose for the insecurity.
Companies in industries that need secure software find ways to make it; it's not a hard problem if you're willing to throw enough money at it. But as long as customers don't care whether their products or data are secure, we'll get the security we pay for.
People do care if it makes the news. But the current official ways of doing this testing don't make the news, and testing that is newsworthy gets the cops called on you. How convenient that testing a security flaw is viewed as more negligent than allowing the flaw in the first place as a cost-saving measure.
Unknowingly allowing a risk is a very generous way to describe policies that cut costs by increasing the chance of these risks. The ones who endangered a small number of people to try to increase safety overall are currently looking at far more legal harm than those who endangered orders of magnitude more people for the sake of making more money. Don't attribute to stupidity what can be explained by amoral greed.
I think this also comes down to the curriculum and training that computer engineers receive during undergrad. Since the core courses touch embedded, hardware design, and software, we're left with only the basics. Coming from the Detroit/Big 3 area, a lot of my colleagues went off to Toyota (in Ann Arbor), GM, Chrysler, Ford, or one of their suppliers/contractors (Denso et al.), and from what I've been told, they do put a lot of effort into software; however, its focus is more on stability and function as opposed to security and portability.
I imagine a not terribly experienced new team being told to do connected stuff in a car, not really understanding security and having ridiculous demands thrown at them as the manufacturer drools about getting subscription revenue from every car. But would be nice to have an inside story.
Which entire meta-categories of horrific errors do you ascribe to software development for jets? Commercial avionics, while not error-free, is surely one of the most stringent sectors of software development around today.
Oh, I know this one! Is it c) "The developers were competent at designing car control software, but insufficiently concerned about the possibilities for remote control, and therefore didn't test appropriately"?
All the researchers and the journalist had to do was to talk to the Highway Patrol and say, "We'd like to test this on a highway; what do we need to do to make that happen?"
That's it. Maybe the State Patrol would say, "Sorry, there's nothing you can do to test this here legally", or maybe they would have said, "Pay for overtime for 10 troopers and you can do it."
The point is, we don't know. We can speculate, but we don't know.
The 'researchers' and journalist elected instead to conduct this experiment on a state highway, in "real world" conditions, without any safety mechanisms in place. Not only is this unethical and dangerous, it is (and should be) illegal.
No one should stop these experiments from taking place; and the CFAA should be amended to allow security researchers to research issues; but the problem I have is the inherent danger in this experiment.
What would we be saying if the journalist had been killed, or a mother and her two kids because of this? Do you think public sentiment would support security researchers if this had turned out differently?
If anyone had gotten hurt, you'd be looking at legislation that strengthens penalties for security researchers; not at legislation that takes security research more seriously.
This was an extremely childish move that had the potential to hurt our industry more than help it. It is incumbent upon us to take safety seriously in conducting these experiments.
We can't count on level heads from outside the tech industry if we aren't willing to show that we care about people's lives and their safety when we're conducting these experiments.
> This was an extremely childish move that had the potential to hurt our industry more than help it. It is incumbent upon us to take safety seriously in conducting these experiments.
> We can't count on level heads from outside the tech industry if we aren't willing to show that we care about people's lives and their safety when we're conducting these experiments.
Agreed. I would have no problems with them doing this on a test track, closed highway, or even a quiet road at low speeds. Even if we take the best possible negative scenario—say, the car is disabled going at 25 miles an hour, the driver can't handle manual steering, and then runs into someone's fence—the insurance companies are going to throw the book at the driver when they learn that they purposefully disabled their car and engaged in dangerous behavior on a public street.
I'm all for pushing the boundaries of security research (there are people in the labs all around me right now doing crazy stuff), but at least we in the academic world get our crazy stuff signed off on by a panel of competent experts.
All of this is possible only because Chrysler, like practically all carmakers, is doing its best to turn the modern automobile into a smartphone.
I think this is the biggest problem. Stop making "smart" cars with all these unnecessary features. Even if you can't resist adding entertainment or navigation, don't ever physically connect those systems to the critical systems like engine and transmission computers except through a one-way (to display information) link, like it's done on airplanes.
I'm happy to have a much older vehicle with none of these "enhancements". It has a physical throttle, hydraulic brakes, and steering linkage for which remote hijacking is physically impossible. I can add navigation and entertainment with a smartphone mounted on the dash. It may not be as fuel-efficient or safe(?) as the cars today, but maybe the tradeoff is worth it. That also suggests there could be a market for new "dumb" cars which have all the modern improvements to engines and safety, but none of these "smart" exploitable features.
(I'm not so paranoid as to get a mechanical EMP-proof diesel though...)
It amazes me that while more and more jurisdictions are banning cell phone use while driving, vehicle makers are increasingly resorting to touch screens for things like stereo and climate control. When using a smartphone while driving is illegal, how are in-vehicle touch screen controls meant to be operated by the driver not banned?
As much as I love Tesla and what they are trying to do to the car industry, they are the worst offenders in this. Hopefully by the time I can afford one, there will be legislation making entirely touch-screen dashes illegal and they'll have the usual 3 dials for climate control, that you can operate without having to take your eyes off the road.
Agreed. I've got an infotainment/navigation touch screen in my Subaru. It is really hard to use without looking at the screen. Even after 2 months of driving, I haven't been able to develop the muscle memory I have for the other controls in the car.
Luckily, the infotainment screen doesn't really host anything critical. My uses are navigation and phone, both of which should be set up while parked anyway. I just cannot imagine being able to change A/C settings only at a stoplight.
All the controls on the Tesla are 1.5" or larger which makes them easy to use out of your FoV while keeping your eyes on the road. Cellphones are a much smaller surface area and require a lot more focus.
How would you like those three dials to control the rest of the car systems? And isn't this what BMW tried to do ages back with that single 'iDrive' controller that everyone hated?
Uhh, the same way almost every non-luxury car made between 1960 and 2010 did it? A dial each for temperature, fan speed, and where the air is blowing. Plus a button for air conditioning and/or recirculate. No touch screens, no menus.
People have been using cars without touch screens for 50+ years. The UX is pretty much a solved problem by this point. Yet now car manufacturers seem to want to mess with something that worked great, just so their cars seem cutting-edge.
You do understand that there is more than just climate control to adjust in a vehicle these days? There's navigation and direction finding, traffic alerts, radar proximity warnings from other traffic, radio and entertainment systems, telephony functions, systems monitoring and alerting for various components, current engine and transmission status, location data, environmental data...
Seriously, though. The trend that annoys me the most is touchscreens in cars. A touchscreen is the last thing I want on a car. I want physical buttons that I can find and operate by touch alone without having to take my eyes off the road. Maybe voice control if it can be made sufficiently reliable, but that's about it.
It's like car manufacturers nowadays want people to crash their cars so that they can sell more of them.
DARPA researchers demonstrated this stuff on a "60 Minutes" segment a few months ago [1]. The main difference is that they were in a large empty parking lot so as to not unethically put non-participants in danger.
That work and earlier work (including that shown in the 2014 Black Hat presentation by the researchers in the present article) drew interest of the Senate [2]. Senator Markey's office produced a detailed report, and has called for the NHTSA and the FTC to develop standards to deal with these issues (and also the numerous privacy issues modern cars raise) [3].
A lot of comments here are accusing the "hackers" of negligence, but do not forget that the writer, camera crew, and editors of WIRED were fully in control of the demonstration. This happened in the context of journalism, not security research. Blame WIRED if you think they screwed up, not the folks behind the computer.
It was up to WIRED to ensure the safety of the demonstration, and evidently they failed given this passage,
> After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp
Seems to me they should have at the very least had a chase car trailing the demo car with a sign, flashing lights, or flags to alert nearby drivers.
I think people are quick to single out the researchers because there is more at stake when researchers act recklessly. When a journalist does something reckless, it's a problem with that journalist. The Constitution protects other journalists continuing to do what they do.
When a hacker does something reckless, it's usually painted as a problem with all hackers. Which then fuels calls for draconian laws which would hinder future research.
I think it's because the article clearly names the two researchers, while WIRED is a faceless organization. It's a lot easier to get mad and blame the one with a face.
I see lots of people arguing about the safety of how these guys conducted the hack. Okay, sure, there is probably an issue there of some degree.
But it's a very small issue compared to the fact that hundreds of thousands of vehicles are arbitrarily hackable right now, with more rolling off the assembly line all the time, and people are driving these around right now.
Why is most of the discussion here about the minor issue? Why is everyone so eager to derail discussion from the major issue? I thought HN was trying to be a reasonable place.
> Why is most of the discussion here about the minor issue? Why is everyone so eager to derail discussion from the major issue? I thought HN was trying to be a reasonable place.
I find these criticisms _extremely_ reasonable. Plus, the big discussion is not about them doing something illegal, the big discussion is about people here being totally fine with it.
And given that the topic you (I assume) want to discuss is something along the lines of "negligent behavior in technology", I also find it very relevant that negligence is countered with more negligence.
They might be reasonable in isolation, but they're not being levied remotely in proportion to the underlying flaw.
I just realized what the problem is: this is bikeshedding. Everyone knows about people driving around and feels qualified to have moral indignation in that area, whereas few people know anything about actual cars.
Please make an effort to read and understand my point.
Yeah, maybe there was a case when a couple peoples' lives were at stake but nothing happened.
The real issue is that tens of thousands or hundreds of thousands of peoples' lives are at stake RIGHT NOW, under conditions that are much less controlled than what people are deriding as uncontrolled conditions. But people are griping about the 1-2 instead of the 10,000-400,000.
How is this not dead simple to understand? I don't get it.
> Please make an effort to read and understand my point.
Please don't assume disagreement implies I didn't make an effort. I understand your point, I just vehemently disagree with it. Bringing up the word "bikeshedding" when discussing putting people in danger of death is just something I can not agree with.
> Yeah, maybe there was a case when a couple peoples' lives were at stake but nothing happened.
This is exactly why I so vehemently disagree. If we found all the people who were in traffic with them at the time, do you think they'd agree that doing experiments in public traffic next to them is a purely aesthetic issue? Recklessness isn't defined by the outcome, but by the possible outcome.
> The real issue is that tens of thousands or hundreds of thousands of peoples' lives are at stake RIGHT NOW, under conditions that are much less controlled than what people are deriding as uncontrolled conditions. But people are griping about the 1-2 instead of the 10,000-400,000.
Yes, and I'm saying both are very important issues. Experiments on public roads in actual traffic are something that should not ever happen. I'm also not sure I'd put the amount of people they endangered in the 1-2 range, since there was at least one other person in the car, one in the truck behind, and others in the cars that passed them.
I also wouldn't go so far as calling this controlled, since that usually means a controlled setting, safety precautions and such.
> How is this not dead simple to understand? I don't get it.
As said, I do understand, I just disagree. It would also make it much easier to talk about this if you weren't trying to insult me and others.
I agree that the flaw itself is a very big issue. I disagree that the way the experiment was done isn't. And nobody here is arguing that it's good that the cars have that fault, so it's quite natural for the discussion here to be around topics people disagree on.
You know what the perfect location and media would have been for such an experiment? A television car show with a dedicated track and a big audience of people driving cars.
Why? Because this kind of idiocy makes it easier to push for laws that limit security research. Yesterday, Google had a top post on HN about new regulation. And Google's asking for exemptions that benefit them, but would still block a lot of security researchers from publishing as they see fit.
When these guys acted recklessly, it hurt all of us. Even if the car company is 1000x worse.
Ethics isn't relative. You don't get to point to some other perceived problem and say "well that's worse, so what I'm doing is fine in comparison!".
Vulnerabilities in cars is an issue that needs to be fixed. Performing dangerous tests in uncontrolled environments is not a reasonable way to bring about change.
If they HAD caused an accident, how would you go about consoling the victims? "Oh, that sucks for you, but hey maybe your pain/death will convince the car companies to finally do something! Cool right!". It is not acceptable to introduce hazards to the general public to prove a point.
Should hackers actually kill somebody, I struggle to find a reason why the relevant automotive engineers and their managers shouldn't be charged and convicted of negligent homicide, or worse. After all, somebody had to make the decision to connect a radio receiver to the CAN bus. Others were aware of the wireless connection and chose not to remove it.
To be a professional is to have a duty to refuse to do stupid stuff like this, even if it's legal and even if your job depends on it. But is it legal? Why would we need any new laws for this? Connecting a wireless receiver to the same network that controls a car's brakes and steering seems to me like reckless endangerment. No need to wait for innocent people to die.
If history has shown us anything, it's that we cannot rely on software to separate two systems sharing a network. Only physics can do that. If we must have wireless for entertainment, then the entertainment and vehicle control networks must be air-gapped.
This seems blindingly obvious to me. What am I missing?
Criminal negligence is a high bar. We don't want to send people to jail for mistakes, accidents and miscalculations.
Civil liability is a lower bar. Ordinary negligence is essentially not using reasonable care. Whether reasonable care requires air-gapping a car's computers would be up for debate, but I think you'd have a good case.
Product liability is similar to negligence. It holds the builder, designers, sellers, etc. liable for design defects. But I'm not familiar with caselaw about how hacking vulnerabilities intersect with design flaws.
>If history has shown us anything, it's that we cannot rely on software to separate two systems sharing a network. Only physics can do that.
Yet, a shocking number of critical systems are exposed to the internet.
When it comes to industrial safety, the main question when facing an accusation of negligence is "what would a reasonable person have done in that situation". It takes into account things like:
- would a reasonable person have identified this feature as having an exploitable vulnerability?
- was it reasonably practicable to protect against it?
In this case, the manufacturer could argue that, in their review of the risks associated with their remote connection system, it was not reasonable to expect that it could be compromised and lead to a hazard.
Obviously, now that it has been demonstrated, there will be a much greater expectation that car manufacturers secure their remote access pathways.
"...whether the Internet-connected computers were properly isolated from critical driving systems, and whether those critical systems had “cyberphysical” components—whether digital commands could trigger physical actions like turning the wheel or activating brakes."
Shouldn't this be the most basic design consideration for any company building autos? The liability from litigation should bankrupt any company that doesn't prioritize this.
Well, self-driving cars should be a load of fun. There will be calls to isolate critical components, as well as demands that they be accessible to the likes of law enforcement so they can disable cars remotely.
So it will take legislation to sort out, since liability concerns need to be addressed along with the demands of law enforcement. Don't think for one minute they will accept self-driving cars they cannot disable, every one of them, for "safety reasons". It's similar to how they justify options to black out cell service in certain areas.
The obvious but security-oblivious way to do this is to just connect the entertainment system that has the internet connection to one of the car's microcontroller busses. Even if it just needs to send a single command, it's easier than adding another pin and another wire to the appropriate microcontroller on the other end. The problem is that everything on these busses is completely trusted, and there's no authentication. The window motor can't tell whether the signal to roll down the window came from a switch or from the entertainment system.
The simple solution is just to have a separate wire for everything, with source devices that aren't supposed to control destination devices never getting those wires connected. The problem is that a separate wire for everything creates a rat's nest of wiring, which is exactly why automakers went to microcontroller busses in the first place.
The level 2 solution is to have some sort of low-level filtering on the commands going out from a controller on the bus, so any command the entertainment system sends to turn off the transmission never makes it onto the bus.
The level 3 solution is to have some sort of cryptographic authentication of entities on the bus, so that the endpoint can decide what commands it's going to accept from what source.
As you go from level 1 to level 2 to level 3, the system is more flexible, adaptable, and upgradable, but it's more complex, and thus more brittle to attack. Sorting out how to handle this sort of thing is going to be a big challenge as IoT pushes into more devices.
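To make the level 2 idea concrete, here's a minimal sketch in Python (purely illustrative; a real gateway would be C on a microcontroller, and every message ID and whitelist entry below is invented, not taken from any actual vehicle):

    # Gateway-side whitelist: drop any frame a source module isn't
    # allowed to transmit. All arbitration IDs here are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class CanFrame:
        arbitration_id: int  # identifies the message type on the bus
        data: bytes

    # Which message IDs each module may put on the critical bus. The
    # head unit may only publish harmless display telemetry, never
    # powertrain or brake commands.
    ALLOWED_TX_IDS = {
        "head_unit":  {0x700},         # e.g. screen/status messages only
        "engine_ecu": {0x100, 0x101},  # rpm and throttle telemetry
        "brake_ecu":  {0x200},         # brake status
    }

    def gateway_forward(source: str, frame: CanFrame) -> bool:
        """Forward onto the critical bus only if whitelisted; else drop."""
        if frame.arbitration_id in ALLOWED_TX_IDS.get(source, set()):
            return True  # would transmit onto the critical bus here
        print(f"dropped frame 0x{frame.arbitration_id:03x} from {source}")
        return False

    # A compromised head unit trying to spoof an engine message is dropped:
    assert gateway_forward("head_unit", CanFrame(0x100, b"\x00")) is False

The IDs are fake; the point is just that the policy has to live in a gateway, not in the untrusted module itself.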
Encrypted and authenticated data on the bus won't happen anytime soon for cost reasons. Filtering the commands the controller can put on the bus seems reasonable, but it would only be useful if implemented on a second controller (which probably won't happen either).
I think the best approach is to secure the internet connection properly. Don't permit incoming connections at all and just permit a single outgoing TLS connection to the server of the manufacturer, define a very simple protocol and spend enough time to be sure the client is secure and validates everything.
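As a rough sketch of that outbound-only idea (the hostname and pin below are placeholder values, not anything real), the telematics unit would accept no inbound connections at all and pin the vendor's certificate on its single outgoing link:

    # Outbound-only, pinned TLS channel to the manufacturer. The unit
    # never listens on any port; it only dials out to this one endpoint.
    import hashlib
    import socket
    import ssl

    VENDOR_HOST = "telematics.example-automaker.com"  # hypothetical
    VENDOR_PORT = 443
    # SHA-256 of the server's DER certificate, baked into the firmware.
    PINNED_CERT_SHA256 = "0" * 64  # placeholder value

    def open_vendor_channel() -> ssl.SSLSocket:
        ctx = ssl.create_default_context()  # normal CA + hostname checks
        sock = socket.create_connection((VENDOR_HOST, VENDOR_PORT), timeout=10)
        tls = ctx.wrap_socket(sock, server_hostname=VENDOR_HOST)
        der = tls.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != PINNED_CERT_SHA256:
            tls.close()
            raise ssl.SSLError("certificate pin mismatch; refusing to talk")
        return tls  # speak only one simple, strictly validated protocol

Pinning means even a mis-issued CA certificate can't be used to impersonate the update server, which matters for a client that will live in the field for 15+ years.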
Given the risks to safety, it's necessary to use defense in depth and secure every layer by air gaps where possible, and a strict message whitelist where not. This might add $100 to the cost of each car, which is a ton of money when multiplied by millions of cars, but it simply has to be done.
Absolutely. But I can think of one very easy check that would solve many potentially serious problems.
Disable remote operation of car hardware when a conscious human is detected at the manual controls.
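A toy sketch of that interlock (the sensor signals here are invented for illustration; production cars do have seat occupancy and steering-torque sensors, but not necessarily exposed like this):

    # Reject remotely sourced actuation whenever someone is driving.
    def accept_command(source: str, seat_occupied: bool,
                       hands_on_wheel: bool, speed_mph: float) -> bool:
        if source != "remote":
            return True  # physical controls always win
        driver_present = seat_occupied and (hands_on_wheel or speed_mph > 0)
        return not driver_present

    # Remote engine-kill at 70 mph with a driver aboard is refused:
    assert not accept_command("remote", seat_occupied=True,
                              hands_on_wheel=True, speed_mph=70.0)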
For some reason, this reminds me of Star Trek episodes where the crew has to transfer operation control of the Enterprise from the bridge down to engineering, or to another Starfleet ship. Even on a sci-fi television show, whenever that happened, it seems like they always had to enter a secret security code or have multiple bridge officers give their authorization codes.
It speaks poorly of your product design when writers for a television show give more thought to security than you.
It's usually convenient to a plot to have characters do things.
In real life, people generally prefer not doing things.
Which isn't meant to excuse a problematic implementation like is seen in this article, I'm just not sure the writers were actually sweating the system details when they did that stuff.
While I don't think the original writers paid much attention to any of that, by the time Star Trek: the Next Generation began, computer security cracking was present in the popular culture. That's how the invading Borg were defeated, after all. At some point, when the plot for a current episode demands that it be possible for a ship to be controlled remotely, they then have to ask the continuity expert how to fix it so that the newly introduced thing doesn't significantly impact previous canon. That burden adds up across multiple seasons of multiple spinoffs.
The plot solution didn't even have to make sense. All they need is some technobabble, ready to spout for any fan wearing plastic ears who might stand up at a con and ask, "If Enterprise had capability X in episode Y, why wasn't that used in episode Z?"
In this case, it is very reasonable that someone, somewhere, might have asked, "What should we do if this command is used while the owner is driving along a busy highway at 70 mph, and executing it would stall out the engine?" This is a question that would provoke a stop-and-assess moment in even the most dysfunctional software company I have ever worked in.
From the architectures we typically see for in-car computer networks, it looks like no one is asking these questions.
I was addressing the question of wireless access to vehicle systems, not trying to justify any particular feature that might be included in such an implementation.
It probably depends on how the exploit works. If you can OTA update a firmware that allows wireless control, that is a huge flaw.
But this article doesn't really talk about how the exploit is triggered. This is pure speculation, but I bet it requires some physical access to install. And then the wireless control works.
Why do I suspect that? Why else would the journalist have to travel all the way to St. Louis to test it?
Imagine how huge the story would be if these guys disabled a car they never got within 1000 miles of over the internet.
There is probably some CPU in the navigation/entertainment display that needs access to the CAN bus for stuff like warnings, speed, tire pressures, etc., and is also connected to the UConnect thing for entertainment purposes. You can't air-gap it, because you need the car data.
So the lazy solution is to just firewall it: make sure the entertainment system's CAN bus controller only receives data from the bus and never transmits onto it. Maybe you encrypt all data transfers on the bus for good measure.
But with temporary physical access, you can reflash that controller to let it transmit commands it receives from the internet-connected entertainment system.
Given that cars get recalled all the time for "this one part is kind of flimsy and might break 3% of the time", I am not sure why "some guy in China can drive your car off a cliff" is not grounds for an immediate and full recall.
If you talk to auto manufacturers in a language they understand, they will listen.
Experimental Security Analysis of a Modern Automobile
"Even at speeds of up to 40 MPH on the runway, the attack packets had their intended effect, whether it was honking the horn, killing the engine, preventing the car from restarting, or blasting the heat. ... In particular, we were able to release the brakes and actually prevent our driver from braking; no amount of pressure on the brake pedal was able to activate the brakes. Even though we expected this effect, reversed it quickly, and had a safety mechanism in place, it was still a frightening experience for our driver."
I don't know if it will do much good, even if it did eventually lead to a recall, FCA (parent of Jeep) is under investigation by the NHTSA for allegedly poor handling of recalls.
I revisited this thread and thought: How would I go about running these tests and creating awareness for this issue?
A dynamometer would cover the vast majority of what they wanted to show. There was no need to create the danger they created with this vehicle. They really didn't know how the driver would react; "don't freak out" guarantees nothing. A professional driver (like a stunt driver) would have been far more appropriate.
The business about disabling the brakes should have been done with a pile of hay bales or something like that in front of the car.
For exposure they could have contacted any number of TV stations or networks who would have jumped on this immediately.
In all, the choices they made were reckless, stupid, dangerous and potentially criminal. I don't doubt their tech credentials at all. They are tech-smart people, no question about that. However, they have proven, beyond a reasonable doubt, that they are poster children for that stereotype of socially clueless engineers and/or the other stereotype of scientists/engineers who are so into what they are doing that they are completely blind to the idea that they could seriously harm people through their careless actions or inaction.
Wow. Just because you are savvy enough to do the research does not make you a researcher. These two really need to rethink the way they are "testing" this and perhaps educate themselves on ethics in research.
Their judgement collectively was worse than a pack of 5th graders with high grade fireworks.
If it were not demonstrated under real conditions, the car companies would just say "this was a fake test not representative of real-world conditions, isn't that true Mr. Journalist?" and the journalist would have to admit that that was true and then they would say "Under real-world conditions our cars are safe; customers have nothing to worry about."
This has been their playbook about everything for a long time so I don't know why you think it would be different in this case.
There is no way a "not real-world conditions" argument could be made if this same test was done on a test track. No automaker would even try it because it would generate even more bad press. The "researchers" did the test on a public, in-use highway for better press/cool factor. Completely irresponsible.
In response, the journalist would say "This was absolutely a real test; there was nothing fake about it. The conditions were as real-world as you can get: the vehicle was being operated at highway speeds with an average driver behind the wheel, the car's systems were connected to the internet in the exact same way as every other car of that model is, and the attackers were operating their exploit from a remote location as would be the case with every other vulnerable vehicle on the road. That the test was performed on a closed track is obviously for safety reasons as we did not want to endanger the public by causing the vehicle to fail in the middle of a busy highway."
For bonus points, throw in something like: "It's no different than your vehicle's advertisements displaying 'performed on a closed track' -- surely you're not arguing that the vehicle's performance in those advertisements is completely fake and you're deceiving consumers with said non-real-world advertising, are you?"
"On the other hand, I'd rather that they be doing this work with the way they did it than not at all..."
That's such a stupid tradeoff. Putting it as an either/or is silly. Doing this safely and demonstrating the alarming conclusion are not mutually exclusive.
I'd go as far as to say that the way they demonstrated this actually diminishes the message of the danger of this exploit and puts the focus on their stupidity.
> I'd go as far as to say that the way they demonstrated this actually diminishes the message of the danger of this exploit and puts the focus on their stupidity.
Doing it the way they did clearly increases the impact of their message. To believe otherwise belies ignorance of the way information gets spread in our culture. The question is only if the increased impact was properly balanced against the increased risk.
This is very serious, because it can be used on a large scale and has terrorism potential. This could be used to kill people or disrupt an entire city. Where's Homeland Security on this? This is their job.
Meanwhile, do not buy a Chrysler product with the "connectivity group". It's an option that costs about $500-$600.
So if I'm understanding this correctly, the initial vulnerability is remotely exploitable and the fix is a firmware patch. Why wouldn't the manufacturer use the same exploit to patch all affected vehicles rather than calling them in for service?
The manufacturer presumably has permission to do so. As for the attackers patching vulnerable vehicles, I hadn't thought of that, but of course that's a possibility as well. Pretty much the definition of white hat. :)
One would think that the car industry is the one that has learned about road safety the hard way, and that the experience would have manifested as extreme caution when adopting and implementing anything new, open, and complex.
Sadly, it is starting to look like everything they learn, they learn by trial and error, not by doing the things we take for granted from an engineering and IT perspective.
Every single highway crash involving a loss of control is now potentially a high-end assassination, including those that have taken place in the last few years.
How many people do you think will be murdered this way before investigators and the justice system catch up?
When I studied real time systems, it was clear that critical systems (in this case the brakes, accelerator, wheel) needed to be in a physically separate network from non-critical ones (music, air conditioning). I guess it must be cheaper to build all in a single network, but it sounds irresponsible.
Also, this test should not have been done on a public road. That was irresponsible.
There often are multiple CAN buses (though there is a push toward a single high-speed bus), and the ECU (or BCM and so on) is supposed to make sure the commands are safe. In practice the buses often communicate with one another (manufacturers want to sell features like remote start, or use cheaper systems such as an electric parking brake), and the module does not do as much verification as it should.
> The most disturbing maneuver came when they cut the Jeep’s brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch
I was in that situation once. No brakes, high rocks on one side, a 100-meter cliff on the other, 20 km of downhill in front of me. I guess I will not be buying a Jeep anytime soon.
Does Wired really wonder why so many of their readers have ad blocking software? [1] I can't even read the article on their website because every ad I see is covering up some sort of text of the article. None of the ads have a close, hide, or dismiss button either so I can't just go and hide them.
This is scary. Malice or incompetence, either way I think it's unacceptable for systems handling the brakes and other crucial functions of the car to be anywhere close to interacting with an internet connection.
I am going to stick to old style cars for a while. This is really scary.
I'm not going to comment on whether or not the highway patrol should have been called - I'll just say I understand where poster tombrossman was coming from.
However, I fully agree this was a ridiculous stunt. They could have gotten the same results by demonstrating (on the highway, if they insisted) the air conditioner going full blast, the radio, and the picture of the hackers on the screen. Anything else (cutting transmission, obscuring visibility) should have been saved for a safer environment. The point still would have been made.
And it's got nothing to do with the vehicle and driver itself (though I wonder how the hackers knew the exact driving situation - was it plastered with cameras?) - what if two unrelated vehicles got in an accident for some reason and the test driver had to get out of the way, but couldn't?
And to make it worse, the cranked radio made it hard for the tester to communicate with the hackers. Very dangerous stunt.
Also, and I know it was unrelated to this particular hack, but if the UConnect recognizes voice commands (I assume so), and sends it back for processing, then might it not also be able to bug (eavesdrop) on the car's interior?
Many disturbing revelations came out of this, and I applaud them for making it known, but I criticize them harshly for the cavalier way they endangered public safety.
I feel like, public safety aside, people should be mad at these researchers because they give credibility to every ignorant politician, prosecutor, or journalist out there who says that all hackers threaten the public good. How are you supposed to draw a line between blackhat and ethical hackers when the "ethical" ones endanger public safety all the same?
Well, luckily my Chrysler, despite being only a year old, does not have this connectivity. It does have Uconnect, which I despise; I keep contacting Chrysler to demand that they offer the ability to use Apple CarPlay or the Google equivalent. To be fair, there has yet to be a vehicle with nice, easy-to-use controls for radio or media.
Same situation, same sentiment. Chrysler is apparently "on the list" for CarPlay compatibility, but across all marquees, there are only two cars you can buy today that have it installed, and both are Ferraris. (The 2015 Volvo XC90 may have some form of CarPlay "preparation".) Don't hold your breath for backwards compatibility.
All I really want is two things:
1. The Voice button on my steering wheel to activate Siri. (Not the horrible UConnect voice assist.)
2. Waze maps on screen. (UConnect navigation is shit and not worth the price.)
As big of dicks as these researchers are, I just have to say, to anyone out there working on software to run cars, airplanes, robots, or other mobile vehicles: some day, within the next 5 or 25 years, it's pretty likely some nut job is going to use an exploit to take control of one or more of these vehicles and crash them, or use them as remote-controlled swarm weapons, possibly killing lots of people.
If you're writing that software, make sure you do a really, really good job on security. Because no one wants to be the guy 'git blame' shows wrote the exploitable feature that led to ??? deaths.
The industry really should have stringent standards that prevent ridiculous breaches like this. I would also say there should be simulators (or physical demo vehicles) available online/open source that people can pen-test against for prize money. And maybe write all the code in Rust?
I don't get it. Everyone seems to be either upset at the researchers for their test or defending them.
Why is no one upset at car companies putting tech in our cars that allows for remote shutdown of said car. Is this what we bailed them out for?
Hopefully people selling armored cars and armored trucks have good pen testers on their teams. Run flat tires and armor-plated doors don't help much if an attacker can shut down the engine and open the doors remotely.
I own a 2015 Jeep Cherokee and have poked around with Uconnect and the API services it exposes. There's a whole new world of exploitation (and eventually modding) coming.
Hah, I found it really funny as I scrolled down there's a big ad for a Fiat in the article: http://i.imgur.com/rSyYPO4.png Fiat owns Chrysler who owns Jeep... maybe not the best marketing idea to advertise your cars in an article about exploiting them with potentially catastrophic results.
> The two researchers say that even if their code makes it easier for malicious hackers to attack unpatched Jeeps, the release is nonetheless warranted because it allows their work to be proven through peer review.
Huh? If they have a video of them turning a car off remotely, do they really need peer review of the details?
Yes, there have been numerous times when security researchers show video and then the peanut gallery argues if it was staged or not. There is just no way to be sure otherwise.
Research can be done to drive a point home without turning it into a drama; you can demonstrate the science without the Hollywood, so to speak. I think what was done here could have been demonstrated without the risk that was taken, and taking that risk was poor judgement.
Exactly. But at least you'd break the current control link and possibly/hopefully be able to stop and steer.
Maybe, in addition to breaking the control link on a panic stop, stopping and steering could fall back to a non-commanded mode that relies less (or, if the design allows, not at all) on software commands.
Isn't the car calling their service center / 911 one of the major selling points of the service?
The root problem is that they were able to flash an ECU with custom code. From there they are 'trusted' on the vehicle network and can trigger or emulate any other component.
Requiring the firmware image to be signed, or not accepting a bootload from the physical interface that's connected to the internet would be a more comprehensive solution.
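A sketch of what that signing check amounts to, in Python for readability (a real bootloader would do this in ROM-resident C; the key bytes below are placeholders, and it assumes the third-party `cryptography` package):

    # Refuse to flash any image not signed by the vendor's Ed25519 key.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PublicKey,
    )

    VENDOR_PUBKEY = bytes(32)  # placeholder; real key burned in at manufacture

    def verify_and_flash(image: bytes, signature: bytes) -> bool:
        pubkey = Ed25519PublicKey.from_public_bytes(VENDOR_PUBKEY)
        try:
            pubkey.verify(signature, image)
        except InvalidSignature:
            return False  # reject: not signed by the vendor
        # ...write the verified image to flash here...
        return True

Pair this with refusing bootloads entirely over the internet-facing interface, and a remotely compromised head unit can no longer make itself 'trusted' on the vehicle network.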
There is a patch available, but that is not a recall. A recall takes some time under the best circumstances and FCA pushes back hard on expensive ones.
The feds have been breathing down their necks lately due to too many defects. I'm sure they brought the hammer down hard (behind the scenes) on this one, thus forcing them to announce the recall this morning.
That was a lot of uninformative text to plow through...
Apparently someone has found a remote exploit that affects some model of Jeep. It requires an attacker to find the IP address of the Jeep. Which implies that a Jeep has an IP address. The communication between the Jeep and the world is something called Uconnect.
I'm still curious if this can be done without physical access to the car in advance. The last time one of these showed up, it was very understated that all the remote access stuff involved reprogramming an ECU and plugging in a phone you control.
The article doesn't provide any clear information on whether this works purely remotely.
EDIT: Scrap that, seems it does - weird that the article doesn't lead with that more prominently. In which case wow - pinging the network and grabbing GPS coordinates for cars?
Calling the police was indeed the right thing to do.
Maybe next time, the hackers can test on the vehicles driven by the car executives, while they are driving, have their family in the car with them, etc.
I recall that last time they used a piece of hardware to connect to the CAN bus via cellular. Are they now able to control the CAN bus via the infotainment system? Does it have its own cellular transmitter?
meh. cars stall all the time. if they hadn't gone for the freakout factor with the reporter, there wouldn't be high profile press, embarrassed automotive executives and politicians scrambling to get a handle on the issue.
the real issue is that the automakers are producing fundamentally dangerous vehicles and the federal government is allowing it. these vehicles could be exploited maliciously to cause serious physical harm or death.
this is actually a problem. not some onetime stall of a jeep on the highway.
Did anyone bother to read the full article? If so, you would find out that it was a [somewhat] controlled experiment.
> To better simulate the experience of driving a vehicle while it’s being hijacked by an invisible, virtual force, Miller and Valasek refused to tell me ahead of time what kinds of attacks they planned to launch from Miller’s laptop in his house 10 miles west.
> Instead, they merely assured me that they wouldn’t do anything life-threatening.
> Then they told me to drive the Jeep onto the highway. “Remember, Andy,” Miller had said through my iPhone’s speaker just before I pulled onto the Interstate 64 on-ramp, “no matter what happens, don’t panic.”
It's surprising that the comments critical of how the test was performed publicly equate the exploitation with guns:
> How about they remotely poke around your husband or wife's car and explore, as long as they promise not to intentionally trigger anything?
> Calling the cops on a loud neighbor might not be acceptable, but calling the cops on a neighbor firing a gun in the general direction of your house certainly would be.
> Anyone could shoot up a public place... should amateur researchers be showing up in malls with firearms to test preparedness?
The lack of sound judgement _and_ arguments is astounding.
Were they really at a university? It looks like they were "inspired" by researchers at universities, which universally have review boards that stop idiots from disabling cars on an open road to see what happens.
Edit: The irony I see is not political; it is in saying "They're not the sort of people that do some research before forming an opinion" while making a declaration that was not especially well researched.
I also probably disagree with your characterization of the people defending the researchers (but who knows what your definition of "leftists" encompasses).
I think there is a bias at work here, but not the one you're describing. Some commenters might have felt more free to openly criticize these researchers' methods before the presence of the police was invoked. After that point, the wagons were circled and any such criticism became subtly entangled with implied support for coercion and violence by the state.
After all, the thought-terminating mantra on HN isn't "only talk to the police when the situation warrants," it's "never talk to the police."
> Back then, however, their hacks had a comforting limitation: The attacker’s PC had been wired into the vehicles’ onboard diagnostic port, a feature that normally gives repair technicians access to information about the car’s electronically controlled systems.
> A mere two years later, that carjacking has gone wireless.
So the issue is right there in the surrounding text.
Because they did it on a public road, intentionally causing him to lose control of the vehicle.
Bunch of idiots if you ask me, no better than the clueless people you see driving distracted while talking on their cellphones. A 1-ton vehicle is deadly in the wrong situation.
Like many other people mentioned, there are many ways to demonstrate this in a safe, controlled manner rather than out in the wild.
Jeep and uConnect have endangered people's lives – likely an order of magnitude more people. If it's as simple as that, do you support criminal charges for them too? Because playing in traffic almost seems more intelligent than hooking hundreds of thousands of multi-ton vehicles up to the internet.
They demonstrated that hackers can take lives using a laptop and a cell phone. And they displayed their own picture on the dashboard screen while doing it.
Yes, elite. I'm not bringing it back; they already did.
There are perfectly good test tracks out there, that'll let you screw around to your heart's content. Hell, drive around a walmart parking lot at 3am if you have to. Doing this on public roads? Sorry, you need to follow the laws like the rest of us.
I agree that a third party calling the police might be overkill but "these guys are elite, their demo was badass" has to be the worst possible justification.
I thought about this thread during my drive home. I thought about hacking my own car's systems while driving...
I immediately realized: no WAY. No, no, no. What if I accidentally BRICK the thing, while driving?
Remember that model of car that lost steering and braking when the ignition key fell out due to a manufacturing defect? What if the Jeep would lose steering when bricked?
It is morally unforgivable to even try to change the radio station while a human is sitting in the "brick" if said brick is moving at 70mph. Especially since the software is known to be faulty.
"Known to be faulty" does bring us back to the manufacturer. I think that much of the anger directed at the hackers should be directed back at the manufacturer.
That being said... Back to the original topic of this particular comment: I apologize. Thanks HN for making me think a little harder.