The author explicitly states that they believe Tesla is correct to count only miles driven with Autopilot enabled. Specifically, they say:
"I agree with Tesla’s methodology on Autopilot mileage because the road conditions under which a partial autonomy system is rated for operation (highways, clear lane markings, etc) are systematically different from manually-driven miles."
But if that is the premise, then you must also restrict the comparison crash data to manually driven cars under those same circumstances. The majority of automobile accidents do not occur on freeways -- which are the most Autopilot-friendly roads there are.
We can't easily work out that traffic split, but we can do something else that should be just as effective:
Compare accidents per million miles driven in Autopilot-enabled cars to accidents per million miles driven in regular vehicles. If Autopilot meaningfully improves driving safety, we should expect the average for Autopilot-enabled cars to be lower than for other cars by a statistically significant margin.
If we want to get specific: per [1], more than 50% of accidents occur at intersections. Given that Tesla's sample will dramatically under-represent intersection crashes, we could go to the extreme and say their crash rate should immediately be less than half that of non-Autopilot vehicles. (This is an exaggeration, but given that they're not being honest I don't really care.)
Personally I'd also control for the class of car we're comparing -- I suspect the stats for high-end cars differ from those for low-end cars, which differ from sports cars, etc.
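The comparison proposed above can be sketched as a two-sample Poisson rate test. The figures below are purely hypothetical stand-ins, not Tesla's actual numbers:

```python
import math

def crash_rate_z(crashes_a, miles_a, crashes_b, miles_b):
    """z-statistic for the difference between two crash rates,
    treating crash counts as Poisson-distributed (normal approximation).
    |z| > 1.96 is significant at roughly the 95% level."""
    rate_a = crashes_a / miles_a
    rate_b = crashes_b / miles_b
    # Standard error of the rate difference under the Poisson model.
    se = math.sqrt(crashes_a / miles_a**2 + crashes_b / miles_b**2)
    return (rate_a - rate_b) / se

# Hypothetical figures: crashes and total miles for an
# Autopilot-enabled fleet vs a baseline fleet.
z = crash_rate_z(crashes_a=30, miles_a=100e6,
                 crashes_b=200, miles_b=400e6)
print(f"z = {z:.2f}")  # z ≈ -3.07: the lower rate is significant here
```

With real data you would also want the stratification the comment asks for (road type, vehicle class) before trusting any such z-score.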
No, she is saying you shouldn’t just count the miles where autopilot is turned on. Her reasoning is that you can’t compare autopilot-on miles to regular cars because there is no comparable data for regular cars.
That might be acceptable if you compared Tesla autopilot capable cars to other luxury cars, where the only significant difference in safety is autopilot.
But that wouldn’t account for things like: maybe certain Tesla safety features save a lot of lives but get canceled out by a very dangerous autopilot.
> That might be acceptable if you compared Tesla autopilot capable cars to other luxury cars, where the only significant difference in safety is autopilot.
This is the comparison they should have done. Comparing with just the national average includes all the poorly maintained 20-year-old beaters being driven by teenagers.
They would do better to look at the few serious autopilot crashes Tesla has had. They tend to involve the car driving straight into a large stationary object, in situations where a much cheaper car with a basic automatic emergency braking system (AEB) would refuse to collide and would just stop.
It looks like Tesla’s over-sophisticated system, which builds a live 3D map of the whole environment, filters out large non-moving objects too aggressively, believing them to be things like overhead bridges that the car can pass under.
It needs a supplemental and independent AEB system based purely on highly directional forward facing sensors which can confirm that the path ahead is clear. This is so cheap that it is becoming common on mainstream cars like Hondas and Toyotas. The current model S probably already has the sensor hardware onboard to enable this extra redundant safety system in software and save lives.
> They tend to involve the car driving straight into a large stationary object, in situations where a much cheaper car with a basic automatic emergency braking system (AEB) would refuse to collide and would just stop
> Volvo's semi-autonomous system, Pilot Assist, has the same shortcoming. Say the car in front of the Volvo changes lanes or turns off the road, leaving nothing between the Volvo and a stopped car. "Pilot Assist will ignore the stationary vehicle and instead accelerate to the stored speed," Volvo's manual reads, meaning the cruise speed the driver punched in. "The driver must then intervene and apply the brakes.” In other words, your Volvo won't brake to avoid hitting a stopped car that suddenly appears up ahead. It might even accelerate towards it.
> The same is true for any car currently equipped with adaptive cruise control, or automated emergency braking. It sounds like a glaring flaw, the kind of horrible mistake engineers race to eliminate. Nope. These systems are designed to ignore static obstacles because otherwise, they couldn't work at all.
> “You always have to make a balance between braking when it’s not really needed, and not braking when it is needed,” says Erik Coelingh, head of new technologies at Zenuity, a partnership between Volvo and Autoliv formed to develop driver assistance technologies and self-driving cars. He's talking about false positives. On the highway, slamming the brakes for no reason can be as dangerous as not stopping when you need to.
I don't work on these radar systems, but have played a lot with ultrasound for robotics. It seems possible these radar systems give almost continuous false positives. You could get a pretty strong return from potholes, slightly sunken drain covers, etc. Using doppler to reject any return that matches the car's own road speed just leaves the returns from moving vehicles. You can then track speed vs distance to sanitize that data. It becomes a tractable problem then to detect that the car in front is slowing quicker than you are. But telling the difference between a stationary car and a slightly misaligned bridge expansion joint is probably not trivial.
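The Doppler-rejection idea described above can be shown with a toy filter. The returns and tolerance here are invented for illustration; real radar processing is far more involved:

```python
def moving_targets(returns, own_speed, tol=0.5):
    """Keep only radar returns from objects that are themselves moving.

    Each return is (distance_m, closing_speed_mps). Anything stationary
    (pothole, drain cover, bridge joint, stopped car) closes at our own
    road speed, so rejecting returns near own_speed throws out the
    clutter -- and, unavoidably, the stopped car along with it.
    """
    return [(d, v) for d, v in returns if abs(v - own_speed) > tol]

own_speed = 30.0  # m/s, our road speed
returns = [
    (80.0, 30.0),  # sunken drain cover: closes at exactly own speed
    (60.0, 30.2),  # stopped car: within tolerance, rejected as clutter
    (40.0, 5.0),   # lead vehicle doing 25 m/s: kept and tracked
]
tracked = moving_targets(returns, own_speed)
print(tracked)  # [(40.0, 5.0)] -- the stopped car is gone
```

This is exactly the trade-off the thread discusses: the filter that removes continuous false positives is the same filter that discards the stationary vehicle ahead.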
The Tesla has multiple different sensors, including ultrasound, radar, and cameras. Fusing information from the different sensors should help correct for such errors.
> These systems are designed to ignore static obstacles because otherwise, they couldn't work at all.
Just to be clear, it does not automatically follow that therefore it is acceptable for them to be on the road like this. That argument must be based on efficacy, not feasibility.
My understanding is that statistics show that these emergency braking systems do, on balance, reduce the severity and frequency of collisions. Perhaps that is because drivers of these vehicles are not being given the opportunity to stop paying attention to the road, but once that option is offered, the minimum acceptable performance for obstacle detection goes up. It's no use arguing that the drivers of level-three cars are supposed to be paying attention at all times, because people are human, and don't do passive attention well - acceptability criteria must be based on what actually happens, not what is supposed to happen.
My wife's reasonably cheap Subaru has both adaptive cruise control and AEB. It does not have the Volvo bug: the AEB and adaptive cruise are separate systems, and the AEB has priority.
Is there any public data on how many false-positive obstacles these systems detect and then ignore? Or are vendors too ashamed to admit they have any false positives?
The actual "fleet" of Tesla (and of others experimenting in the autonomous vehicle field) consists exclusively of decently new and perfectly, or nearly perfectly, maintained vehicles.
Besides and before any other consideration, the way a modern, newish, well-maintained car handles (steers, brakes, etc., particularly in an emergency) is hardly comparable with the way the "average" car does. Think of unbalanced brakes or worn-down tyres, but also of the undeniable truth that your - say - 1998 pickup won't ever be as stable as a sports car.
Since there is no real data about the condition and capabilities of the comparison group (all the other vehicles and their drivers), each and every comparison tends to be in favour of the Tesla for reasons that have nothing to do with the actual automation.
Even an "internal" comparison (Teslas on Autopilot vs. Teslas driven manually) wouldn't, IMHO, be very representative, as Tesla drivers are - if not an elite - a distinct group of people: not too young, not too old, possibly with an interest or passion for cars. "The rest" will comprise everyone with a license, including inexperienced drivers with a tendency to take too many risks and elderly people with slower reaction times.
"Delaying the roll-out of partially-autonomous vehicles costs lives. This conclusion assumes that (1) automakers make steady progress in improving the safety and reliability of their partially autonomous vehicles and (2) drivers are comfortable enough with monitoring the partially-autonomous vehicles so that new sources of error associated with the transition to and from manual and autonomous control do not increase fatality rates."
In other words, if you assume (1) something which we can't actually know unless we have the ability to predict the future, and (2) something we already have strong reason to believe isn't true, then we must roll out partially autonomous vehicles as soon as possible and anyone who questions this is basically killing people.
I'd strongly argue against 2. Telling people that paying attention isn't important 90% of the time is not going to increase attention the random 10% of the time when it does matter. Google halted all testing for level 3 self-driving cars because of this. They caught employees literally falling asleep.
> This model shows that rolling out just as safe or a little safer partially-autonomous vehicles by 2020 will save 160,000 more lives over 50 years than a scenario that waits until 2025 to roll out almost perfect autonomous vehicles. Delaying the roll-out of partially-autonomous vehicles costs lives.
Sure, rolling out autonomous vehicles will save lives if you assume that they are safer than humans. But right now all the evidence points to autonomous vehicles being substantially more dangerous than humans. Waymo reports disengagements every 5,500 miles, and estimates that something like 10% of disengagements would have led to a collision.
And taken as a whole, self-driving cars have already killed one person with only about 10 million miles driven. It would take the average human driver more than 100 million miles to kill someone.
Edit: My comment refers to safety of level 4/5 autonomy, not the current Tesla Autopilot.
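A rough sketch of the arithmetic those figures imply (the 10% collision estimate is the comment's paraphrase, not an official Waymo statistic):

```python
# Figures quoted in the comment above -- rough, not official statistics.
miles_per_disengagement = 5_500   # Waymo's reported disengagement rate
collision_fraction = 0.10         # estimated share that would have crashed

miles_per_av_collision = miles_per_disengagement / collision_fraction
print(miles_per_av_collision)     # 55,000 miles per would-be collision

# Fatality comparison: one AV death in ~10M miles vs the human average.
av_miles_per_fatality = 10e6
human_miles_per_fatality = 100e6
print(human_miles_per_fatality / av_miles_per_fatality)  # humans go ~10x farther
```

Note that collisions and fatalities are different denominators, so these two ratios are suggestive rather than directly comparable.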
The real interesting question is whether it's worth it to deploy dangerous cars, risking today's lives to help save future lives. It's not that interesting to ask whether deploying hypothetically safer cars will save lives.
1 does indeed require predicting the future, but if we can all agree that full autonomy is a tractable problem that will eventually be solved, then it stands to reason that progress will be made toward that goal, barring some catastrophe. How steady that progress is remains to be seen, but I think we can reasonably take assumption 1 as a given, even with a generally skeptical view.
I don't think "we all" can agree on that, actually. It seems that more and more people are coming to the conclusion that the last 10% of full autonomy is roughly equivalent to solving general intelligence—at least given our current road infrastructure and driving-related behaviors and expectations.
I think that we will make steady progress towards cars that are safer than the average driver. I think that to drive like a human using only binocular visual cues is less obvious, but a computer can process so much more information and have such a varied array of sensors that it can "cheat" its way into being better without as much intelligence.
I also claim that full autonomy is not required, just sufficient autonomy that it can safely hand-off to the driver when confused (the ability to recognize an untenable situation soon enough to pull over to the side of the road is sufficient for that, beeping 2 seconds before crashing is not).
I'm also unconvinced that it will be economical for companies to make self-driving cars any time in the near future, though, because even if they are (for example) twice as safe as an average driver, that's a huge number of lawsuits, and juries award much higher damages when the defendant is a corporation rather than a (possibly dead or permanently injured) person.
I've certainly come to that conclusion, although one person is not a trend. I'm in the minority among the people I talk to. I think more people see them achieving something that looked impossible, so something even more impossible doesn't seem like an obstacle.
People who browse threads in places where actual engineers working on this stuff post their thoughts
vs.
People who get all their information about self-driving cars from "journalists"
The former group will pretty much unanimously tell you we are at least 10 years from a full solution, if it's even possible. Most of the latter group seem to think it's already here.
"Solving general intelligence" isn't comparable to a 10-year deadline at all.
But still, which of the few companies that work on this has their engineers expressing that 10-year figure? I could likewise claim to know things from good expert sources without naming any specific one.
I was more referring to the "last 10% of full autonomy" for self-driving cars than the solving general intelligence bit (despite the fact that GP was sort of equating the two). Guessing if/when we'll solve general intelligence is a fool's game.
> "Solving general intelligence" isn't comparable to a 10-year deadline at all
As I just mentioned, I don't think it's wise to put any kind of timeline on that particular milestone.
> which of the few companies that work on this has their engineers expressing that 10-years figure?
Well I didn't exactly go around asking every commenter who they worked for, but my gut-level statistician tells me it's been over 90% of the people on various relevant forums (including here) who seem like they know what they're talking about.
All I know is that if it was as easy as a lot of folks were projecting 5 years ago, it would already be here.
It's good that there's starting to be more focus on whether partially autonomous vehicles are actually safer, but what about other aspects?
We have social networks now that have "terms of service" and who are kicking out people they disagree with. Is it too far out there to suppose that an autonomous vehicle will come with similar "terms of service"? What if the vehicle refuses to drive you to a competitor's store? Or to a political rally that the car company doesn't like? Or to a gun store? Or to a religious gathering?
Since most drivers control and own their cars, these are not concerns today, but they likely will be. It seems like these are bigger decisions to hash out than whether a car is merely safe. It doesn't matter if the car is safe, if it safely takes away your freedoms.
Tesla has data on how much time their cars spend on autopilot. If Tesla wants to promote their crash rate, they need to disclose the raw data.
Estimating total vehicle miles for a system that only works properly on freeways is guesswork. The accident rate of interest is Autopilot miles on freeways vs. all vehicle miles on freeways. Here's the US data summary for all vehicles.[1] See table 35, which breaks out divided highway data.
The focus on fatal collisions is misplaced. There are far more non-fatal collisions, which provide much more information. Evaluating Tesla's "autopilot" is about measuring driving error, not crash survivability.
There needs to be at least some emphasis on fatalities. If autopilot reduces total collisions but increases fatalities, you might draw the wrong conclusion about its safety. It's early but from the incidents we know already, it seems like a decent probability that this is the situation that Tesla is in, lower crash rate but more likely for those crashes to be fatal.
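The scenario above can be made concrete with made-up numbers: the overall fatality rate is the crash rate times the probability that a crash is fatal, so a lower crash rate can still produce more deaths. These figures are purely illustrative, not measured rates:

```python
def fatalities_per_million_miles(crashes_per_mm, p_fatal_given_crash):
    """Overall fatality rate = crash rate x P(fatal | crash)."""
    return crashes_per_mm * p_fatal_given_crash

# Hypothetical numbers, chosen only to illustrate the possibility:
baseline  = fatalities_per_million_miles(2.0, 0.005)   # 0.010 deaths/M miles
autopilot = fatalities_per_million_miles(1.5, 0.008)   # 0.012 deaths/M miles
print(baseline, autopilot)
# 25% fewer crashes here, yet a higher fatality rate per mile driven.
```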
The RAND article says the most lives can be saved if Autopilot is rolled out once it is 10% better than humans. However, we won’t have statistical confidence until billions of miles are driven. If we think this problem can be solved, we have to take a risk and go for it.
One thing I don’t understand about the current way autonomous vehicles are going is avoiding more of a “positive control” type of setup. I grant it’s more complicated and it means you won’t have it right away, but shouldn’t we come up with some highway design standards that will make autonomous vehicles safer, then some signaling method to say “this is on” or during construction say “this is off”? It just seems a smarter more incremental way to go. That way you can engage “autopilot” only in conditions it’s known to work under.
"humanity could be ushered into a new economy where driving is a hobby, only for sunny days along clear roads with a view. The struggles and tedium of the daily commute could be handled by autonomous vehicles, traffic accidents could fall to nil, passengers could focus on working and relaxing in their mobile offices, and the elderly, disabled, and blind could have considerable mobility and autonomy."
Sounds remarkably like a world with good public transport.
The best you can hope for is public transport that will get you where you're going on time.
There's no public transportation that will provide privacy, consistent comfort, or even a seat. Never mind the most basic problem with public transportation—the other passengers. Any time you get into an enclosed metal box with an arbitrary number of random people, you roll the dice. You could have a quiet, safe ride, or you could end up with loud music, obnoxious body odors, food spilling on you, bags hitting you in the face, or someone vomiting all over the floor. Any long-time New Yorker has their fair share of subway and bus stories.
Public transportation is a vital part of any city. But it's not a "relaxing mobile office," it's not always easy or convenient for the elderly and disabled, and you can't have it as the only option.
Well, you certainly can get some work done on a train once you've figured out your commute in a way that gets you a seat. Granted, that is not an option for everyone.
On the other hand, what would the world look like if we all used autonomous cars? It seems like we would end up with the same traffic jams, or worse. If people were willing to share their cars, there would be a bit less traffic. But remember that any means of public transport uses less space than cars. http://humantransit.org/2012/09/the-photo-that-explains-almo...
> But remember any means of public transport uses less space than cars
I think this is indicative of the PR problem that transit advocates have and that the GP is trying to point out: the advocates keep focusing on moving people from point A to point B.
Transit advocacy always seems to be about X thousand passengers per hour and saving Y thousand square feet of real estate. Quality of life of those passengers doesn't get a mention.
Granted, creature comforts may not be all that high on everyone's priority lists, but I'm confident they are for many, especially as they get to middle age and have the financial means to vote with their wallets.
In many places you really can't work on a subway if you need a network connection, since you are far underground and can't get a signal. Some systems have Wi-Fi now, though.
Here in Lisbon all the subway lines have mobile signal (the operators installed base stations along the lines back in 2006). I'm not sure how good the internet connection is, but it works fine for calls.
I'm a bit disappointed in HN that you got downvoted. Your first sentence might be inflammatory, but you do follow it up with completely valid criticisms, all of which I always remind people of when they try to sell me on public transit.
An additional refinement on comfort is temperature (and, I'm sure, for some locales, humidity). No matter what one's preference, that's also a roll of the dice. If it's a train, and you're lucky enough to be on a system that allows it, you could try moving between cars in search of your personal Goldilocks Zone, but that can mean sacrificing a seat and may not be feasible with luggage.
Now, here's an alternative (or wishful thinking): business class. This already exists in air travel (privatized public transit) and on some commuter rail lines (semi-privatized in the US?).
I think it's telling that, especially on short-haul flights, there's a tendency for first class sections on planes to get smaller or disappear entirely (or never exist in the first place on no-frills airlines), but I don't wish for first class luxury. I would only want what you mention: privacy, consistent comfort, and a guaranteed seat, with the possible addition of amenities such as electrical outlets and internet.
I have a theory as to why it's merely wishful thinking and would never "fly", which I'll share if anyone's interested.
There's another major problem that I see with public transit (in the sense of the proposed utopia), and that's that enough people actually want to live in suburbs, where it's way too expensive to run even a halfway decent transit system.
The last major problem that I think is inadequately addressed, is freedom, although you touched on it by mentioning privacy. Cars grant the greatest freedom of movement we have available.
A car allows someone to go almost anywhere almost any time at remarkably high speed with modest incremental (monetary) cost and little to no advance planning or notice required. Of course, whether this on-a-whim freedom is desirable/beneficial at the macro level is debatable, but I think it's pretty clear it's desirable to individuals.
In the densest cities with the best transit systems during peak hours, using that system can replicate (and even surpass!) this freedom of movement. However, that falls apart if one's trip extends outside of those peak hours or outside the city[1].
[1] This is, at times, considered a feature by some US suburbs in that it makes it effectively impossible for poor people from the city to come out to their neighborhoods, though this may seem weird to us in booming cities where land close to the center is most expensive and is cheapest at the outskirts.
[1] https://www.fhwa.dot.gov/research/topics/safety/intersection...