"Delaying the roll-out of partially-autonomous vehicles costs lives. This conclusion assumes that (1) automakers make steady progress in improving the safety and reliability of their partially autonomous vehicles and (2) drivers are comfortable enough with monitoring the partially-autonomous vehicles so that new sources of error associated with the transition to and from manual and autonomous control do not increase fatality rates."
In other words, if you assume (1) something which we can't actually know unless we have the ability to predict the future, and (2) something we already have strong reason to believe isn't true, then we must roll out partially autonomous vehicles as soon as possible and anyone who questions this is basically killing people.
I'd strongly argue against (2). Telling people that paying attention isn't important 90% of the time is not going to make them pay attention during the random 10% of the time when it does matter. Google halted all testing of level 3 self-driving cars because of this: they caught employees literally falling asleep.
>This model shows that rolling out just as safe or a little safer partially-autonomous vehicles by 2020 will save 160,000 more lives over 50 years than a scenario that waits until 2025 to roll out almost perfect autonomous vehicles. Delaying the roll-out of partially-autonomous vehicles costs lives.
Sure, rolling out autonomous vehicles will save lives if you assume that they are safer than humans. But right now all the evidence points to autonomous vehicles being substantially more dangerous than humans. Waymo reports a disengagement every 5,500 miles, and estimates that something like 10% of disengagements would have led to a collision.
And taken as a whole, self-driving cars have already killed one person with only about 10 million miles driven. It would take the average human driver more than 100 million miles to kill someone.
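To put those rough numbers side by side, here's a quick back-of-envelope sketch in Python (the inputs are just the approximate figures above, nothing precise):

    # Ballpark figures from this comment; treat them as rough assumptions, not data.
    MILES_PER_DISENGAGEMENT = 5_500          # Waymo: roughly one disengagement per 5,500 miles
    COLLISION_FRACTION = 0.10                # Waymo's estimate: ~10% of disengagements would have become collisions
    AV_MILES = 10_000_000                    # approximate total self-driving miles so far
    AV_FATALITIES = 1                        # one fatality to date
    HUMAN_MILES_PER_FATALITY = 100_000_000   # >100 million miles per fatality for the average human driver

    # Implied collision rate if the safety drivers hadn't caught those disengagements
    miles_per_av_collision = MILES_PER_DISENGAGEMENT / COLLISION_FRACTION
    print(f"Implied AV collision rate: one per {miles_per_av_collision:,.0f} miles")

    # Fatalities per mile, AV fleet vs. average human driver
    av_rate = AV_FATALITIES / AV_MILES
    human_rate = 1 / HUMAN_MILES_PER_FATALITY
    print(f"AV fatality rate is roughly {av_rate / human_rate:.0f}x the human rate")

Under those assumptions you get roughly one potential collision per ~55,000 miles and a fatality rate around 10x the human baseline, which is what I mean by "substantially more dangerous."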
Edit: My comment refers to safety of level 4/5 autonomy, not the current Tesla Autopilot.
The really interesting question is whether it's worth it to deploy dangerous cars, risking today's lives to help save future lives. It's not that interesting to ask whether deploying hypothetically safer cars will save lives.
Assumption 1 does indeed require predicting the future, but if we can all agree that full autonomy is a tractable problem that will eventually be solved, then it stands to reason that progress will be made toward that goal, barring some catastrophe. How steady that progress is remains to be seen, but I think we can reasonably take assumption 1 as a given, even with a generally skeptical view.
I don't think "we all" can agree on that, actually. It seems that more and more people are coming to the conclusion that the last 10% of full autonomy is roughly equivalent to solving general intelligence—at least given our current road infrastructure and driving-related behaviors and expectations.
I think that we will make steady progress towards cars that are safer than the average driver. Whether a car could drive like a human, using only binocular visual cues, is less obvious, but a computer can process so much more information, and has such a varied array of sensors, that it can "cheat" its way into being better without as much intelligence.
I also claim that full autonomy is not required, just sufficient autonomy that the car can safely hand off to the driver when confused (the ability to recognize an untenable situation soon enough to pull over to the side of the road is sufficient for that; beeping 2 seconds before crashing is not).
I'm also unconvinced that it will be economical for companies to make self-driving cars any time in the near future, though, because even if they are (for example) twice as safe as an average driver, that still means a huge number of lawsuits, and juries award much higher damages when the defendant is a corporation rather than a (possibly dead or permanently injured) person.
I've certainly come to that conclusion, although one person is not a trend. I'm in the minority among the people I talk to. I think most people see these companies achieving something that looked impossible, so something even more impossible doesn't seem like an obstacle to them.
People who browse threads in places where actual engineers working on this stuff post their thoughts
vs.
People who get all their information about self-driving cars from "journalists"
The former group will pretty much unanimously tell you we are at least 10 years from a full solution, if it's even possible. Most of the latter group seem to think it's already here.
"Solving general intelligence" isn't comparable to a 10-years deadline at all.
But still, which of the few companies that work on this has their engineers expressing that 10-year figure? I can also say that I know stuff from good expert sources without naming any specifics.
I was more referring to the "last 10% of full autonomy" for self-driving cars than the solving general intelligence bit (despite the fact that GP was sort of equating the two). Guessing if/when we'll solve general intelligence is a fool's game.
> "Solving general intelligence" isn't comparable to a 10-years deadline at all
As I just mentioned, I don't think it's wise to put any kind of timeline on that particular milestone.
> which of the few companies that work on this has their engineers expressing that 10-year figure?
Well I didn't exactly go around asking every commenter who they worked for, but my gut-level statistician tells me it's been over 90% of the people on various relevant forums (including here) who seem like they know what they're talking about.
All I know is that if it was as easy as a lot of folks were projecting 5 years ago, it would already be here.