I worked on the autonomous pod system at Heathrow Airport[1]. We used a very conservative control methodology: essentially, the vehicle would remain stopped unless it received a positive "GO" signal from multiple independent sensor and control systems. The loss of any "GO" signal would result in an emergency stop. It was very challenging to get all of those "GO" indicators reliable enough to prevent false positives and constant emergency braking.
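To make that concrete, here is a minimal sketch of what such a fail-safe "GO gate" looks like. This is purely illustrative -- the signal structure, heartbeat timeout, and names are assumptions, not the actual Heathrow implementation:

```python
# Minimal, illustrative sketch of a fail-safe "GO gate" (invented names and
# numbers). The vehicle may move only while every independent subsystem
# actively asserts GO; a lost or stale signal from any one of them triggers
# an emergency stop.

from dataclasses import dataclass
import time

HEARTBEAT_TIMEOUT_S = 0.1   # a stale signal is treated the same as a lost one

@dataclass
class GoSignal:
    name: str
    asserted: bool          # subsystem currently says "GO"
    last_heartbeat: float   # timestamp of the most recent update

def may_proceed(signals: list[GoSignal], now: float) -> bool:
    # Movement requires *all* signals to be fresh and asserted.
    return all(
        s.asserted and (now - s.last_heartbeat) <= HEARTBEAT_TIMEOUT_S
        for s in signals
    )

def control_step(signals: list[GoSignal]) -> str:
    # The default action is the safe state; motion is the exception.
    return "GO" if may_proceed(signals, time.time()) else "EMERGENCY_STOP"
```

The difficulty described above falls out of the last line: any flaky subsystem or missed heartbeat becomes an emergency brake.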
The reason we were ultimately able to do this is because we were operating in a fully-segregated environment of our own design. We could be certain that every other vehicle in the system was something that should be fully under our control, so anything even slightly anomalous should be treated as a hazard situation.
There are a lot of limitations to this approach, but I'm confident that it could carry literally billions of passengers without a fatality. It is overwhelmingly safe.
Operating in a mixed environment is profoundly different. The control-system logic is fully reversed: you must presume that it is safe to proceed unless a "STOP" signal is received. And because the interpretation of image & LIDAR data is a rather... fuzzy... process, that "STOP" signal needs to have fairly liberal thresholds, otherwise your vehicle will not move.
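A rough sketch of that inverted decision rule (hypothetical thresholds and field names; not Uber's or anyone's actual logic):

```python
# Illustrative only: in mixed traffic the default action is to proceed,
# and braking happens only when fused perception output crosses a
# confidence threshold. All names and numbers are invented.

BRAKE_CONFIDENCE = 0.8    # too low: constant phantom braking; too high: missed hazards
BRAKE_TTC_SECONDS = 2.0   # only react to hazards that are actually imminent

def control_step(detections: list[dict]) -> str:
    """Each detection: {'hazard_prob': float, 'time_to_collision_s': float}."""
    for d in detections:
        if d["hazard_prob"] >= BRAKE_CONFIDENCE and d["time_to_collision_s"] <= BRAKE_TTC_SECONDS:
            return "BRAKE"
    return "PROCEED"      # the default is now motion, not the safe state
```

Where exactly those thresholds sit is the fuzzy, probabilistic trade-off being described: every notch of added caution is paid for in phantom braking.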
Uber made a critical mistake in counting on a human-in-the-loop to suddenly take control of the vehicle (note: this is why Level 3 automation is something I'm very dubious about), but it's important to understand that if you want autonomous vehicles to move through mixed-mode environments at the speeds which humans drive, then it is absolutely necessary for them to take a fuzzy, probabilistic approach to safety. This will inevitably result in fatalities -- almost certainly fewer than when humans drive, but plenty of fatalities nonetheless. The design of the overall system is inherently unsafe.
Do you find this unacceptable? If so, then ultimately the only way to address this is through changing the design of the streets and/or our rules about how they are used. These are fundamentally infrastructural issues. Merely swapping out vehicle control systems -- robot vs. human -- will be less revolutionary than many expect.
> The loss of any "GO" signal would result in an emergency stop.
That's an E-stop chain and that's exactly how it should work.
But the software as described in the NTSB report was apparently bad enough that they essentially hardwired an override on their emergency stop -- the software equivalent of putting a steel bar into a fuse receptacle. The words that come to mind are 'criminal negligence'. The vehicle would not have been able to do an E-stop even if it was 100% sure it had to do exactly that, nor did it warn the human 'luggage' (the safety operator).
The problem here is not that the world is so unsafe that you have to make compromises to get anywhere at all; the problem is that the software is still so buggy that there is no way for it to safely navigate common scenarios. A pedestrian on the road at night is a scenario I've encountered twice on my own trips, and neither led to a fatality, because when I can't see I slow down. If 6 seconds isn't enough to make a decision, you have no business being on the road in the first place.
> A pedestrian on the road at night is a scenario I've encountered twice on my own trips, and neither led to a fatality, because when I can't see I slow down. If 6 seconds isn't enough to make a decision, you have no business being on the road in the first place.
I've seen a few people comment on the footage that they too would have run the pedestrian over, to which my only response is: I sure hope you don't have a driver's license [anymore]!
The vast majority of those people are being (purposely?) deluded by a very misleading video showing an apparently pitch-black section of road. In reality, the road was lit, and the dashcam footage had a very compressed dynamic range.
To me that means they weren't walking directly under a street lamp. If you look at other people's videos of that street at night on YouTube, it's well lit. Street lamps cast a wide pool of light, so you don't have to be directly under one to be illuminated.
> If you look at other people's videos of that street at night on YouTube, it's well lit.
I don't think you can look at videos and judge the level of illumination well; their videos could be more or less accurate than Uber's, and what I see depends on codecs, video drivers, my monitor, etc. Also, any video can easily be edited these days.
Is there a way to precisely measure the illumination besides a light meter? Maybe we can use astronomers' tricks and measure it in relation to objects with known levels of illumination. Much more importantly, I'm not even sure what properties of light we're talking about - brightness? saturation? frequencies? - nor which properties matter how much for vision, for computer vision, and for the sensors used by Uber's car in particular.
I'm not taking a side; I'm saying I have yet to see reliable information on the matter, or even a precise definition of the question.
It is generally unusual for any camera (not using infrared) to outperform the human eye in low-light situations. If a camera (any camera) shows a clear image at all, a person almost certainly would have seen it.
Dashcam videos typically do not capture nighttime scenes very well. Any human would have been able to see the pedestrian well in advance of a collision. There are cell phone videos of that same stretch of road at night and they show the illumination level much better than the Uber video.
It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.
Don't conflate that with Uber's screw-up here. This wasn't a situation where a fatality was unavoidable or where a very safe system had a once-in-a-blue-moon problem. It's one where they just drove around not-very-safe cars.
Agreed. Uber disabled a safety feature that would have prevented this fatality -- but that doesn't mean that the automation was therefore safe apart from Uber's mismanagement of it. It's entirely believable that had that safety feature been fully enabled, it would have also e-braked in 1,000 other situations which didn't result in a collision. And false-positive e-brake events are definitely worth avoiding: they can get you rear-ended and injure unbelted passengers.
This doesn't mean that Uber therefore did the right thing in disabling the system; it probably means that the system shouldn't have been given control of the car in the first place. But my point is that there is no readiness level where driverless cars will ever be safe -- not in the same way that trains and planes are safe. The driving domain itself is intrinsically dangerous, and changing the vehicle control system doesn't change the nature of that domain. So if we actually care about safety, then we need to be changing the way that streets are designed and the rules by which they are used.
> It is the case that even very good driverless cars of the future will cause fatalities now and then. Even if they're safer than human drivers.
And that is why I am so mad at Uber. They are compromising public trust in autonomous cars with their reckless release policy. And they are thereby potentially endangering even more lives, since we still have to convince the public of the advantages of this technology.
I agree with all of this except the tense: haven't they shut the whole thing down at this point, with no immediate plans to start it up again? Or am I misremembering that?
They were testing in three places: San Francisco, Arizona, and Pittsburgh. They didn't want to get a license from California (probably because they couldn't follow the safety regulations), so they threw a tantrum and moved to AZ. Then after this fatality, they shut the AZ program down and are just testing in Pittsburgh.
That's not true. They shut down everywhere after this fatality. They just said that they'll shut down AZ permanently (not that AZ would have let them continue anyway) and resume testing in Pittsburgh sometime soon, in a more limited way (which, apparently, the Pittsburgh mayor isn't wild about).
This is absolutely the right analysis of how these systems work and why you can't expect autonomous cars to halt traffic deaths. What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.
My company didn't start with this zero-tolerance thing in our minds, but it turns out our self-delivering electric bicycles have a huge advantage for real-world safety because they weigh ~60 lbs when in autonomous mode and are limited to 12 mph. That's roughly the kinetic energy of me walking at a brisk pace, or basically something that won't kill purely from blunt-force impact. I think the future of autonomy will be unlocked by low-mass, low-speed vehicles, not cars converted to drive themselves.
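As a rough back-of-envelope check of that comparison, under assumed figures (bike ~60 lb ≈ 27 kg at 12 mph ≈ 5.4 m/s; an adult taken as ~85 kg, which is an assumption):

```latex
E_{\text{bike}} = \tfrac{1}{2} m v^{2}
              \approx \tfrac{1}{2}\,(27\ \text{kg})\,(5.4\ \text{m/s})^{2}
              \approx 390\ \text{J},
\qquad
v_{\text{person}} = \sqrt{2E/m}
                  \approx \sqrt{2 \times 390 / 85}
                  \approx 3\ \text{m/s}
```

So the ~390 J the bike carries at its speed cap is roughly what an ~85 kg adult carries at about 3 m/s, a very fast walking pace -- the right order of magnitude for the claim above.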
> What the Uber crash has shown us is that the tolerance for AVs killing people is probably exactly zero, not some (very meaningful) reduction like 10x or 100x less.
It hasn't shown that at all. It has documented beyond reasonable doubt that Uber should not be allowed to participate in real world tests of autonomous vehicles.
There are plenty of situations where people would fully accept a self driving vehicle killing someone but this isn't one of those.
The Uber crash has shown us that public tolerance for AVs killing people sits somewhere below a rate that is presumptively around 30x more dangerous than the mean human driver.
Uber had a fatality after 3 million miles of driving.
The mean fatality rate is approximately 1 per 100 million miles of driving.
It's a sample size of one, so the error bars are big, but it drives me insane that people are acting like the Uber cars are the ideal driverless cars of the imagined future, and are super safe. The available data (which is limited, but not that limited) is that Uber driverless cars are much, much, much more dangerous than mean human drivers.
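For reference, the point estimate implied by those two figures (taking the reported ~3 million Uber test miles at face value):

```latex
\frac{1\ \text{fatality} / (3 \times 10^{6}\ \text{mi})}{1\ \text{fatality} / (10^{8}\ \text{mi})}
  = \frac{10^{8}}{3 \times 10^{6}}
  \approx 33
```

Roughly 33x the mean human rate as a point estimate, with the very wide single-event error bars already noted.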
> My company didn't start with this zero-tolerance thing in our minds, but it turns out our self-delivering electric bicycles
That actually sounds like a really interesting concept, one of those ideas that seems obvious only after someone suggests it. What company is this?
Right now, in the Seattle area, we are basically seeing a new littering epidemic in the form of shareable bicycles being left to rust away, unused, in random places. If the bike could cruise to its next user autonomously, that would really be a game-changer. "Bikes on demand" would turn bikesharing from (IMHO) a stupid idea into something that just might work.
Plus, the engineering challenges involved in automating a riderless bicycle sound fun.
Weel, we're in Bellevue. It's a super fun problem to work on, and one of the first things we figured out was that trikes won't work because of their width and how difficult they are to ride, so we got a two-wheeled bike to balance itself. The autonomy problems are easier than for cars in a lot of ways, and this Uber case is something we don't have to deal with, because our bikes can always just stop when presented with a no-go situation: we're only autonomous when no one is riding.
That's good to hear, sounds like a very cool project. I could see this living up to at least some of the hype that the original Segways received.
The biggest challenge will probably be to keep people from screwing with the bikes, of course. :( An unoccupied bicycle cruising down the street or sidewalk will fire all sorts of mischievous neurons that onlookers didn't even know they had.
Definitely, it will be interesting to test. We have several cameras onboard so that we can see what happened, but an equal concern alongside vandalism is how people feel about being watched. We want to avoid making your neighborhood suddenly feel like a panopticon. Still unsolved.
Hah, yeah it reminds me of a runaway shopping cart when you see our bike rolling. We expect people will get used to it eventually but we have some ideas to test in the future on how to make it more obvious, such as giving the bike a ‘face’ and having it lit up with LEDs that are visible from all angles. Def not a solved problem, but as far as design problems go it’s a pretty fun one.
Your analysis leaves much to be desired, though, as it comes perilously close to equating "we can't prevent 100% of fatalities" with "we shouldn't care about, learn from, or make changes in response to a fatality".
What the Uber crash has shown us is mostly the willingness of people on HN to excuse Silicon Valley darlings even when they actually demonstrably kill people.
I don't think it has anything to do with "Silicon Valley darlings" (which Uber certainly isn't anymore). It has more to do with "super cool future tech" that they really want to see implemented in their lifetimes - so much so that they may make dubious arguments to support their position.
Potentially deadly? Maybe, sure, but at low speeds, say up to 10 mph, it is incredibly unlikely that falling off a bicycle (even with no helmet) will do more than cause bruises and a damaged ego.
Is this including the elderly who often will break a hip that way and then die of the complications? Because if so, that would not be comparable to a healthy young (< 60 yo) person falling.
Are there numbers on the average height of those fatal falls? If they're from balconies, roofs, etc., I'd say being on a bike (a few feet from the ground) would make it much safer.
Curious whether you have ever fallen off a bike? I have fallen over several times on a bike while stationary (when learning to ride with clipless pedals), I have crashed bikes at much higher speeds as well, and I have watched my kids fall off bikes lots of times while learning. In all of that, I have never seen a head hit the ground (nor had my own do so). Typically you hit the ground with your arms (in a slow or stationary fall) or your hips, back, or shoulders (at higher speed).
Don't underestimate how dangerous even a small fall can be: you can end up fine, but you could also end up smashing your face into the curb.
A friend of mine, in his 50s and very fit, cycling to work and back every day, broke both his arms doing a literal 10-meter test ride in front of a bike store.
The bike's brakes were set up reversed compared to what he was used to, so he ended up braking with the front brake, flipping the bike over, and breaking both his arms on landing. His fault? Sure, but it's still a rather scary story about how quickly even mundane things can go really wrong.
I don't think he did -- not much use for a bike when both your arms are in plaster casts from hands to shoulders. The poor guy couldn't even go to the toilet without help.
Sure, but "[t]he system is not designed to alert the operator." At least they could have alerted the operator. This seems like reckless endangerment or negligent homicide. Luckily for Uber they hit a poor person and no one will hold them responsible. 1.3 seconds is a long time for the operator to act.
This highlights an interesting general point - in many situations, there is no simple safe fallback policy. On a highway, an emergency stop is not safe. This is a general problem in AI safety and is covered nicely in this youtube video, as well as the paper referenced there - https://www.youtube.com/watch?v=lqJUIqZNzP8
That depends; there could simply be no traffic behind you, which an experienced driver -- and hopefully an automated one -- would be monitoring.
Besides, there are many situations on the highway where an E-stop is far safer than any of the alternatives even if there is traffic behind you. Driving as though nothing has changed in the presence of an E-stop worthy situation is definitely not the right decision.
How intelligent is the ML driving the car? If the car slowed down and hit the 49-year-old at a reduced speed, the insurance payout to a now severely disabled individual would be far more expensive than the alternative payout for a pedestrian fatality. A choice between paying for 40 years' worth of around-the-clock medical care vs. a one-time lump-sum payout to the victim's family would be pretty obvious from a corporate point of view.
Are you seriously suggesting that the better software strategy is to aim for the kill because it is cheaper than possibly causing 'only' injury?
That should be criminal.
I'm all for chalking this one up to criminal negligence and incompetence, outright malice is - for now - off the table, unless someone leaks meeting notes from Uber where they discussed that exact scenario.
My point is that it's a black box and nobody outside of Uber knows what its priorities are. It could have just as easily mistaken the pedestrian leaned over pushing the bike for a large dog and then proceeded to run her over because it's programmed to always run dogs over at full speed on the highway. Outside of Asimov's "Three Laws of Robotics" there is nothing that dictates how self-driving cars should behave, so my unpopular idea above isn't technically breaking any rules.
Computers have vastly lower reaction time than humans. Computers have sensory input that humans lack (LIDAR). Computers don't get drowsy or agitated.
And "almost" is always a good idea when talking about a future that looks certain. Takes into account the unknown unknowns. And the known unknowns (cough hacking cough).
Fast reaction times, good sensors and unyielding focus are not enough to drive safely. An agent also needs situational awareness and an understanding of the entities in its environment and their relations.
Without the ability to understand its environment and react appropriately to it, all the good the fast reaction times will do to an AI agent is to let it take the wrong decisions faster than a human being.
Just saying "computers" and waving our hands about won't magically solve the hard problems involved in full autonomy. Allegedly, the industry has some sort of plan to go from where we are now (sorta kinda level-2 autonomy) to full, level-5 autonomy where "computers" will drive more safely than humans. It would be very kind of the industry if they could share that plan with the rest of us, because for the time being it sounds just like what I describe above, saying "computers" and hand-waving everything else.
That's a sociopolitical question more than a technical one. I posit that:
1.) Road safety -- as far as the current operating concept of cars is concerned (eg., high speeds in mixed environments) -- is not a problem that can be "solved". At best it can only ever be approximated. The quality of approximation will correspond to the number of fatalities. Algorithm improvements will yield diminishing returns: the operating domain is fundamentally unsafe, and will always result in numerous fatalities even when driven "perfectly".
2.) With regards to factors that contribute to driving safety, there are some things that computers are indisputably better at than humans (raw reaction time). There are other things that humans are still better at than computers (synthesising sensory data into a cohesive model of the world, and then reasoning about that world). Computers are continually improving their performance, however. While we don't have all the theories worked out for how machines will eventually surpass human performance in these domains, we don't have a strong reason to believe that machines won't surpass human performance in these domains. The only question is when. (I don't have an answer to this question).
3.) So the question is not "when will autonomous driving be safe" (it won't be), but rather: "what is the minimum level of safety we will accept from autonomous driving?" I'm quite certain that the bar will be set much higher for autonomous driving than for human driving. This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is. Look at the disparities in sociopolitical responses to, say, plane crashes and Zika virus, versus car crashes and influenza. Autonomous vehicles will be treated more as the former than the latter, and therefore the scrutiny they receive will be vastly higher.
4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
5.) Personally, I think that the algorithms won't be able to pass this public-acceptability threshold on their own, because even the best-imaginable algorithm, if adopted on a global basis, would still kill hundreds of thousands of people every year. That's still probably too many. I expect that full automation eventually will become the norm, but only as enabled by new types of infrastructure / urban design which enable it to be safer than automation alone.
> This is because risk perception -- especially as magnified by a media that thrives on sensationalism -- is based on how "extraordinary" an event seems, much more than how dangerous it actually is.
This is a wonderfully concise way of describing a phenomenon that I have not been able to articulate well. Thank you.
OK, this is a very good answer- thanks for taking the time.
I'm too exhausted (health issues) to reply in as much detail as your comment deserves, but here's the best I can do.
>> 4.) So basically, driverless cars will only find a routine place on the road if and when they have sufficiently fewer fatalities than human driving. My assertion was a bit tautological in this respect, but basically, if they're anywhere near as dangerous as human drivers, then they won't be a thing at all.
Or at least it won't be morally justifiable for them to be a thing at all, unless they're sufficiently safer than humans -- whatever "sufficiently" is going to mean (which we can't really know; as you say, that has to do with public perception and the whims of a fickle press).
I initially took your assertion to mean that self-driving AI will inevitably get to a point where it can be "sufficiently" safer than humans. Your point (2.) above confirms this. I don't think you're wrong, there's no reason to doubt that computers will, one day, be as good as humans at the things that humans are good at.
On the other hand I really don't see this happening any time soon- not in my lifetime and most likely not in the next two or three human generations. It's certainly hard to see how we can go from the AI we have now to AI with human-level intelligence. Despite the successes of statistical machine learning and deep neural nets, their models are extremely specific and the tasks they can perform too restricted to resemble anything like general intelligence. Perhaps we could somehow combine multiple models into some kind of coherent agent with a broader range of aptitudes, but there is very little research in that direction. The hype is great, but the technology is still primitive.
But of course, that's still speculative- maybe something big will happen tomorrow and we'll all watch in awe as we enter a new era of AI research. Probably not, but who knows.
So the question is- where does this leave the efforts of the industry to, well, sell self-driving tech, in the right here and the right now? When you said self-driving cars will almost certainly be safer than humans- you didn't put a date on that. Others in the industry are trying to sell their self-driving tech as safer than humans right now, or in "a few years", "by 2021" and so on. See Elon Musk's claims that Autopilot is safer than human drivers already.
So my concern is that assertions about the safety of self-driving cars by industry players are basically trying to create a climate of acceptance of the technology in the present or near future, before it is even as safe as humans, let alone safer (or "sufficiently" so). If the press and public opinion are irrational, their irrationality can just as well mean that self-driving technology is accepted when it's still far too dangerous. Rather than setting the bar too high and demanding an extreme standard of safety, things can go the other way and we can end up with a diminished standard instead.
Note I'm not saying that is what you were trying to do with your statement about almost certainty etc. Kind of just explaining where I come from, here.
Likewise, thanks for the good reply! Hope your health issues improve!
I share your skepticism that AIs capable of piloting fully driverless cars are coming in the next few years. In the longer term, I'm more optimistic. There are definitely some fundamental breakthroughs which are needed (with regards to causal reasoning etc.) before "full autonomy" can happen -- but a lot of money and creativity is being thrown at these problems, and although none of us will know how hard the Hard problem is until after it's been solved, my hunch is that it will yield within this generation.
But I think that framing this as an AI problem is not really correct in the first place.
Currently car accidents kill about 1.3 million people per year. Given current driving standards, a lot of these fatalities are "inevitable". For example: many real-world car-based trolley problems involve driving around a blind curve too fast to react to what's on the other side. You suddenly encounter an array of obstacles: which one do you choose to hit? Or do you (in some cases) minimise global harm by driving yourself off the road? Faced with these kind of choices, people say "oh, that's easy -- you can instruct autonomous cars to not drive around blind curves faster than they can react". But in that case, the autonomous car just goes from being the thing that does the hitting to the thing that gets hit (by a human). Either way, people gonna die -- not due to a specific fault in how individual vehicles are controlled, but due to collective flaws in the entire premise of automotive infrastructure.
So the problem is that no matter how good the AIs get, as long as they have to interact with humans in any way, they're still going to kill a fair number of people. I sympathise quite a lot with Musk's utilitarian point of view: if AIs are merely better humans, then it shouldn't matter that they still kill a lot of people; the fact that they kill meaningfully fewer people ought to be good enough to prefer them. If this is the basis for fostering a "climate of acceptance", as you say, then I don't think it would be a bad thing at all.
But I don't expect social or legal systems to adopt a pragmatic utilitarian ethos anytime soon!
One barrier is that, even apart from the sensational aspect of autonomous-vehicle accidents, it's possible to do so much critiquing of them. When a human driver encounters a real-world trolley problem, they generally freeze up, overcorrect, or do something else that doesn't involve much careful calculation. So shit happens, some poor SOB is liable for it, and there's no black box to audit.
In contrast, when an autonomous vehicle kills someone, there will be a cool, calculated, auditable trail of decision-making which led to that outcome. The impulse to second-guess the AV's reasoning -- by regulators, lawyers, politicians, and competitors -- will be irresistible. To the extent that this fosters actual safety improvements, it's certainly a good thing. But it can be really hard to make even honest critiques of these things, because any suggested change needs to be tested against a near-infinite number of scenarios -- and in any case, not all of the critiques will be honest. This will be a huge barrier to adoption.
Another barrier is that people's attitudes towards AVs can change how safe they are. Tesla has real data showing that Autopilot makes driving significantly safer. This data isn't wrong. The problem is that this was from a time when Autopilot was being used by people who were relatively uncomfortable with it. This meant that it was being used correctly -- as a second pair of eyes, augmenting those of the driver. That's fine: it's analogous to an aircraft Autopilot when used like that. But the more comfortable people become with Autopilot -- to the point where they start taking naps or climbing into the back seat -- the less safe it becomes. This is the bane of Level 2 and 3 automation: a feedback loop where increasing AV safety/reliability leads to decreasing human attentiveness, leading (perhaps) to a paradoxical overall decrease in safety and reliability.
Even Level 4 and 5 automation isn't immune from this kind of feedback loop. It's just externalised: drivers in Mountain View learned that they could drive more aggressively around the Google AVs, which would always give way to avoid a collision.
So my contention is that while the AIs may be "good enough" anytime between, say, now and 20 years from now, the above sorts of problems will be real barriers to adoption. These problems can be boiled down to a single word: humans. As long as AVs share a (high-speed) domain with humans, there will be a lot of fatalities, and the AVs will take the blame for this (since humans aren't black-boxed).
Nonetheless, I think we will see AVs become very prominent. Here's how:
1. Initially, small networks of low-speed (~12mph) Level-4 AVs operating in mixed environments, generally restricted to campus environments, pedestrianised town centres, etc. At that speed, it's possible to operate safely around humans even with reasonably stupid AIs. Think Easymile, 2getthere, and others.
2. These networks will become joined-up by fully-segregated higher-speed AV-only right-of-ways, either on existing motorways or in new types of infrastructure (think the Boring Company).
3. As these AVs take a greater mode-share, cities will incrementally convert roads into either mixed low-speed or exclusive high-speed. Development patterns will adapt accordingly. It will be a slow process, but after (say) 40-50 years, the cities will be more or less fully autonomous (with most of the streets being low-speed and heavily shared with pedestrians and bicyclists).
Note that this scenario is largely insensitive to AI advances, because the real problem that needs to be solved is at the point of human interface.
The problem is that drivers rarely maintain the safe following distance they should to avoid endangering themselves. But in that case, the car should also have noticed whether there was traffic close behind. Doing nothing in that case doesn't seem like the right decision at all.
Very good write-up anyway... indeed many things will have to change: probably the infrastructure, the vehicles, the software, the way pedestrians move, and driver behavior as well.
1: http://www.ultraglobalprt.com/