New reports suggest limits to autonomous vehicle performance (ieee.org)
105 points by charlysl on Feb 2, 2018 | 125 comments



Another interesting article popped up with the release of the CA disengagement reports: https://jalopnik.com/californias-autonomous-car-reports-are-...

TLDR: Cruise had what some would consider to be a disengagement incident, but didn't report it because it doesn't fall into the categories defined by California.

The core issue here is that the disengagement reports should not be the end-all-be-all of where companies are. As the ieee article calls out, we have no clue what kind of testing the companies are doing (how hard they are pushing their systems). There is also some room for companies to avoid reporting certain types of incidents.

Waymo did 2 million miles in 2017, but only 330k were in CA. We are only seeing a slice of their data.

If we (the public) want a better picture of where companies are on this, the rules around what is reported need to be changed. They possibly need to also be done at the federal level so no company can hide their public testing in states that don't require reporting.


My biggest issue with the "miles driven" metric is that it doesn't account for the software version. Maybe updating from v1.0 to v1.1 won't warrant resetting the counter, but I believe that over time each historical mile driven becomes materially less relevant to the currently deployed version.


I would argue the biggest issue with the "miles driven" metric is that it isn't contiguous. Your test drivers can handle all the rough stuff, and your mile counter ticks up on the straight-shot highways. (Arguably, you could be super dishonest and handle every single intersection manually, and still have a massive miles-driven statistic, since most distance is accumulated on straightaways.)

Disengagements are a useful number, and the disengagement numbers say so much more about the real state of self-driving cars: that they fail about as often as a non-synthetic oil change comes due (roughly every few thousand miles). And if humans had driving-system failures that often, we would not be considered good at driving.

The headline number a company should be marketing is how many miles they've driven between disengagements. Like the "days since an accident" sign at a workplace: fail, and you go back to zero. The numbers will be smaller, but they'll be a lot more meaningful.
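
If it helps, here's a minimal sketch of that metric; the log format and the numbers in it are made up for illustration, not anyone's real data:

  # Hypothetical "miles between disengagements" tracker.
  # log: list of (miles_driven_in_segment, disengaged_at_end) tuples.
  def miles_between_disengagements(log):
      runs, current = [], 0.0
      for miles, disengaged in log:
          current += miles
          if disengaged:
              runs.append(current)   # a disengagement resets the counter
              current = 0.0
      return runs, current           # completed runs, plus the open streak

  runs, streak = miles_between_disengagements(
      [(120.0, False), (80.0, True), (300.0, False), (45.0, True), (500.0, False)])
  print(runs)    # [200.0, 345.0] -> miles driven before each reset
  print(streak)  # 500.0 -> current "miles since last disengagement"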


>Arguably, you could be super dishonest and handle every single intersection manually, and still have a massive miles-driven statistic, since most distance is accumulated on straightaways.

I'm not sure what benefit there would be to gaming the disengagement stats. If anything, I think the incentive would be the other way around - inflate your number of disengagements so your competition thinks you're further behind than you actually are.

The point of testing is to bring a product to market, and if your product sucks people are going to figure that out relatively quickly. Hiding your disengagements from the California DMV isn't going to do anything for you except build hype your product can't live up to.


Not if you're trying to get investment(s) or keep your stock price stable.


What about hours, like a pilot?


The key point is: a pilot's hours are in almost all cases "without an incident". The problem is that Google's accounting is "total miles". Distance or time isn't the problem; it's the fact that it includes "incidents" that, without a test driver, would likely be "accidents".


Just like miles, which can look rather good if they are all highway miles, aircraft incidents are concentrated around takeoff and landing.

Risk also skews results: Pilots taking off and landing on beaches in Alaska aren't comparable to pilots taking off from airports on the basis of incidents per 1000 hours.


All simple metrics can be gamed.


s/simple//


The 'miles driven' number is a bit like years of programmer experience.

It all depends on the content of the miles. And without transparency about that it sounds impressive but could very well be 95%+ that doesn't contribute much and where your average toddler without a driving license could have handled the car.


>If we (the public) want a better picture of where companies are on this

Why? I'm as curious as anyone, but what is the argument for mandatory disclosure of disengagement numbers?


I think it's pretty common that progress slows down once the simpler problems have been solved. The unsolved problems to get to fully autonomous cars are getting harder and harder once the low hanging fruit is gone.


I'd love to see an analysis of the sorts of situations that the advanced systems can't handle.

Driving in Mexico City last month I was appreciating how intensely interpersonal driving was: it seemed all drivers were constantly computing the risk appetite of all other drivers and acting accordingly; navigation, avoiding pedestrians, and following the law (to the extent possible) were the easy part. Maybe there's a way to avoid issues by providing coordination mechanisms, but I don't see how this would work as long as there are human-driven legacy cars. So will self-driving cars need theory of mind?


In addition, humans tend to push the limits so even if we have autonomous cars that can handle current human driver behavior, the humans will adapt and put more pressure on the autonomous cars.


I'd argue for it to go the other way. Driving has a large cultural component -- people tend to drive how others around them do. Once autonomous cars hit some critical mass lots of human drivers are going to start mimicking how they drive which is probably a fair bit more conservative than most of these people in big cities.


I doubt it. Once you know that you can cut off a self-driving car without consequences, it's very tempting to do that.


What are the consequences of cutting off any other car today? There will still be people in those self driving cars, you know. Perhaps even people with a horn.


A self-driving car will surely be programmed to avoid collisions at any cost. Another human driver, on the other hand, may just be an idiot and run into you.


On the other hand a self driving car would have a crystal clear record of you driving unsafely, which could be immediately uploaded to police.


Self-driving cars will work in Latin America the week after hell freezes over. I was recently in Ecuador, and many major city streets don't really have lanes -- there is just a swarm of cars cooperating to fill the available space, communicating via friendly honks. Then there's the Mexican convention for passing on a 2-lane road, where the passer pulls into the center, and traffic in both directions pulls off onto the shoulders. Robot cars may be able to drive around Phoenix soon, but the rest of the world is much more complicated.


This is a powerful law of engineering. From 1903 to 1969 (66 years) we went from the Wright Flyer to a man on the moon. In the 49 years since then, we still fly subsonic aircraft that are maybe 40-50% more fuel efficient (747 to 787).

You can observe similar plateaus in power systems (coal, nuclear), medicine, etc.


How do you know when you hit the peak, though? I remember thinking we'd hit peak mobile phone with Nokia and Motorola putting out slightly-smaller devices each year. Then the iPhone came along.


There often is no peak, just plateaus, and getting from one plateau to the next often becomes more and more difficult.


Much depends on the specific domain and application. Areas in which there is physical transformation, movement, or transfer tend to hit theoretical limits far more rapidly than informational contexts do. But even with information, the end results it enables tend to plateau fairly quickly.

A few examples come to mind.

There is the field of prime movers and combustion engines. Efficiencies are set by the Carnot and Rankine cycles, and are constrained by the hot/cold side temperature ratios. For most fuel-based engines, the maximum attainable efficiency is on the order of 50%, and in practice an ICE (Otto-cycle, Diesel, or turbine engine) tends to hit about 30% efficiency in actual use.
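
As a rough sanity check on those numbers (the temperatures here are illustrative assumptions, not figures for any specific engine), the Carnot bound is:

  \eta_{\max} = 1 - \frac{T_c}{T_h} \approx 1 - \frac{350\,\mathrm{K}}{1000\,\mathrm{K}} = 0.65

Real Otto and Diesel cycles sit well below that ideal, which is how you land at roughly 30% in actual use.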

(Ironically, it's coal-fired power plants, with predictable power curves and high furnace temps, which reach some of the highest Carnot efficiencies, though marine powerplants, often two-cycle Diesel engines, come quite close.)

You're simply not going to get better.

Even electric motors, where they rely on thermal electricity generation, face similar limitations, despite the electric motor's ~85% - 95% efficiency in converting electricity to motive force. It's the decoupling of motive force from energy origin which is a primary advantage.

If you look at the field of lighting, changes in fundamental techniques have resulted in a tremendous increase in lumens per unit of input energy. Open fires, torches, and oil lamps relied on blackbody emissions (mostly from soot in the flame and air) for light, as did incandescent electric lamps, though from a heated filament. Fluorescent bulbs rely on specific chemical properties and emission, and LEDs on quantum effects. It's hard to imagine much of an increase beyond this, though there might be possible avenues.

If you look at automobiles and aircraft, initial innovation as measured by patent filings peaked in the 1920s, only a couple of decades after initial mass-market offerings. Much of what followed were changes in fuels (from NGLs to petrol/gasoline, the disaster that was leaded fuel), standardisation (placement of controls, signalling lights, wipers, components), safety, handling and controls, emissions controls, and other dialing in of performance, reliability, etc.

Robert J. Gordon in The Rise and Fall of American Growth makes the case that the DC-3, still in active commercial use, was the pinnacle of aircraft perfection, attained in 1937, only 34 years after the Wrights' first powered flight. Principal changes since have been jet aircraft (after 1945), the jet widebody (707, 1958), and in the past decade or so, materials developments finally moving on, slightly, from the airframe designs of the 1950s.

Consider that the U.S. heavy bomber fleet of B-52 aircraft will be flying not only the same design that first flew in 1952, but the same aircraft, last manufactured in 1962, through at least 2040, some 88 years after the design's first flight, meaning the individual aircraft will be at least 78 years old.

There's some room for improvement of the aircraft, but often the benefits simply aren't worth the price. That was the case for an engine retrofit programme explored in the 1970s, which determined that the fuel savings would not be cost-effective. (It's possible that subsequent petroleum economics have invalidated that assessment, though I'm not sure.)

There's also a role played by materials and the properties they provide: organics, stone, ceramics, metals, fibres, synthetics, composites, and more. There is an opportunity frontier of properties, side effects to consider, abundance and cost, and generally constraints on what can be provided. You'll see this in vehicles (engines, frames), devices, energy storage (especially batteries), fabrics and clothing (much of the "look" of 20th century decades is dependent on what new fabrics became available), etc.

In the information field, you have a fundamentally different dynamic in that capabilities scale superlinearly with chip density. That is, as you pack components ever closer, the cost (and energy) fall by factors of two about every 18 - 24 months, according to Moore's Law. What ends up happening is that whilst, yes, former levels of capability are now exceedingly cheap to provide, additional, previously unattainable levels of capability are also possible. So where Nokia had perfected the "deliver voice and SMS in my pocket" capability, the iPhone and Android upped the ante with music, Web, email, expanded messaging capabilities, and apps.

And surveillance, privacy invasion, attention erosion, exploding batteries, and other negative affordances.

There's a curious point that in technology the market seems to be far less about "how much would you pay for X" and far more about "How much X can you give me for Y dollars?". That is, price points seem fairly constant whilst capabilities expand. It suggests we have a budget for capabilities, and that manufacturers aren't interested in investing in providing abundant, but low-priced devices (at least not in advanced markets).

Summary: If it moves, delivers material or energy, or otherwise interacts with its environment, expect a fairly early plateau. If the device is informational you can see much longer periods of evolution, but also some constancy to other elements of the technology, particularly around cost. The Jevons Paradox means that as costs of providing capabilities falls, induced demand increases (especially in transport, comms, information, data). (A Gresham's Law dynamic also means that simpler / more common uses dominate as markets grow.) Physical limits still constrain, but may be expressed in unintuitive ways.


I wonder if it also is an indication that we've covered most of the low hanging fruit of the major needs facing humans - the part that we could do with technology that assumed unlimited resources - and now we are faced with the much harder problem of maintenance and scaling access to the benefits of technology without huge environmental, health, and societal damage.

Self driving cars (road safety, congestion?), renewable energy (atmospheric CO2, air pollution), etc seem to fit that motivation.

The next quantum leaps here seem to be things like teleportation (for transportation), transhumanism (for "life extension"), or interplanetary colonization (for ?). All seem like very remote possibilities today.


Letting a computer drive my car is high-risk: If there is a malfunction at 70mph, I could die!

Where's my automatic lawn mower? Where's my robot to unload the dishwasher and put everything away?

We need to find other consumer technology to use AI in before we risk our lives with self driving cars.


> Letting a computer drive my car is high-risk: If there is a malfunction at 70mph, I could die!

Google can't even make GPS mapping work perfectly - it's apparent if you use it for a day. Not to mention that AI vision systems are susceptible to adversarial attacks, which means they can be hacked by external images. This paper just came out yesterday: "Breaking 7/8 of the ICLR 2018 adversarial example defenses" (https://arxiv.org/abs/1802.00420). The situation is dire.

I think they are afraid that drawing a couple of lines with a marker on a street sign might make it invisible, or turn it into another sign. They can't defend against such attacks because the neural net doesn't actually understand the world, it just observes patterns. Understanding requires causal reasoning.


>Not to mention that AI vision systems are susceptible to adversarial attacks ... The situation is dire.

That's like saying a human driver can be blinded with a bright light so the situation is dire.

New world, old rules. You'll still go to jail for manslaughter (or whatever the charge would be).


Fwiw maps used for autonomous vehicles are higher fidelity than consumer maps. Furthermore the cars are generally equipped with IMUs so that they don't get thrown off by urban canyons, clouds, etc. And more error correction (particle filters, visual cues + map data) is used to pinpoint your location on a map.
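
To make the particle-filter idea concrete, here's a toy 1-D sketch of fusing an odometry move with a map-matched observation; the map, noise levels and numbers are all assumptions for illustration, not anything a real stack uses:

  import random, math

  LANDMARKS = [50.0, 120.0, 200.0]          # assumed landmark positions (metres)

  def nearest_landmark_dist(x):
      return min(abs(x - lm) for lm in LANDMARKS)

  def step(particles, moved, observed_dist, motion_noise=1.0, sense_noise=2.0):
      # 1) Predict: shift every particle by the odometry/IMU estimate plus noise.
      particles = [p + moved + random.gauss(0, motion_noise) for p in particles]
      # 2) Weight: how well does each particle explain the observed distance
      #    to the nearest landmark (the "visual cues + map data" part)?
      weights = [math.exp(-((nearest_landmark_dist(p) - observed_dist) ** 2)
                          / (2 * sense_noise ** 2)) for p in particles]
      # 3) Resample particles in proportion to their weights.
      return random.choices(particles, weights=weights, k=len(particles))

  particles = [random.uniform(0, 250) for _ in range(1000)]
  particles = step(particles, moved=10.0, observed_dist=5.0)
  print(sum(particles) / len(particles))    # rough position estimate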

Adversarial attacks are concerning for any AI model, but autonomous vehicles aren't all AI. Lots of rules and heuristics and pre-knowledge are utilized. In the end if something is uncertain it generally amounts to "slam on the brakes" which is more a ux problem than a safety problem.


There is no "better quality" map out there (except maybe military and restricted ones).

Mapping the world is hard because it is complex and evolves far faster than a Google Car army can patrol it at a reasonable cost. More often than not OpenStreetMap, which is a community project, is more up to date than commercial maps, but it's not precise enough. Besides, nobody would bet their life on something that is Wikipedia-like, editable by ill-intentioned trolls.

This is precisely why all autonomous cars rely heavily on computer vision, and why adversarial attacks should be carefully considered.

Uncertainty will indeed "slam on the brakes", but adversarial attacks don't aim to generate uncertainty; they aim to generate false certainty.


Nope. There are higher fidelity maps. You should read about robotics.

It's true these maps don't exist for the whole world yet, but it's also why all the self-driving car companies are launching in metros. They're building them.

Vision is good at recognition but bad at localization. This is the real problem. In robotics a fundamental problem is SLAM. You should read about it.


Could you spell out those acronyms? They seem very industry-specific, but I would gladly learn about these projects, as I often use cartographic products for my work.


UX problem unless in traffic; getting rammed by the next car is not my idea of safety (not to mention that slamming the brakes for no apparent reason is a textbook example of reckless driving in many places). Oh well, let's just not drive unless the street is empty, I guess ;o)


Well, map software on a phone isn't written to specs that account for liability for human life, for one. Apples and oranges.


It should be. They're used as inputs to humans making decisions at highway speeds. Now, obviously we allow this because we assume humans are intelligent enough, and have enough of a model of the real world in their heads, that they should be able to cope with these issues; but I see no reason why an AI driving a car should clear a lower bar. I fully expect that AI to take mapping errors into account and to survive them.


On the flip side, very few people die while mowing the lawn or unloading the dishwasher. If you mow the lawn poorly, you'll be fine. If you unload the dishwasher while drunk, there's no risk of killing a family.

I know enough people that drive terribly to see the obvious benefits to autonomous driving. It doesn't need to be perfect to have a net-positive effect on road safety, it just needs to be better than a human on average. And based on the evidence, it seems they're doing a damn good job of achieving that goal.


I agree, but wanted to point out that you are making the implicit assumption that all drivers are going to be replaced by autonomous vehicles. Otherwise comparing to the average driver may be biased as there most likely are correlations in some way between driver safety and likelihood of acquiring autonomous vehicles (desirability / affordability).


It's not making the assumption that all drivers will be replaced. It's making the assumption that the average skill level of the drivers that are replaced will be lower than the skill level of autonomous cars.


I have a robot vacuum, but: it doesn't do corners. It doesn't do steps. It doesn't get along with cats. It's orders of magnitude less effective than a human vacuum pilot. Yet this is also a problem that's orders of magnitude easier to solve, yes?


Are you willing to pay for more?

Robovacuums seem to have hit a sweet spot - they do just enough on their own to justify their costs by saving you time and effort. Solving the corner cases you mentioned is possible, but there is not a strong enough economic incentive for that.


It's a winner-takes-all market, so there are absolutely strong economic incentives to build the best robovacuum control software.


Why is it a winner-takes-all market?


In fairness, cars move in far fewer ways than a vacuum. Vacuums need to make sharp turns and get into tight spaces on unmarked, uneven surfaces. Cars move forward and slightly to each side for almost all of their movement. Cars also drive on mostly flat surfaces that are well marked and mapped.


You haven't seen an Italian town.

Cars make sharp turns and get into tight spaces on unmarked, uneven surfaces, with pedestrians walking everywhere around the car.

That's what a self-driving car needs to support. All road rules out of the window, tiny roads you can barely navigate, and due to a festival everything is redirected around and all your maps are useless; you're told to drive a short bit through a park because there's a parade on the road, the only way to tell where you can and can't drive being a nonstandard sign and a police officer telling you.

This is a real life use case of driving, as given in many European towns. For additional difficulty, do a small Weihnachtsmarkt in a rural place, where you'll have the same, but with so much snow you can barely see anything.


How well could you, human, vacuum a house if all you were given was a Roomba robot's chassis controlled by you via joystick, and maybe a camera mounted in the robot?

There is a physical challenge arising from the form factor, such that your intelligence won't help. Those robots don't do stairs and tight places very well simply because the hardware is that way.

In driving, the robot has the same controls as the human. It can turn the wheels, brake, apply gas. That basic playing field is level.

It's all about how well you make the AI see and understand, and translate that into turning the wheel, changing forward/reverse, and hitting gas/brakes.


Robot vacuums put very few lives at risk and operate in a manifestly simple environment. They literally navigate by heading off in random directions until they crash into something, then turning. Self-driving cars are an entirely different domain.

(I'm referring to devices/products such as the Roomba.)


Already true of every system in your car. There are arguably a huge number of things that can go wrong in a car already.


All of those things have a good chunk of a century's worth of development (as well as trillions of dollars in R&D) put into making them more robust, building redundancy, etc. I think the parent comment has a good point in that this automated driving thing is an enormous change with a lot of risk behind it that doesn't appear to be well managed.


Yes, but there was a time for all of those things in which they did not have a century of development. Deploying them is how you get that century of development. The risk of this stuff is really quite small and local. If the technology has faults, they'll become apparent relatively quickly and harm a relatively small number of people. I see no reason to be overly cautious here. It's not a massive correlated risk, like say, global warming.


We put the new tires and brakes on the fastest vehicles we could make, then had a professional run them around a track with a dozen other lunatics. Anything that broke was fixed forever, and relatively few lives were lost. The number that died was low because there were fewer participants. The critical technology was well-tested before it was sent to the production line.

That's how we got our century of development. Testing, verifying, refinement. It wasn't by pushing out new stuff to a batch of users and seeing if everything worked out all right.


The number of recalls issued weekly by the car industry gives the lie to that claim. No, we just live with the risk. Because we love our cars so much.

Just recently it was discovered that a series of air bags manufactured in a small window of a couple of days would, instead of protecting the occupants, emit high-speed shrapnel into their chest and face, killing them. Quite an oversight.


That recall is actually far larger than a few airbags made over a couple days (it's more like 65-70 million airbags) but this is exactly my point. The operating principle of an SRS system is pretty straightforward. Computer detects a crash, computer fires the airbags. It's actually a separate module that's only got the one job. Despite its relative simplicity there are also recalls where it sometimes fails to fire the airbag [1], or sometimes fires it for no reason at all! [2]

If the automakers can't catch problems in a simple system like this, just imagine what sort of new and exciting failures we'll see in an inadequately-tested car autopilot! These things need to be very thoroughly tested before we allow lots of them on the road, and that needs to happen before they omit the steering wheel.

[1] http://www.automotive-fleet.com/channel/safety-accident-mana...

[2] https://www.autoblog.com/2015/10/30/fca-recalls-894k-total-v...


Don’t you think that the number of recalls would be even higher without motor racing being used as a proving ground?

If your airbag fires you’re already in an accident and might die anyway. You’ve already accepted that risk. There are probably better recall examples but I’m not sure it’d disprove the underlying point though.


But isn't that what they are doing right now? The only difference in terms of your argument seems to be that it's not other race car drivers and spectators at risk but other drivers and pedestrians. Then again they are not racing, but testing autonomous vehicles with drivers as a backup.


The only difference.

Other than that, Mrs. Lincoln, how was the play?


What's the reasonable test surface for a new type of brake or tire or electrical system? These are things that you can reasonably well do some fundamental engineering analysis on and say "that seems like a good design" or not. It's also something that you can test in a way that gives you a high confidence that your tests are sufficient to provide high confidence in the design. And even then there are problems that slip through.

Switching from a human driver to software automated driving is a much bigger change in automobiles than has ever come before. It's also one with a much larger reasonable test surface, by orders of magnitude. Systemic risk? How about meltdown and spectre vulnerabilities? The failure mode of most new hardware in cars requires either edge cases or corner cases to result in the worst case scenario of a crash at speed. And yet that's a very common failure mode in autonomous driving. You can't realistically have a car that can autonomously drive itself 90% or even 99% of the time, it must be nearly exactly 100% of the time to be worthwhile and safe.



I've had an automatic lawn mower for years, not sure what your point is there.


Then why don’t we make driving completely illegal? If you make an error at 70 mph you die and you will kill other innocent people. And it happens with a scarily high frequency today. Self driving cars are supposed to bring drastically down the number of car deaths, and I think this is a much more important goal than having an automatic lawn mower or a robot that unloads your dishwasher (seriously??).


I never believed either Tesla or Nvidia when they said last year that they would achieve Level 5 within a few short years. I have a rule of thumb for such announcements: subtract at least one level to get the real level and what their systems are actually capable of.

How could these cars reach Level 5 when they've only been tested for like a year or two on some roads? And I don't believe there is good enough simulation for them right now either. Even Google's CAPTCHA was asking me recently to pick the "bridge" from the pictures. There was no bridge.


Most of the public testing seems to be on the West coast and Southwest as well. What happens in environments where you have things like rain and snow?

My non-autonomous Honda Odyssey has cameras on the passenger mirror and rear hatch -- both of which were useless within 90 minutes of traveling from Lake Placid back home thanks to road spray and salt.

IMO Level 5 is viable for some relatively limited use cases, or with roads that have embedded telemetry. I'll believe it when intercity truck fleets convert for a few years.


I think this is why cheap solid-state LIDAR is critical to this goal. It’s currently too expensive and bulky for this kind of use, but if you can embed a few dozen LIDAR units in a car for under $1000, you get some really accurate telemetry that’s not affected by adverse weather. Cheap solid state LIDAR is gonna be the invention that powers the next industrial boom.


LIDAR doesn't really work in rain and snow. That's part of why this is a hard problem.


Agreed, but multiplexed solid state LIDAR will get vision capture up to maybe a 99% use case. What's out there now solves maybe 80%.

It's a hard problem being solved by incremental improvements — first you gotta make LIDAR work in concept (which is basically done), then you have to shrink it, then commercialize it, then commoditize it. So it'll take a decade or more to realize, but that's the ultimate thing I think we need to get these things into the mainstream.


On the other hand I don't get the obsession with level 5. There are a lot of use cases for fully autonomous level 4 e.g.

- pick me up in the parking lot

- drive out of parking garage and meet me there

- trip from LA to SF etc


Level 5 is where the utility starts to make up for the ballooning costs. Nobody is going to be able to buy a $60k autonomous Honda Accord — you’ll pay by the drink instead.


Exactly. A lot of people are still thinking about this in terms of a cool feature to add to your private car. That's not where this is going, that's not why anybody (except perhaps Tesla) is investing in this research. We're talking about a revolution in public transit.


Totally. There's no rain in Seattle ever.


I forgot about Seattle’s long cold winters full of bitter cold, snow and salt.


> Even Google's CAPTCHA was asking me recently to pick the "bridge" from the pictures. There was no bridge.

That's to be expected. It even says at the top of the challenge box: "If there's no match, just click next".

Negative reinforcement is reinforcement too.


Waymo progress seems to be slowing on critical disengagements (in older CA DMV reports these were called "safe operation disengagements" - they stopped reporting this type in 2017). These disengagements deal with perception issues, the software leading to unwanted maneuvers, inability to react to reckless road users, and incorrect predictions.

You can see the reduced rate of improvement when you dig into the numbers:

2015 0.16 disengagements per 1000 miles

2016 0.13 disengagements per 1000 miles

2017 0.12 disengagements per 1000 miles


Looking at miles per disengagement shows things a little differently.

  2015: ~6250 miles per disengagement
  2016: ~7692 (+1442)
  2017: ~8333 (+641)
Personally, I think a self-driving car with 1 notified disengagement per year is perfectly usable. Not so useful for the blind, but still very helpful.

That said, even reporting the same number every year could still represent progress if they keep pushing the car harder. AKA, if 2015 = highway driving in the day with nice weather, 2017 = inner city driving at night in a snowstorm the same number of disengagements would still represent a lot of progress.


This is a serious problem if you have to take over during disengagement. If you spend most of the year not driving you risk falling out of practice, and then the one time you must rely on your own skill is in an unusual situation requiring you to exercise good judgement at speed. I would expect most disengagements to result in accidents because of this.


There was a bunch of research on this a while back, essentially saying the driver needs to be paying driving level attention the whole time or the car needs to handle itself completely; you can't just expect someone to go from not driving to driving at 60mph without warning.

I mean, you can probably design autonomous "disengage" modes- hitting the emergency blinkers and heading for the breakdown lane is the extreme default. On a lesser level, the thing could just drive like I do. If a merge is too tricky? Just keep going straight, and recalculate on the next exit.

This is helped by the fact that modern cars seem to have pretty good "just don't run into something" sensors already, and from my own experience as a bad driver with a decent accident record, not running into other things is most of the battle.

So yeah, I could totally see autonomous cars evolving the ability to safely get themselves off the road. Of course, you're still gonna need to do that a lot less often than every three thousand miles, but you don't have to get it to zero, just around the point where normal cars break down mechanically.


I don't think people are going to enter their destination for every trip. So, people will still do some driving even with self driving systems.

On top of that, most situations with disengagement have not resulted in crashes. Really it's failing to pay attention during normal driving that's most often at fault; a 'buzzing, you need to pay attention now' system solves most of the problems even if the car is not driving itself.


You make fair points, but didn't Waymo first show cars with no steering wheel whatsoever as their target? I have only seen the Waymo branded vans in recent months, not the tiny custom cars (forgot the name of them).


WePod and some other companies already have self driving buses doing limited routes in foot traffic. Which is basically the Waymo tiny car demo.

It turns out to be a significantly easier problem than full-speed road traffic. Which IMO is why Google gave up on that concept demo.


> Personally, I think a self driving car with 1 notified disengagement per year as perfectly usable.

Funny, I interpret that as one guaranteed accident every year. Because by the time the autopilot disengages, you can be sure the driver, after being lulled to sleep for the last 8000+ miles, is in no position to step into the situation in time to avoid a crash.


You assume crash avoidance is the only reason a car would give up.

Self driving cars however have two goals: not hit anything AND get somewhere specific. It seems likely a car would disengage if it encountered a blocked road even if it did not hit anything. In that context we may see many cases where a car stops and 'gives up' which are in no way safety related and have no real safety risks.

While we don't necessarily know the specifics, people have been testing these self driving cars for a while and yet crashing seems very rare even with unexpected handoffs.


I would hope in those situations it would just stop and not disengage.

It should remain in control until commanded to disengage. After all, blocked roads will be freed at some point and even if they don't it is the AI's problem to get itself out of any trouble that it got itself into. And even if a blocked road might seem to be free of safety risks that doesn't mean you can abandon the car there, you're supposed to stay in control of the vehicle until it is parked.

The only situation that I can compare disengaging with is when there is stopped traffic in the mountains and the snow moves in, that's one of those situations where you might be ordered by the authorities to abandon your vehicle and seek shelter (assuming this is possible and not less safe than staying with the car). Those situations can take many days to sort out afterwards. But in almost all other situations that normally would occur you should stay in and in control of your car, so I'd assume the same would go for an AI based system.


While that may be desirable if the car is operating alone in production, in testing just about anything it's not sure of should probably result in a disengagement. From my sensors are giving conflicting information with my map, to I failed to read that sign... they all should probably result in a hand off.

After that hand off someone can look at the test data and choose a better response, but caution when operating a deadly vehicle is perfectly appropriate.


Agreed, but in that case it shouldn't be on public roads to begin with. If your software is so bad that the solution to complex situation is to throw up its hands and panic it has no place in traffic. After all, there are plenty of situations where inaction is just as dangerous or even more dangerous than action.

I feel pretty weird in the knowledge that people are operating vehicles with what to me comes across as barely out of beta software on roads shared with others.


Self-driving cars' safety track record during testing suggests these are already very well designed systems.

I also don't think you can get to self driving cars without testing them on public roads. So far it's been very safe and potentially it's going to save millions of lives so I don't have a problem with this.

Further, these cars are not simply turning off in traffic they are making errors known while continuing to drive as safely as possible. That's vastly safer than suppressing errors which generally results in people ignoring them.


Not sure I believe the title is justified given what the rest of the article says.

I'm no statistician, but I'm not sure if there are enough miles being driven that good conclusions can be drawn about whether self-driving cars are getting better or not.

Lots of testing is taking place outside of California due to other states' efforts (like changing regulations) to cater to it. So data from just California very likely doesn't tell the whole story.

Also, as has been mentioned here before, once they've solved and tested the easy cases, they might move on to solving and testing the harder ones. So there isn't an apples-to-apples comparison on test failure rates from one year to another because they could be testing different things.


We should compare this to having a random human driver drive, and see how often a third party would “take control”. Lord knows that if my dad were driving I would take control a lot more than a self driving car.


Yet your dad (likely, I'm guessing, but many like him anyway) has a license and can legally drive. I hope one thing that criticism of computer driver outcomes leads to is the conclusion that lots of human drivers should not be entitled to drive, and kill >1m people/year worldwide.


Does anyone know how much of the actual driving decisions in these cars are made by machine learning algorithms? I'm aware that ai/ml is heavily used on the vision side but how much of the actual driving component is traditional algorithms vs neural nets/svm/whatever?


I can try to speculate.

Steering and speed control could be done with simple PID control. You'd have to get more sophisticated for maintaining position, but you could maintain and control position using traditional control systems. When you're using sensors you determine where you are relative to other objects and that's still within this realm.
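
For what it's worth, here's a minimal sketch of that PID idea for holding a target speed; the gains and the toy vehicle response are made-up illustrations, not tuned for any real car:

  # Minimal PID sketch for speed control (illustrative assumptions throughout).
  class PID:
      def __init__(self, kp, ki, kd):
          self.kp, self.ki, self.kd = kp, ki, kd
          self.integral = 0.0
          self.prev_error = 0.0

      def update(self, error, dt):
          self.integral += error * dt
          derivative = (error - self.prev_error) / dt
          self.prev_error = error
          return self.kp * error + self.ki * self.integral + self.kd * derivative

  speed, target = 0.0, 30.0               # m/s
  controller = PID(kp=0.5, ki=0.1, kd=0.05)
  for _ in range(600):                    # 60 s of simulation at 10 Hz
      throttle = controller.update(target - speed, dt=0.1)
      speed += 0.1 * throttle             # toy response: speed tracks throttle
  print(round(speed, 1))                  # settles near the 30.0 m/s target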

However when certain classes of object can be expected to change position without observable indication it's hard to make a control system accurately predict that. There's no way to predict the future directly from what the car can see in all cases.

I would imagine that classifying objects and predicting which objects are likely to start moving is in the realm of ml/ai. I don't think anyone can formalize rules on that aspect of driving.


It depends on the company.

From what I've heard, Waymo's controller system has a long set of heuristics; many other competitors use learned algos.


What does it matter?


I suppose it doesn't really? It's a bit of a side question that I thought maybe someone would have insight into.


This idea is expensive to implement, but if you built dedicated tracks and put only the self-driving cars on those tracks, you could have very high speeds, and since only these cars would be on the tracks, with no other obstacles, it would be very safe.


You mean trains?


Similar to trains, but high speed, and you have your own cart; you do not need to wait for a scheduled train. It would not need to use tracks, just an exclusive road with specific sensors and electronics so the carts would be able to avoid collisions at intersections.


No surprise here - the last 20% of a project taking 80% of the development time.


The postulation of a limitation is basically a negative claim. "This can go only so far and no farther." Very hard to prove, and inherently myopic. Even if the observations are true, it's likely just a lull in the self-driving scene.

It's reminiscent of how some people once believed that a machine will never beat the top human players in chess.


Yes, it's Zeno's paradox (http://platonicrealms.com/encyclopedia/zenos-paradox-of-the-...). CGI has exactly that same problem - the first 99% is easy, the final 1% is impossible.


It continually amazes me how tech people are blindsided by this problem. Every time. It's practically the entire reason that the Gartner hype curve is a thing:

1) New technology X appears.

2) Technology X develops quickly as the first 80% of gains are achieved.

3) Tech "luminaries" incorrectly extrapolate the rate of progress into a distant, glorious future; write breathless commentaries about how Technology X will eat the world in N years!

4) Technology X stagnates at the 81% progress mark, as it becomes clear that the green-field progress rate cannot be sustained.

5) Technology Y appears! Huzzah!


Kind of similar situation has developed in crypto currencies. Yes, we can now accumulate vast wealth and transfer it around. But still there are $500M heists. And transaction clearance takes way too long. Fixing these inconveniences will take a long time.


I don't see how Zeno's paradox applies here. The root assumption is that the work is asymptotic with some "goal" - in this case "as good as a human".

Perhaps the real goal is actually "level 6 - better than human" or "level 7 futuristic transport". Looking at it that way, maybe that would indeed follow Zeno's paradox (we'll never get futuristic transport) but soon surpass how a human would drive permitting integration with human drivers (being more reliable and cheaper causing disruption).

Ultimately, I think the main roadblock to automation in this realm are the vast entrenched interests (big oil, big auto) who'd lose out due to efficiency gains (just like with solar power and EVs).


Regarding your last point, doesn't 'big auto' comprise some of the largest investors in this technology?


"It is possible that Waymo put its technology into more challenging scenarios in 2017" Seems extremely likely.


Really? Waymo went from driving exclusively in Mountain View, CA to driving across a number of cities in the US: Atlanta, SF, Detroit, Phoenix, & Kirkland [1]. The diversity in cities would easily add to the challenge!

[1] https://www.theverge.com/2018/1/30/16948356/waymo-google-fia...


That is what GP was saying: it is extremely likely that the difficulty level of the scenarios increased.

I wonder if the flattened curve of progress in the metric described by the article (disengagement rate) is actually intentional. Maybe a decision was made to increase difficulty of driving scenarios in such a way to keep that approximately constant.


There is an upper bound to autonomous vehicle performance with our current roadway infrastructure and that is well below the threshold of full self driving cars. Once we retrofit the roads with sensors, remove the reliance on existing physical signage, and add guardrails to prevent unanticipated occurrences as much as possible, we'll see real progress. Once everything (cars, roadway, services, etc) becomes networked and talks to each other, we can get rid of the steering wheels. My guess is we're 10 years away from limited availability and 50 years away from full build out country wide. One question - will a new technology come along before then and render driverless cars as a means of human transport irrelevant? Personally, I'm waiting for human drone delivery ;-)


One thing I haven't seen mentioned is that when you're driving yourself, you anticipate and prepare your body for the changes in direction and acceleration/deceleration, so the car appears to drive smoother. If the disengagement rate is not zero (human becoming just a passenger), you still have to pay attention to the road, and people may prefer to drive themselves because of this effect, maybe with the exception of the highway. If the road rules stay unchanged, the self-driving car will also usually be slower in getting to the destination, as it will be set to strictly follow the speed limits and yellow lights, and may be too deferential to other cars.


> One thing I haven't seen mentioned is that when you're driving yourself, you anticipate changes in direction and acceleration/deceleration, so the car appears to drive smoother.

And you're also holding on to the steering wheel, which makes a big difference.

Everyone I know who gets motion sickness in a car on a winding road never experiences it when they are the driver, only as a passenger.


What about today's taxis, buses, trains, planes? I don't see passengers paying attention to the road to prepare for driver-initiated braking, lane changes or speed adjustments. I would assume passengers not paying attention today in human-driven vehicles will continue to not pay attention in autonomous vehicles, because nothing changes (as long as these vehicles drive similarly to human drivers).


Except it's not like today's taxis, buses, trains or planes. It's more like riding with a friend who's never crashed their car, but sometimes will tell you "I can't handle this right now, take the wheel." and you don't have a choice.


If we consider remotely controlled vehicles, they seem like the perfect stepping stone to self-driving cars: assume self-driving in simple conditions (like the highway) while letting people handle the complex driving, and with remote drivers perhaps in places with lower costs of living and salaries, such a service might be cheaper than a taxi.

And this could create a large business, doing many miles every day, which would be the perfect means to gradually expand the role of self-driving.

Yet none of the big companies chose this (and I'm sure they're aware of it). Why?


I see 2 reasons. One, the whole point is to fully turn the driver role into a perfectly efficient commodity to reap rewards of zero marginal cost like typical tech companies. Two, remote driving probably involves some life threatening amount of lag.


Companies definitely are implementing remote control but they don't like to publicize it because it runs counter to the "autonomous" narrative.

In America cars have an important cultural role as a symbol of freedom and the idea that your "autonomous" car is centrally monitored and controllable is not a very good selling point.

But yes, all "autonomous" cars will absolutely have remote control functionality of some kind, at least a kill switch.


Why? - Isn't that obvious? A simple two-second hiccup in the cellular connection could kill everyone in the car.


Wouldn’t remote control cars require a near zero latency network connection at all times in all areas?


Phantom Auto, mentioned in the article, demoed their remotely-controlled cars at CES last month, and they were ferrying journalists around using a driver 500 miles away. Latency isn't as much of an issue as one might think, but to maintain network reliability Phantom uses 5 different networks simultaneously.

Waymo and Cruise (GM) are both building out call centres with agents that have some sort of remote control over the car.

Both Waymo and GM have emergency stop buttons connected to an air gapped computer that can take the vehicle to a minimum viable risk condition (which basically means pulling over to the side of the road) for security.


Yup! Even with LTE average-case latency of around 100ms (and easily double that for 4G/3G-only areas), worst case latencies are easily measured in _seconds_. According to Google, driver reaction times are measured in the range 0.7..3.0 seconds. Now add to that the bandwidth requirements for at least high quality front/back video and instrumentation.
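
Back-of-the-envelope, with assumed figures (70 mph and a handful of plausible delays, not measurements):

  # Distance covered before any response arrives; speed and delays are
  # illustrative assumptions only.
  speed_mph = 70
  speed_mps = speed_mph * 1609.34 / 3600          # ~31.3 m/s

  for delay_s in (0.1, 0.5, 2.0):                 # network latency + reaction time
      blind_m = speed_mps * delay_s               # metres travelled "blind"
      print(f"{delay_s:4} s delay -> {blind_m:5.1f} m travelled before a response")
  # 0.1 s -> 3.1 m, 0.5 s -> 15.6 m, 2.0 s -> 62.6 m (over a dozen car lengths)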

I doubt anyone is mad enough to think that, in the general case, remotely piloting a vehicle in a safety-critical scenario using these consumer networks is even a remotely sane option.


It would be computer assisted remote control, similar to how a drone works. The remote operator provides general directions but the computer has direct control over the vehicle systems to implement those directions in response to sensor feedback.

Human reaction time is very slow so to do better than a human definitely does not require zero latency.


This sounds a bit like what Cruise is planning to do. Not remote control, but remote instruction.

>“The specially trained operator [then] provides a domain extension to the vehicle to help define safe operating boundaries (e.g., permission to use the opposing traffic lane where cones are demarcating a new path of travel)

https://spectrum.ieee.org/cars-that-think/transportation/sel...
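
Reading that quote, the pattern sounds like the operator granting a scoped, time-limited permission while the onboard planner keeps actual control. A hypothetical sketch of what that could look like (all names and fields here are my invention, not Cruise's actual interface):

  import time
  from dataclasses import dataclass

  # Hypothetical only: the remote operator grants a "domain extension"; the
  # onboard planner stays in control and just checks whether a manoeuvre is
  # currently allowed.
  @dataclass
  class DomainExtension:
      permission: str        # e.g. "use_opposing_lane"
      zone_end_m: float      # how far ahead the permission applies
      expires_at: float      # wall-clock expiry

  class Planner:
      def __init__(self):
          self.extensions = []

      def grant(self, ext):
          self.extensions.append(ext)        # received from the remote operator

      def allowed(self, permission, distance_ahead_m):
          now = time.time()
          return any(e.permission == permission
                     and distance_ahead_m <= e.zone_end_m
                     and now < e.expires_at
                     for e in self.extensions)

  planner = Planner()
  planner.grant(DomainExtension("use_opposing_lane", zone_end_m=80.0,
                                expires_at=time.time() + 120))
  print(planner.allowed("use_opposing_lane", 40.0))    # True: inside the grant
  print(planner.allowed("use_opposing_lane", 200.0))   # False: beyond the zone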


You need sensors, primarily cameras, on the cars. Some of them are already there, ok.

Then you need to stream that data to a sort of cockpit at a distance, reliably and with minimal delay or lag spikes. Cause you know, otherwise you might kill the occupant(s).

It doesn't seem an easy problem to me, at least not if you want to make a business out of it.

Just for comparison, and maybe someone can correct me on this, aren't military drones more or less just used for bombing, and they're not supposed to run into opposition? I don't think fighters are remote controlled.


Betteridge's law of headlines: "Any headline that ends in a question mark can be answered by the word 'no'."

tl;dr of the article:

- Autonomous vehicles drove fewer miles in the state of California in 2017, but maybe made up for that in miles driven elsewhere.

- The disengagement rate will probably need to improve considerably before AVs are ready for widespread deployment

- Waymo's disengagement rate barely improved year over year, but that may have been because they are placing the cars in more difficult scenarios (their blog post suggests that is indeed the case)


The article does not say "no" or that autonomous cars are still getting better at any meaningful rate.

Maybe we're 90% of the way to practical self-driving, but won't reach 99% for the foreseeable future.


tome's law: In any discussion about an article whose title is a question, Betteridge's law is mentioned with probability 1.

https://news.ycombinator.com/item?id=9077549


Perhaps you should revise the law to just yes/no questions?

https://news.ycombinator.com/item?id=16288489

https://news.ycombinator.com/item?id=16276911

On the other hand, even those aren't holding up.

https://news.ycombinator.com/item?id=16277231


Good point and good observation! The statement of Betteridge's Law itself would have to be changed though

https://en.wikipedia.org/wiki/Betteridge's_law_of_headlines



