
Here is my armchair diagnosis: right before the car veers towards the barrier it drives through a stretch of road where the only visible lane marker is on the left. Then the right lane marker comes into view at about the point where the lane starts to widen out for the lane split. The lines that will become the right and left lane markers of the split left and right lanes respectively are right next to the van in front of the Tesla, and at this point resemble the diamond lane markers in the middle of the split lanes. My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.

If this theory turns out to be correct then Tesla is in deep trouble because this would be a very elementary mistake. The system should have known that the lanes split at this point, noticed that the distance between what it thought was the diamond lane marker and the right lane line (which was clearly visible) was wrong, and at least sounded an alarm, if not actively braked until it had a better solution.

This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong. I dubbed this property "cognizant failure" in my 1991 Ph.D. thesis on autonomous driving, but no one seems to have adopted it. It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed. Tesla seems to have done neither.




> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

This is a very good point: just like a human driver should slow down if they can't observe the road ahead well enough, an AI should slow down when it's not confident enough of its surroundings. This is probably very difficult to do, and I'm skeptical about your claim that an AI can be engineered to always detect its own failures. Further, I naively believe that Tesla is already doing a lot to detect conflicting inputs and other kinds of failures. Maybe being careful enough would prevent autopilot from working at all, so compromises have been made?

I probably don't understand AIs well enough to understand how they can be engineered to (almost) always recognize their own failures. But if a simple explanation exists, I'd love to hear it.


> if a simple explanation exists, I'd love to hear it

Basically it's a matter of having multiple redundant sensors and sounding the alarm if they don't all agree on what is going on, and also checking if what they think is going on is in the realm of reasonable possibility based on some a priori model (e.g. if suddenly all of your sensors tell you that you are 100 miles from where you were one second ago, that's probably a mistake even if all the sensors agree). That's a serious oversimplification, but that's the gist.
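A toy Python sketch of that gist (hypothetical sensor readings and thresholds, nothing resembling a real autopilot stack): fuse redundant position estimates, and flag a cognizant failure when the sensors disagree with each other or with a simple a priori motion model.

    import statistics

    def check_sensors(estimates_m, previous_position_m, max_speed_mps, dt_s,
                      agree_tol_m=2.0):
        """Fuse redundant 1-D position estimates (metres) and report health.

        healthy is False when the sensors disagree with each other, or when the
        fused estimate is implausible given how far the vehicle could have moved.
        """
        fused = statistics.median(estimates_m)        # robust to one bad sensor
        spread = max(estimates_m) - min(estimates_m)  # cross-sensor disagreement
        max_travel = max_speed_mps * dt_s             # a priori motion model
        plausible = abs(fused - previous_position_m) <= max_travel
        healthy = spread <= agree_tol_m and plausible
        return fused, healthy

    # One sensor claims we jumped far away: the median ignores it, but the
    # disagreement is still flagged so a supervisor can alarm or fall back.
    pos, ok = check_sensors([105.1, 104.8, 9000.0], previous_position_m=103.0,
                            max_speed_mps=40.0, dt_s=0.1)
    print(pos, ok)   # 105.1 False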


That's exactly how a lot of commercial aircraft systems work: they have three redundant units, and if two agree and one disagrees, the dissenting one is ignored.

But it is more complicated than that when you're talking about algorithms and complex systems. If you had three copies of the exact same system, they'd likely all make the same mistake, so you'd gain no safety improvement against logical limitations, only against actual hardware malfunctions.

I would like to see auto-braking decoupled from autopilot completely, so that even if autopilot drives you into a wall, at least the brakes slow you down.


https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...

I'm not an expert, but it seems like for airplanes there are standards that manufacturers need to abide by for avoidance systems, which would mean testing and certification by an independent association before it's deployed. With manufacturers doing OTA updates there's really no guarantee it's been tested; you'd have to trust your life to the QA process of each manufacturer. And not only that of the car you're driving, but of the car next to you!


This has been mentioned a lot of times lately (the auto-braking). When you think about it, you realize there are a million moments during normal driving where you have an obstacle ahead: other cars, lane dividers like these that you might _have_ to pass closely or at least have your car pointed at for a few seconds, tight curves on a walled road, high curbs, buildings on the edge of the road, etc. It's not that simple, and I bet a lot of this is already taken into account by the software.


My understanding is that auto-braking during autonomous driving doesn't normally react to stationary objects like barriers. Otherwise it would brake for things like cardboard boxes, plastic bags, and other debris that often makes its way onto the road. Instead, autonomous driving AI puts a lot of trust in its lane guidance.


In what world would you not want your car to brake for a cardboard box? You don’t know what is inside. It could be empty, or could be full of nails. Or a cat.

Regardless, it should be trivial to have different behavior for a moving object that enters the road versus a rigid object that has never been observed moving.


Sometimes it's just a piece of paper or something similarly harmless that happens to be positioned on the road in such a way that it appears to be an obstacle.


If a system can't reliably differentiate between a flat object lying on the road, an object fluttering rapidly in the air, and a stationary object that is protruding from the road, it has no right to be in control of a car.


Will it run over an accident victim or an animal, or slow down and ask the human?


As far as I know, things like the space shuttle had 3 different guidance computers developed by 3 independent teams. They were given the same requirements and came up with the hardware and software independently.


I'd say that Kalman filters are perfect for this-- they optimally blend estimates from multiple sources, including a model of the system dynamics (where you think you're going to be next based on where you were before, i.e. "I'm not going to suddenly jump 100 miles away"), and all the various sensor inputs ("GPS says I'm here, cameras say I'm here, wheel distance says I'm here").

On top of that, Kalman innately tracks the uncertainty of its combined estimate. So you can simply look at the Kalman uncertainty covariance and decide if it is too much for the speed you are going.
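A minimal sketch of that idea, assuming a one-dimensional constant-velocity filter over lateral lane offset (all noise values and the alert threshold are made up): when the camera stops reporting the lane, the offset variance grows, and a supervisor could use exactly that to hand off or slow down.

    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])    # state transition: [offset, offset_rate]
    H = np.array([[1.0, 0.0]])         # we only measure the lateral offset
    Q = np.diag([0.01, 0.01])          # process noise (made up)
    R = np.array([[0.25]])             # camera measurement noise (made up)

    x = np.zeros((2, 1))               # state estimate
    P = np.eye(2)                      # estimate covariance

    def step(x, P, z):
        """One predict/update cycle; z is the measured offset, or None if the lane was lost."""
        x = F @ x
        P = F @ P @ F.T + Q
        if z is not None:
            y = np.array([[z]]) - H @ x
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)
            x = x + K @ y
            P = (np.eye(2) - K @ H) @ P
        return x, P

    # Two good camera frames, then the lane markings disappear for a while.
    for z in [0.10, 0.12, None, None, None, None, None, None]:
        x, P = step(x, P, z)
        offset_var = float(P[0, 0])
        too_uncertain = offset_var > 0.3   # made-up threshold, ideally speed-dependent
        print(f"offset variance {offset_var:.3f}",
              "-> hand off / slow down" if too_uncertain else "")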

I really wonder if Tesla is doing that...


I feel that’s the problem. Techniques like SVM can provide reasonable definitions of confidence. Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??


> Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??

Sorta. There are ways to extract kinds of uncertainty and confidence from NNs: for example, Gal's dropout trick where you train with dropout and then at runtime you use an 'ensemble' of multiple dropout-ed versions of your model, and the set of predictions gives a quasi-Bayesian posterior distribution for the predictions. Small NNs can be trained directly via HMC, and there are arguments that constant-learning-rate SGD 'really' implements Bayesian inference and an ensemble of checkpoints yields an approximation of the posterior, etc. You can also train RL NNs which have an action of shortcutting computation and kicking the problem out to an oracle in exchange for a penalty, which trains them to specialize and 'know what they don't know' so they choose to call the oracle when they're insufficiently sure (this can be done for computational savings if the main NN is a small fast simple one and the oracle is a much bigger slower NN, or for safety if you imagine the oracle is a human or some fallback mechanism like halting).
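A toy numpy version of that dropout trick (made-up, untrained weights, just the mechanics): keep dropout switched on at inference, run many stochastic forward passes, and read the spread of the predictions as a rough uncertainty estimate.

    import numpy as np

    rng = np.random.default_rng(0)

    # Tiny fixed two-layer net with random (untrained) weights, purely illustrative.
    W1 = rng.normal(size=(4, 16)); b1 = np.zeros(16)
    W2 = rng.normal(size=(16, 1)); b2 = np.zeros(1)

    def forward(x, drop_p=0.5):
        h = np.maximum(0, x @ W1 + b1)            # ReLU hidden layer
        mask = rng.random(h.shape) > drop_p       # dropout stays ON at test time
        return (h * mask / (1.0 - drop_p)) @ W2 + b2

    x = rng.normal(size=(1, 4))                              # one input feature vector
    samples = np.array([forward(x) for _ in range(100)])     # 'ensemble' of dropout-ed passes
    print(f"prediction {samples.mean():.2f} +/- {samples.std():.2f}")  # spread ~ uncertainty proxy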

I have some cites on these sorts of things in https://www.gwern.net/Tool-AI and you could also look at the relevant tags https://www.reddit.com/r/reinforcementlearning/search?q=flai... and https://www.reddit.com/r/reinforcementlearning/search?q=flai...


Yes.

In particular, it's possible to learn the variance of the return using TD-methods with the same computational complexity as learning the expected value (the value function). See [0] for how to do it via the squared TD-error, or [1] for how to estimate it via the second moment of the return, and my own notes (soon to be published and expanded for my thesis) here [2].

It turns out that identifying states with high variance is a great way of locating model error-- most of the real-world environments are fairly deterministic, so states with high variance tend to be "aliased" combinations of different states with wildly different outcomes. You can use this to improve your agent via either allocating more representation power to those states to differentiate between very similar ones, or have your agent account for variance when choosing its policy. For example, Tesla could identify when variance spikes in its model and trigger an alert to the user that they may need to take over.

Additionally, there's work by Bellemare [3] for estimating the distribution of the return, which allows for all sorts of statistical techniques for quantifying confidence, risk, or uncertainty.

---

0. https://arxiv.org/abs/1801.08287

1. https://arxiv.org/abs/1607.00446

2. http://rl.ai/posts/fun-with-the-td-error-part-ii.html

3. https://arxiv.org/abs/1707.06887
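For anyone who wants to see the squared-TD-error idea from [0] in miniature, here is a rough tabular sketch on a made-up noisy chain MDP (it assumes the value estimates are already roughly accurate, which is the usual caveat):

    import numpy as np

    rng = np.random.default_rng(0)

    # Made-up 4-state chain: always move right, episode ends after state 3.
    # The reward leaving state 2 is noisy, so returns passing through it have high variance.
    n_states, gamma, alpha = 4, 0.99, 0.05
    V = np.zeros(n_states)      # value estimates
    VAR = np.zeros(n_states)    # estimates of the variance of the return

    def reward(s):
        return rng.normal(0.0, 2.0) if s == 2 else 1.0

    for _ in range(5000):
        for s in range(n_states):
            r = reward(s)
            v_next = V[s + 1] if s + 1 < n_states else 0.0
            delta = r + gamma * v_next - V[s]     # ordinary TD error
            V[s] += alpha * delta
            # Variance TD update: the squared TD error plays the role of the reward
            # and the discount becomes gamma**2.
            var_next = VAR[s + 1] if s + 1 < n_states else 0.0
            VAR[s] += alpha * (delta**2 + gamma**2 * var_next - VAR[s])

    # VAR is large for states whose return passes through the noisy reward,
    # exactly the "high variance = something is aliased/uncertain here" signal.
    print(np.round(V, 2), np.round(VAR, 2))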


Yes, there is an active branch in deep learning (Bayesian deep learning) trying to model uncertainty measurements into neural networks.

Older ideas are http://mlg.eng.cam.ac.uk/yarin/blog_2248.html or http://papers.nips.cc/paper/7219-simple-and-scalable-predict...

Basically, exact Bayesian neural networks can model confidence but are not practical in current real-world scenarios. Thus, lots of methods rely on approximating Bayesian inference.
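The second link is the deep-ensembles flavour of this. A toy regression sketch of that idea (scikit-learn stand-in networks, made-up data): train the same small net from several random seeds and read disagreement as uncertainty, which tends to blow up away from the training distribution.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(200, 1))
    y = np.sin(X).ravel() + rng.normal(0, 0.1, size=200)

    # Ensemble of identical nets trained from different random initialisations.
    ensemble = [MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000,
                             random_state=seed).fit(X, y) for seed in range(5)]

    X_test = np.array([[0.0], [6.0]])     # 6.0 is far outside the training data
    preds = np.stack([m.predict(X_test) for m in ensemble])   # shape (members, points)
    print("means:", preds.mean(axis=0))
    print("spread:", preds.std(axis=0))   # typically much larger for the out-of-distribution point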


Not an expert, but aren't the output nodes thresholded to make a decision? Couldn't how far you are from the threshold be interpreted as a kind of confidence?


For a single perceptron, not for the network as a whole.


Yep that's exactly what you would do.

However, in practice, this usually is just not a good indicator of confidence. NNs are notoriously overconfident.
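For the curious, here is roughly what reading a "confidence" off the softmax output looks like, plus the usual post-hoc patch (temperature scaling); the logits here are made up.

    import numpy as np

    def softmax(logits, temperature=1.0):
        z = np.asarray(logits, dtype=float) / temperature
        z = z - z.max()                  # numerical stability
        e = np.exp(z)
        return e / e.sum()

    logits = [4.1, 1.2, 0.3]             # made-up network outputs for 3 classes
    print(softmax(logits).max())                      # naive "confidence": ~0.93
    # NNs are often miscalibrated; temperature scaling (fit T on held-out data)
    # softens these overconfident scores after the fact.
    print(softmax(logits, temperature=3.0).max())     # ~0.60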


> I dubbed this property "cognizant failure"

I like that term. When I was involved in the medical instrumentation field, we had a similar concern: it was possible for an instrument to produce a measurement, e.g., the concentration of HIV in blood serum, that was completely incorrect, but plausible since it's within the expected clinical range. This is the worst-case scenario: a result that is wrong, but looks OK. This could lead to the patient receiving the wrong treatment or no treatment at all.

As much as possible, we had to be able to detect when we produced an incorrect measurement.

This meant that all the steps during the analyte's processing were monitored. If one of those steps went out of its expected range we could use that knowledge to inform the final result. So the clinician would get a test report that basically says, "here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."
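A toy sketch of that kind of step monitoring (the step names and ranges here are invented, not from any real instrument): every monitored step can downgrade the final result, from valid to "interpret with caution" to invalid.

    # (warn_range, reject_range) per monitored processing step (hypothetical values).
    ACCEPTABLE = {
        "incubation_temp_C": ((36.5, 37.5), (35.0, 39.0)),
        "reagent_volume_uL": ((98, 102), (90, 110)),
    }

    def qc_status(step_readings):
        status, flags = "valid", []
        for step, value in step_readings.items():
            (w_lo, w_hi), (r_lo, r_hi) = ACCEPTABLE[step]
            if not (r_lo <= value <= r_hi):
                status = "invalid"
                flags.append(f"{step}={value}: out of range, result invalid")
            elif not (w_lo <= value <= w_hi):
                if status == "valid":
                    status = "caution"
                flags.append(f"{step}={value}: slightly out of range, interpret with caution")
        return status, flags

    print(qc_status({"incubation_temp_C": 37.8, "reagent_volume_uL": 100}))
    # -> ('caution', ['incubation_temp_C=37.8: slightly out of range, interpret with caution'])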

Like most software, the "happy path" was easy. The bulk of the work we did on those instruments was oriented towards the goal of detecting when something went wrong and either recovering in a clinically safe manner or refusing to continue.

With all the research into safety-critical systems over decades, it's hard to see how Tesla could not be aware of the standard practices.


"here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."

there is no "slightly out of range". Its either in the range or outside. Valid or invalid, when it comes to critical tests like these.

if temperature deviation is outside of acceptable deviation range then thats a system fault and result should have been considered invalid.

Back in the days there was much less regulatory oversight and products like that could slip through the cracks, Resulting to deaths, missed diagnosis etc etc.

The same with Teslas AP: its either 100% confident in the situation or not. If the car is not confident in whats happening - it should shut down. If that happens too often then the AP feature should be removed / completely disabled.

How many more people have to get into accidents? I know, if Musk's mom (knock on wood) was a victim of this feature then things would be taken more seriously.


It's so nice to see people critique products they know nothing about with absolutely no useful context!

You do realize that I gave a much simplified view of the situation as this is a web forum discussing a related subject, not an actual design review of the instrument, right?

For any process I can set multiple "acceptable ranges" depending on what I want to accomplish: a "reject" range, an "ok, but warning" range, a "perfect, no problem" range, or a "machine must be broken" range.

Everything is context dependent; nothing is absolute.


> I like that term.

Thanks!


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That is an extremely surprising result. How is that possible? Are you really claiming that any control system can be engineered to detect that it has failed in any possible manner? What's an example of an actual real-world system like that?


> Are you really claiming that any control system can be engineered to detect that it has failed in any possible manner?

No, of course not. But it is possible to reduce the probability of non-cognizant failure to arbitrarily low levels -- at the expense of cost and the possibility of having a system that is too conservative to do anything useful.


I totally agree with your analysis. What is more worrying is that even following the rest of the traffic would have solved the problem. If you are the only car doing something then there is a good chance you are doing something wrong.

Also: the lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane. If two systems give conflicting data some kind of alarm should be given to the driver.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

GPS is not accurate enough to reliably pinpoint your location to a particular lane. Even WAAS (https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System) can have up to 25 feet of error. Basic GPS is less accurate than that.

In fact, it's possible that GPS error was a contributing factor here but there's no way to know that from the video.


RTK [0] systems definitely can have centimetre-level accuracy, though they require a fixed base station within 10-15 miles or so to broadcast corrections. It would also require roads to be mapped to a high level of accuracy.

I have seen self-driving test cars in Silicon Valley (frequently, especially in the last year or so) using these types of systems, so they are at least being tested. I've also heard discussion of putting RTK base stations on cell-phone towers to provide wide area coverage, but I'm not sure if much effort has been put into that. I do know vast areas of the agricultural Midwest are covered in RTK networks -- it's used heavily for auto-steering control in agriculture.

[0] https://en.wikipedia.org/wiki/Real_Time_Kinematic


I've heard on and off for many years (15 years?) about ad-hoc wireless networks for cars. Whether it's car to car, or car to devices on the ground (lane markings, stop lights), so it knows where it is in relation to the road and other cars, and it would know if the car ahead is going to brake or slow down.

Now the cars are relying on cameras and lidar to figure things out. What happened to putting sensors on the road to broadcast what/where the road is? Is that out of the question now because of cost?


I can see lots of potential problems with that; for instance, you'd have to rely on the sensor data of that other car and you'd have to believe that the other car is telling you the truth. Lots of griefer potential there.


I drive a lot with my GPS (TomTom) on and it rarely gets the lane I'm in wrong; usually that only happens in construction zones.

So even if the error can be large in practice it often works very well.

It would be very interesting to see the input data streams these self driving systems rely on in case of errors, and in the case of accidents there might even be a legal reason to obtain them (for liability purposes).


> it often works very well

Well, yeah, it's not like autopilot is driving cars into barriers every day. But I don't think "often works very well" (and then it kills you) is good enough for most people. It's certainly not good enough for me.


According to the Reddit poster:

> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.

> Also, start paying attention as you drive. Centering yourself in the lane is not always a good idea. You'll run into parked cars all over, and lots of times before there is an obstacle the lane gets wider. In general, my feel is that you're better off to hug one of the markings.

It kind of does sound like autopilot is steering vehicles into barriers every day, or it would be if drivers weren't being extra vigilant: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...

Seems like we need a study to assess whether the kind of protection AP offers outweighs the extra challenges it adds to a daily drive.


That's absolutely true, however, I did not mean it to take the place of whatever they have running right now, I meant for it to be an input into a system that can detect if there is a potential problem and the driver should be alerted.

Because if I interpret the video correctly, either the car was partially in the wrong lane before the near-accident or it was about to steer itself into a situation like that. And that means it is time to hand off to the driver, because the autopilot has either already made an error or is about to make one.

Either way the autopilot should have realized the situation was beyond its abilities, and when GPS and camera inputs conflict (which I assume was the case) that's an excellent moment for a hand-off.


Don't navigation systems 'snap' to the most probable lane you are driving in based on general direction instead of your exact GPS coordinates?

A good example of this I think is when driving in tunnels, there is no GPS info there, but the navigation shows you following the (curved) road.


Good nav systems will fuse whatever inputs they can get including GPS, INS, compass, etc. in order to figure out and ‘snap’ to what road you’re on. Snapping to an individual lane is still AFAIK beyond these systems’ sensors’ capability. You will need an additional input like a camera for this that looks for lane markings.
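A toy sketch of that "figure out and snap" step (hypothetical candidate roads and weights): score each nearby road geometry by its distance to the fix and by how well its bearing matches the fused vehicle heading, then snap to the cheapest one.

    # Hypothetical candidate roads near the current fix: (name, bearing_deg, distance_m).
    candidates = [
        ("I-5 mainline",      175, 9.0),
        ("Express lane exit", 160, 7.5),
    ]

    def snap(candidates, vehicle_bearing_deg, dist_weight=1.0, heading_weight=0.5):
        """Pick the road whose geometry best matches both position and heading."""
        def cost(road):
            _name, bearing, dist = road
            heading_err = abs((bearing - vehicle_bearing_deg + 180) % 360 - 180)
            return dist_weight * dist + heading_weight * heading_err
        return min(candidates, key=cost)

    # Heading agreement lets the slightly farther mainline win over the nearer exit geometry.
    print(snap(candidates, vehicle_bearing_deg=174))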


Yes, they do that, and they rarely get that wrong.

Also, the better ones have inertial backup & flux gate for when GPS is unavailable. Inertial backup could also help to detect the signatures of lane changes and turns.


Try driving near a place with a lot of roads that are close together, like an airport. You’ll quickly see that snapping doesn’t really work, especially if you deviate from the route the GPS is trying to take you on.


I do this all the time and it works remarkably well. Amsterdam and the roads around it are pretty dense, up to 6 lanes each way in quite a few places with all kinds of flyovers and multi-lane exits.


At what speeds? I'm in NYC, similar dense road systems. At a reasonable speed, snapping still works because it knows where you came from. In stop-stop-stop-go traffic combined with lane weaving and ramp merging, I've found it is not hard to trick the GPS into thinking you're on a parallel access road or an unrelated ramp for a little while.

It's possible your highways are newer or less congested than ours?


Counterpoint: my Lenovo tablet running Google Maps often seems to get confused about lanes. E.g. it will think I'm taking an exit ramp when I am still on the main road. Once the ramp and the road diverge enough it will recover, but not before going into its "rerouting" mode and sometimes even starting directions to get back on the road I'm already on.


That sounds pretty bad if that's the thing you rely on to get you where you're going. Here in NL interchanges are often complex enough that without the navigator picking the right lane you'd end up taking the wrong exit.

I don't use Google Maps for driving but I know plenty of people who do, and having been in the car with them on occasion has made me even happier with my dedicated navigation system. The voice prompts are super confusing, with a lot of irrelevant information (and usually totally mis-pronounced), and now you are supplying even more data points suggesting that the actual navigation itself isn't as good as it could be.

I suspect this is a by-product of doing a lot of stuff rather than just doing one thing and doing that well.


Here in D.C., Google Maps will regularly get confused about whether you're on the main road or a ramp, whether you're on an overhead highway or the street underneath it, or whether you're on a main road or the access road next to it.


My FIATChrysler Uconnect system will sometimes think I'm on the road above when driving under a bridge.

Sometimes it'll shout "The speed limit is 35 miles an hour!" when I go under a bridge carrying a 35MPH road, even though the road I'm on has a limit of 65.


Is it using some other localization technique to complement the GPS? GPS at best can give accuracy within 3 meters (if I remember right). To make it more accurate, laser/radar beams are shot out to landmarks whose precise coordinates are known, and from those the pinpoint coordinates of the ego vehicle are determined.


The highest resolution GPS receivers have two changes:

1. Base stations at known locations broadcast corrections for factors like the effects of the ionosphere and troposphere, and inaccurate satellite ephemerides. If you have multiple base stations, you can interpolate between them (Trimble VRS Now [1] does this, for example)

2. Precise measurements by combining code phase (~300m wavelength) and carrier phase on two frequencies (~20cm wavelength), plus the beat frequency of the two frequencies (~80cm)

With these combined, and good visibility of the sky, centimetre-level accuracy is possible.

Autonomous vehicles will often combine this with inertial measurement [2] which offers a faster data rate, and works in tunnels.

Many people also expect autonomous vehicles to also track lane markings, in combination with a detailed map to say whether lane markings are accurate.

[1] http://www.trimble.com/positioning-services/vrs-now [2] http://www.oxts.com/products/rt3000/


Inertial, flux gate compass.


"GPS is not accurate enough to reliably pinpoint your location to a particular lane."

My phone has GPS accurate to within 1 foot. I use it for mining location all the time. It uses differential GPS plus Inertial sensors.


When I think of self-driving car technology, I think of all of these inputs (maps, cameras, GPS, etc.) being fed into a world model from which inferences are drawn.[1] Here, the car could have known from cameras and maps that there was a lane split, and should have been able to detect that the line following was giving a conflicting inference compared to those other sources. The question is: does Tesla actually do that? Because if not, it's not really self-driving car technology. It's not a glitch in a product that's 90% of the way there; it's a fundamentally more basic technology.

[1] See https://www.ri.cmu.edu/pub_files/2009/6/aimag2009_urmson.pdf at 21-23.
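A minimal sketch of the kind of cross-check described above (all signal names and tolerances are hypothetical): compare what the vision-based lane follower infers against what the map predicts for the current position, and treat a big disagreement as a reason to alert or slow down rather than silently trusting one source.

    def lanes_conflict(camera_lane_width_m, map_lane_width_m,
                       camera_center_offset_m, map_center_offset_m,
                       width_tol_m=0.75, offset_tol_m=0.5):
        """Flag a conflict between the camera's lane inference and the map's prediction."""
        width_mismatch = abs(camera_lane_width_m - map_lane_width_m) > width_tol_m
        offset_mismatch = abs(camera_center_offset_m - map_center_offset_m) > offset_tol_m
        return width_mismatch or offset_mismatch

    # At a lane split the "lane" the camera sees suddenly looks far too wide compared to
    # the mapped lane: conflicting inferences, so alert the driver instead of re-centering.
    print(lanes_conflict(camera_lane_width_m=6.8, map_lane_width_m=3.7,
                         camera_center_offset_m=1.4, map_center_offset_m=0.1))   # True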


The whole problem, it seems to me, is that getting a 90% solution for a self-driving vehicle is no longer hard. The remaining 10% is super hard, and the degree to which present-day solutions tend to get it wrong is worrisome enough that if the quality doesn't jump up very quickly, the whole self-driving car thing could end up in an AI winter of its own making. And that would be a real loss.

This is not a field where 'move fast and break stuff' is a mantra worth anything at all; the stuff that gets broken is people's lives, and as such you'd expect a far more conservative attitude, akin to what we see in aerospace and medical-equipment software development practices.

But instead barely tested software gets released via over-the-air updates and it's anybody's guess what you'll be running the next time you turn the key.

I agree with you that the software should have been able to detect that something was wrong either way: either it was already halfway in the wrong lane or it was heading to be halfway in the wrong lane, a situation that should not have passed without the car at least alerting the driver.

And from what we have seen in the video it just gave one inferred situation priority over the rest, and that particular one happened to be dead wrong about its interpretation of the model.


I agree with all you've said.

One of my concerns with self-driving systems is that they don't have a good model of the world they are operating in. Yes, sure, they build up 3-D models using multiple sensor types, and react to that accordingly, oftentimes with faster reflexes than a human.

However, consider this situation with pedestrians A and B, while driving down a street, approaching an intersection.

Pedestrian A, on the edge of the sidewalk, is faced away from the street, looking at his phone. The chances of him suddenly stepping out in front of my car are exactly 0.0%.

Pedestrian B, on the edge of the sidewalk, is leaning slightly towards the street, and is looking anxiously down the street (the wrong way). The chances of B suddenly stepping out is high, and the chances of B not seeing my car are very high (because B is being stupid/careless, or else is used to UK traffic patterns).

I, as a human, will react to those situations differently, preemptively slowing down if I recognize a situation like with Pedestrian B. Autonomous driving systems will treat both situations exactly the same.

From everything I've read about so far, current driving systems have no way of distinguishing between those two specific situations, or dealing with the myriad of other similar issues that can occur on our streets.


Another similar situation where autopilot may fail: it sees a ball rolling across the street. Will it slow down, because a child may be chasing the ball?


I used to contemplate similar situations. For example, a stationary person on a bike at the top of a steep driveway that's perpendicular to the vehicle's driving direction. But after seeing the video of the Arizona incident I stopped. It's too soon to consider these corner cases when some systems on the public road can't even slow down for a person in front of the fucking vehicle.


That's because waymo is playing chess while uber plays checkers. For waymo, these corner cases probably matter. Uber is not actually pursuing self driving tech, they are pretending to do so to defraud investors. Their troubles aren't representative therefore.


The saying in those fields is "the rules are written in blood". The Therac-25 (a medical device) is taught in practically every CS Ethics class for having changed software development practices for such things.

The cynic in me thinks self-driving will require the same, the rules and conservative practices will only come after a bunch of highly publicized, avoidable failures that leave their own trail of blood.


I wonder how the public would have responded if the person killed in that Uber accident a while ago had been a kid on a bicycle, and how the response will be when/if a Tesla on autopilot plows into a school bus (a situation they seem to have problems with: stationary objects partially in a lane).


If there's anything left to be conservative about. The current climate of "drive fast and kill people" might wipe out the field via PR.


This is where my chief complaint with Musk comes in. His whole bravado as a salesman is probably having a longer-term impact on AI for cars. He keeps pushing this as an autopilot when it's not one.


Regardless of the lane recognition, it has to at least detect the obstacle in front of it; that's something that a basic collision-avoidance system can do.


> What is more worrying is that even following the rest of the traffic would have solved the problem.

That's what makes this example bizarre to me. I had thought that AutoPilot's ideal situation was having a moving vehicle in front of it. For example, AP does not have the ability to react to traffic lights, but can kind of hack it by following the pace of the vehicle ahead of it (assuming the vehicle doesn't run a red light):

https://www.youtube.com/watch?v=EXs4qZlDWbQ


That's a heck of an ass-u-mption. Running the red light as the n-th car in line exponentially increases the probability you'll get into a collision.


> If you are the only car doing something then there is a good chance you are doing something wrong.

I’m surprised self-driving systems don’t do this (do they?). One or more vehicles in front of you that have successfully navigated an area you are now entering is a powerful data point. I’m not saying follow a car off a cliff, but one would think the behavior of followed vehicles should be somehow fused into the car’s pathfinding.


"Follow the leader" is a main ingredient in flocking behavior in birds, who rarely if ever collide in flight.


> If you are the only car doing something then there is a good chance you are doing something wrong.

Following this principle would probably result in a lot of people angry that their autopilot had gotten them a speeding ticket.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

Aside from GPS' accuracy as mentioned in other replies, also take into account the navigation system's map material. The individual lanes are probably not individually tracked on the map; there is likely a single track per road, with metadata specifying the number of lanes amongst other features. So even if GPS had provided very accurate position readings, the map source material might not even match that level of detail.


Pretty sure I heard an audible warning in the vid.


No, that's the sound you hear when you override and take over from the autopilot.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed

This seems to me to be a clearly incorrect (and self-contradictory) claim. It entirely depends on your definition of failure.

Your analysis seems fine. The big problem is that the "autonomous" driver is using one signal (where are the lines on the edge of the road?) to the near exclusion of all others (is there a large stationary solid object in front of me?)

Maybe Tesla should have hired George Hotz (sp?) if only to write a lightweight sanity-check system that could argue with the main system about whether it was working.


> This seems to me to be a clearly incorrect (and self-contradictory) claim.

I guess that means I was able to pull the wool over the eyes of all five members of my thesis committee because none of them thought so.

> It entirely depends on your definition of failure.

Well, yeah, of course. So? There is some subset of states of the world that you label "failure". The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?


> The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?

Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not; I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".


> Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not

They seem the same to me. In fact, in 27 years you are the first person to voice this concern.

> I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".

Could be, but that would be a publishable result. So... publish it and then we can discuss.


What happens if all your sensors fail, including the one that senses the failure of your sensors? What happens if the power source disconnects?


Having all your sensors fail is actually a very easy case. Imagine if all of your sensors failed: suddenly you could not see, hear, feel, smell, or taste... do you think it would be hard to tell that something was wrong?


Having your sensors fail doesn't mean they're not providing data. It means they're not providing accurate data. In humans, we would call this hallucinating, and humans in fact cannot generally tell that they are hallucinating.


> Having your sensors fail doesn't mean they're not providing data.

That is one possible failure mode. But you're right, it's not the only one.

There is an extensive literature on how to detect and correct sensor errors resulting from all kinds of different failure modes.


Arguing that it's possible to build an autonomous system that is sometimes wrong but always knows when it's wrong seems like a stretch to me.


A system that can sense when it is in a failure mode is analogous to a “detector” in the Neyman-Pearson sense. A detector that says it is OK when it is actually in a failure condition is said to have a missed detection (MD).

OTOH, if you say you’ve failed when you haven’t, that’s a false alarm (FA).

In general, there is a trade-off between MD and FA. You can drive MD probability to near-zero, but typically at a cost in FA.

Again in general, you can’t claim that you can drive MD to zero (other than by also driving FA probability to one) without knowing more about the specifics of the problem. Here, that would be the sensors, etc.

In particular, for systems with noisy, continuous data inputs and continuous state spaces -- not even considering real-world messiness -- I would be surprised if you could drive MD probability to zero (i.e., PD to one) without a very high cost in FA probability.

As a humbling example, you cannot do this even for detection of whether a signal is present in Gaussian noise. (I.e., PD = 1 is only achievable with PFA = 1!) PD = 1 is a very high standard in any probabilistic system.

Discrete-input systems can behave differently.
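A small numerical sketch of that trade-off for the Gaussian example (single-sample threshold detector, made-up signal-to-noise values): pushing the missed-detection probability toward zero drags the false-alarm probability toward one.

    import math

    def Phi(x):                     # standard normal CDF
        return 0.5 * (1 + math.erf(x / math.sqrt(2)))

    mu, sigma = 1.0, 1.0            # signal amplitude and noise std (made up)

    # Declare "signal present / failure" when the observation x exceeds tau.
    for tau in [2.0, 1.0, 0.0, -1.0, -3.0, -6.0]:
        p_fa = 1 - Phi(tau / sigma)            # false alarm probability
        p_md = Phi((tau - mu) / sigma)         # missed detection probability
        print(f"tau={tau:5.1f}  P_FA={p_fa:.4f}  P_MD={p_md:.2e}")

    # P_MD -> 0 only as tau -> -inf, which drives P_FA -> 1: with Gaussian noise
    # you never reach P_MD = 0 at any useful false-alarm rate.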


You've done a great job of describing the problem. It is manifestly possible to drive missed detection rates very close to zero without too many false alarms because humans are capable of driving safely.


Ah, yes... you have convinced me - though humans can apply generalized intelligence to the problem, which I imagine is particularly useful in lots of special cases.


General intelligence is not needed. Illiterate people can drive. People who can't do math can drive.


Illiteracy is not incompatible with intelligence.


So we can get within epsilon for an undefined delta because humans can do something, although not always.

Right, that whole claim about engineering a system that always knows when it’s not working sounds rock solid to me. After all, we can build a human, right?


> we can build a human, right?

Not yet. But there's no reason to believe we won't be able to eventually.


You can if you dial the specificity down to 0.

i.e: Always report "I'm wrong."


Humans have moments like this too. I remember suddenly driving into dense fog on the freeway and not able to see more than 20 feet in front of me. I had a definite "failure mode" where I slowed down and freaked out because all my previous driving experience was "failing" me on how to avoid a potential accident.


> my armchair diagnosis

> my 1991 Ph.D. thesis on autonomous driving

Going to hide in a corner and stay quiet on HN until I forget about this comment!


:-)

Just in case you're interested:

https://vtechworks.lib.vt.edu/handle/10919/38880

and the associated conference paper:

http://www.flownet.com/gat/papers/aaai92.pdf

Most of the work was done on a Mac II with 8MB (that's megabytes, not gigabytes) of RAM.

The progress that has been made since those days boggles my mind.


Shameless plug! Haha, just kidding. And WOW! Autonomous driving so early on, JPL, Google..

I'm going to go hide in the corner as well.


The most impressive thing in retrospect is that all that work was done on 32-bit processors with ~16 MHz clocks and ~8MB of RAM. And those machines cost $5000 each in 1991 dollars. Today I can buy a machine with 100 times faster clock and 1000 times as much RAM for about 1% of the price. That's seven orders of magnitude price-performance increase in <30 years. It's truly mind-boggling.


So due to this significantly cheaper availability of computing power, one would think that today we would be making enormous progress every day.

My phone today is more powerful than the most powerful desktop back in 1995. And what do I use my phone for? Check email and read HN.


> one would think that today we would be making enormous progress every day

Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

> And what do I use my phone for? Check email and read HN.

What's wrong with that? Add wikipedia to that list and put a slightly different spin on it: today you can carry around the equivalent of an entire library in your pocket. That seems like progress to me. When I was a kid, you had to actually (imagine this) PHYSICALLY GO TO A LIBRARY in order to do any kind of research. If you didn't live close to a good library you were SOL. Today anyone can access more high-quality information than they can possibly consume from anywhere for a few hundred bucks. It's a revolution bigger than the invention of the printing press.

It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.


> Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

You are probably correct. In comparison with the past we are moving fast; it just feels like we could do so much more with what we have.

> It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.

This is the sad part. But on the flip side, it means there are so many opportunities for those who have the desire and drive to make things better.


I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish. The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

The same is happening with chatbots - more and more businesses think they can put a chatbot on their site and assume it'll handle everything when in fact it's meant to assist you rather than take over things for you.


> I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish

Apparently, we're not, as Waymo has shown.

> The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

Agreed, the "but that's not how autopilots work in airplanes" canned response is irrelevant.


Yep, it took Google/Waymo 8 years to get there, but Musk and Uber just want to move fast and crash into things. Musk wouldn't have been selling self-driving since Oct '16 if it weren't "any day now", or would he?


> Apparently, we're not, as Waymo has shown.

The only things Waymo has shown so far are a bunch of marketing videos and a few tightly controlled press rides.


> Apparently, we're not, as Waymo has shown.

Does Waymo drive at high speeds? All the videos I have seen are at low speeds.


> Apparently, we're not, as Waymo has shown.

Can you point me to a website or store where I can buy my fully autonomous Waymo car? I doubt Waymo is even on par with Tesla, considering Tesla is actually selling cars. Waymo is just vaporware at this point.


> Can you point me to a website or store where I can buy my fully autonomous Waymo car?

Not sure what you are mentioning: self-driving cars only exist when you can buy them personally?

Let me know where I can buy an M1 Abrams; if you don't, I'll claim they don't exist!

But actually, you can get a ride in a Waymo self-driving car already, with nobody in the driver's seat.


> Not sure what you are mentioning: self-driving cars only exist when you can buy them personally?

No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.


> No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.

I personally know people who have the option to hail a self driving car in Phoenix, AZ, just as they would an Uber.

But I guess they are just my imagination, right?


Waymo has shown no such thing. In controlled, limited circumstances, yes, but in complex, poorly mapped, degraded conditions, not yet.


>This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

Although serious, this is working as designed. Level II self-driving doesn't have automation that makes guarantees about recognizing scenarios it cannot handle. At level III, the driver can safely close their eyes until the car sounds the alarm that it needs emergency help. Audi plans to release a level III car next year, so we'll see how liability for accidents actually shakes out.

Unfortunately, level II is probably the most dangerous automation even with drivers who understand the limitations of the system. They still need constant vigilance to notice failures like this and react quickly enough to avoid collisions. Just imagine poorly marked or abrupt ramps and intersections where drivers hardly have enough time to react even when they're driving themselves. Add in the delay of noticing the computer is steering you into a wall and yanking the wheel, and some of these accident-prone areas turn into accident-likely areas.


> level II is probably the most dangerous automation

I'll go further and say that level II is worse than useless. It's "autonomy theatre" to borrow a phrase from the security world. It gives the appearance that the car is safely driving itself when in fact it is not, but this isn't evident until you crash.


I disagree. It’s not as though level II is a hard and fast definition - Tesla wants level 4 and claims the cars possess the hardware for that already. So you’d think their level II would still be smart enough to detect these problems to some degree. It’s not a fixed system but one they keep upgrading. I would expect this nominally level II system to have more smarts than a system that is designed never to exceed level II.


> Tesla wants level 4 and claims the cars possess the hardware for that already.

You can't possibly determine what hardware is required for Level 4 until you have proven a hardware/software combination, so that's just empty puffery, but even if it was true...

> So you’d think their level II would still be smart enough to detect these problems to some degree.

No, because the smartness of their system is about the software. They could have hardware sufficient to support Level 4 autonomy and better-than-human AI running software that only supports a less-than-adequate version of Level 2 autonomy. What their hardware could support (even if it was knowable) gives you no basis for belief about what their current software on it supports, except that it won't exceed the limits set by the hardware.


That's probably right, or very close to what is happening, but what I cannot understand is this: what about the damn black-and-yellow barrier board right in front of the car, closing in at 60 mph? Some sensor should have picked up on that.


This was my thought as well. Doesn't the system have the equivalent of reflexes that override the approved plan?

I've been skeptical of autonomous driving since it started to become a possibility. I spend a fair amount of time making corrections while driving that have nothing to do with what's happening in my immediate vicinity. If it can't handle an obstruction in the road, how will it handle sensing a collision down the road, or a deer that was sensed by the side of the road, or even just backing away from an aggressive driver in a large grouping of vehicles in front of you? I've had to slow down and/or move off on to the shoulder on two lane country roads because someone mistimed it when passing a vehicle. I don't have much faith in how this system would handle that. Not to mention handling an actual emergency failure like a tire blowing out.

I'm sure they will get there eventually, but it looks like they have conquered all the low-hanging fruit and somehow think that's enough. I'm now officially adding "staying away from vehicles studded with cameras and sensors" to my list of driving rules.


What if someone drives down the road with a lane-marking painter? I have witnessed that two or three times in my life; it used to just be funny, but now it can be deadly.


The car's will for survival should outweigh the lane-marker-following algorithm. Maintaining distance and trajectory relative to things with mass is more important than coloring within the lines.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That sounds highly dubious. Here's a hypothetical scenario: there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down and steer to the left. This will help you avoid a deadly collision as the person stumbles into the road.

Now let's take a self-driving car in the same scenario where, since it doesn't have general intelligence, it fails to distinguish the drunk person from a normal pedestrian and keeps going at the same speed and distance from the sidewalk as usual. How, in this scenario, does the vehicle 100% know that it has failed (like you say is always possible)?


An even more extreme example: suppose someone on the sidewalk suddenly whips out a bazooka and shoots it at you. Does your failure to anticipate this contingency count as a failure?

"Failure" must be defined with respect to a particular model. If you're driving in the United States, you're probably not worried about bazookas, and being hit by one is not a failure, it's just shit happening, which it sometimes does. (By way of contrast, if you're driving in Kabul then you may very well be concerned with bazookas.) Whether or not you want to worry about drunk pedestrians and avoid them at all possible costs is a design decision. But if you really want to, you can (at the possible cost of having to drive very, very slowly).

But no reasonable person could deny that avoiding collisions with stationary obstacles is a requirement for any reasonable autonomous driving system.


Way to dodge the question. And how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time?

Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case. People do this all the time when driving -- be it a drunk guy on the sidewalk, a small kid a tad bit too unstable when riding a bike by the roadside, kids playing catch next to the road and not paying attention, etc. Understanding these situations is crucial for self-driving if we want to beat the 1 fatality per 100M miles that we have with human drivers. For such scenarios, please explain how the AI can always know when it has failed to anticipate a problem that a normal human driver can.


> how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time

You raised this scenario:

> there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down...

I was just responding to that.

> Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case.

I never said it was. All I said was that "failure must be defined with respect to some model." If you really want to anticipate every contingency then you have to take into account some very unlikely possibilities, like bazookas or (to choose a slightly more plausible example) having someone hiding behind the parked car that you are driving past and jumping out just at the wrong moment.

The kind of "failure" that I'm talking about is not a failure to anticipate all possible contingencies, but a failure to act correctly given your design goals and the information you have at your disposal. Hitting someone who jumps out at you from behind a parked car, or failing to avoid a bazooka attack, may or may not be a failure depending on your design criteria. But the situation in the OP video was not a corner case. Steering into a static barrier at freeway speeds is just very clearly not the right answer under any reasonable design criteria for an autonomous vehicle.

My claim is simply that given a set of design criteria, you cannot in general build a system that never fails according to those criteria, but you can build a system that, if it fails, knows that it has failed. I further claim that this is useful because you can then put a layer on top of this failure-detection mechanism that can recover from point failures, and so increase the overall system reliability. If you really want to know the details, go read the thesis or the paper.

These are not particularly deep or revolutionary claims. If you think they are, then you haven't understood them. These are really just codifications of some engineering common-sense. Back in 1991, applying this common sense to autonomous robots was new. In 2018 it should be standard practice, but apparently it's not.


You don't even need to go to the level of a drunk person. Imagine driving down a suburban street and a small child darts out onto the road chasing after a ball.


Some local knowledge: the blocked-off lane to the left side is the same lane used when the express lanes are in their opposite configuration, in the morning. In the morning, with the express lanes inbound towards the city, that lane is a left-hand exit from the main flow of Interstate 5 onto the dedicated express lanes. There should be a sufficient amount of GPS data gathered from hundreds of other Teslas, and camera data, that shows the same lane from the opposite perspective.


> This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

That’s exactly my experience as a driver:

You learn to anticipate that the ‘autopilot’ will disengage or otherwise fail. I have been good enough at this, obviously, but it is sometimes frightening how close you get to a dangerous situation …


>Here is my armchair diagnosis:

Odd how your "armchair diagnosis" matches perfectly with the top-ranked comment in the Reddit thread that was posted 1 day ago.

> courtlandreOwner, 815 points, 1 day ago:

> It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


The comment above is saying the split looks like a diamond lane marker indicating an HOV/electric car lane, so the car is centering itself on the diamond. That's a completely different idea from the Reddit comment about the car centering itself between the right and left white lines. My guess is that both inputs contributed to the car "thinking" it was following a lane.


The conclusions are exactly the same: the autopilot was trying to center the car between lines.

> My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.

> It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


Is it really odd that two people could reach the same conclusion when presented with the same video?


Of course not. But it is odd when the top rated comment directly below the video is essentially the armchair theory that was proposed.


Again, how is it odd that everyone agrees with the obvious explanation?


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

Wouldn't the system doing the checking be considered an autonomous system...that could also fail?


No, because the monitoring system doesn't interact with an environment, so its job is much simpler. This is actually the reason that cognizant failure works.


Thank you for the concise answer. I was just about to ask if you have a blog, but found your website in your HN profile. I know what I am doing for the rest of the night :P PS: Funny that we've both written a double ratchet implementation in JS :)


I don't mean to call the OP a liar, but does anyone know if there is any evidence whatsoever that this video was taken in a Tesla with autopilot engaged? I just want to be sure about it before I form a judgment against Tesla.


More basic lane-assist systems in other cars already solve this problem. 100% Tesla's fault. My car will disable lane assist the moment the lane gets too wide.


Sounds like an easy fix then. After that we move on to the next edge case and so on ad infinitum.


If you are aware when you are wrong... you would not be wrong.

If you are referring to assigning confidences/probabilities to decisions, this is standard in ML.


> If you are referring to assigning confidences/probabilities to decisions,

There's a little more to it than that but yeah, pretty much.

> this is standard in ML.

Yes, I know. But not, apparently, standard in embedded autonomous systems.


I don’t know you, sir, but Tesla should hire you!


Estimating errors or confidence level reliably is still one of the biggest unsolved problems in AI.


In other words, an old ambiguous-parsing problem.

It might be fixable in software. I'm a bit annoyed at Tesla for its over-reliance on painted lines. They fade, they can be covered, they can be outdated...

That old fail video of an SDV stuck inside a circle is not funny anymore.


> SDV stuck inside a circle

Off topic: I never understood why this video was discussed that much, especially in order to blame SDVs. It's an example of pointless road markings and a perfectly behaving vehicle. It's like driving into a one way street that turns out to have no exit. The driver can't be blamed.


I know, and partly agree. Still, when you have something that hints at leaping into the future, simplistic automaton behaviour like this brings it all back down to earth.

We'd expect at least 'cycle' detection ;)



