Another Tesla on autopilot steers towards a barrier (reddit.com)
433 points by walrus01 on April 6, 2018 | 465 comments



Here is my armchair diagnosis: right before the car veers towards the barrier, it drives through a stretch of road where the only visible lane marker is on the left. The right lane marker comes into view at about the point where the lane starts to widen out for the lane split. The lines that will become the right marker of the left lane and the left marker of the right lane appear right next to the van in front of the Tesla, and at that point they resemble the diamond lane markers painted in the middle of the split lanes. My guess is that the autopilot mistook these lines for a diamond lane marker and steered towards them, thinking it was centering itself in the lane.

If this theory turns out to be correct then Tesla is in deep trouble because this would be a very elementary mistake. The system should have known that the lanes split at this point, noticed that the distance between what it thought was the diamond lane marker and the right lane line (which was clearly visible) was wrong, and at least sounded an alarm, if not actively braked until it had a better solution.

This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong. I dubbed this property "cognizant failure" in my 1991 Ph.D. thesis on autonomous driving, but no one seems to have adopted it. It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed. Tesla seems to have done neither.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

This is a very good point: just like a human driver should slow down if they can't observe the road ahead well enough, an AI should slow down when it's not confident enough of its surroundings. This is probably very difficult to do, and I'm skeptical about your claim that an AI can be engineered to always detect its own failures. Further, I naively believe that Tesla is already doing a lot to detect conflicting inputs and other kinds of failures. Maybe being careful enough would prevent autopilot from working at all, so compromises have been made?

I probably don't understand AIs well enough to understand how they can be engineered to (almost) always recognize their own failures. But if a simple explanation exists, I'd love to hear it.


> if a simple explanation exists, I'd love to hear it

Basically it's a matter of having multiple redundant sensors and sounding the alarm if they don't all agree on what is going on, and also checking if what they think is going on is in the realm of reasonable possibility based on some a priori model (e.g. if suddenly all of your sensors tell you that you are 100 miles from where you were one second ago, that's probably a mistake even if all the sensors agree). That's a serious oversimplification, but that's the gist.
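To make that gist concrete, here's a toy Python sketch (made-up sensors, thresholds and units, nothing resembling a production stack): fuse redundant position estimates, alarm when they disagree, and alarm again when the fused answer violates a simple a-priori motion model.

    import numpy as np

    def plausible(prev_pos, new_pos, dt, max_speed=60.0):
        """Reject estimates that imply physically impossible motion (units: m, s, m/s)."""
        return np.linalg.norm(new_pos - prev_pos) / dt <= max_speed

    def fuse(estimates, prev_pos, dt, max_spread=1.0):
        """Average redundant position estimates; alarm on disagreement or on
        violations of the a-priori motion model."""
        est = np.asarray(estimates, dtype=float)
        if np.any(est.max(axis=0) - est.min(axis=0) > max_spread):
            return None, "ALARM: sensors disagree"
        fused = est.mean(axis=0)
        if not plausible(prev_pos, fused, dt):
            return None, "ALARM: estimate violates motion model"
        return fused, "ok"

    prev = np.array([0.0, 0.0])
    # GPS, camera odometry, and wheel odometry roughly agree -> fused position
    print(fuse([[1.0, 0.1], [1.1, 0.0], [0.9, 0.05]], prev, dt=0.1))
    # All sensors agree we jumped ~100 miles in 0.1 s -> still an alarm
    print(fuse([[160000.0, 0.0], [160000.1, 0.0], [159999.9, 0.0]], prev, dt=0.1))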


That's exactly how a lot of commercial aircraft systems work: they have three of each critical system, and if two agree and one disagrees, the odd one out is ignored.

But it is more complicated than that when you're talking about algorithms and complex systems. If you had three of the exact same system, they'd likely all make the same mistake, so you'd gain no safety improvement (except against actual malfunctions, as opposed to logical limitations).

I would like to see auto-braking decoupled from autopilot completely, so that if autopilot drives you into a wall at least the brakes still slow you down.


https://en.wikipedia.org/wiki/Airborne_collision_avoidance_s...

I'm not an expert, but it seems like for airplanes there are standards that manufacturers need to abide by for avoidance systems, which would mean testing and certification by an independent association before deployment. With car manufacturers doing OTA updates there's really no guarantee it's been tested; you'd have to trust your life to the QA process of each manufacturer. Not only that of the car you're driving, but of the car next to you!


This has been mentioned a lot of times lately (the auto-braking). When you think about it, you realize there are a million moments during normal driving where you have an obstacle ahead: other cars; lane dividers like these that you might _have_ to pass closely, or at least have your car pointed at for a few seconds; tight curves on a walled road; high curbs; buildings on the edge of the road; etc. It's not that simple, and I bet a lot of this is already taken into account by the software.


My understanding is that auto-braking during autonomous driving doesn't normally react to stationary objects like barriers. Otherwise it would brake for things like cardboard boxes, plastic bags, and other debris that often makes its way onto the road. Instead, the autonomous driving AI puts a lot of trust in its lane guidance.


In what world would you not want your car to brake for a cardboard box? You don’t know what is inside. It could be empty, or could be full of nails. Or a cat.

Regardless, it should be trivial to have a different behavior for a moving object that enters the road versus a rigid object that has not been observed moving.
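For what it's worth, a toy sketch of the distinction being suggested (purely illustrative; the threshold and the tracking are invented, and real perception stacks are far more involved):

    def has_been_observed_moving(track, min_displacement=0.5):
        """track: successive (x, y) observations of one tracked object, in metres."""
        xs, ys = zip(*track)
        return (max(xs) - min(xs) > min_displacement) or (max(ys) - min(ys) > min_displacement)

    # A box that just blew into the lane vs. a barrier that has never moved:
    print(has_been_observed_moving([(10.0, 2.0), (9.4, 1.1), (8.9, 0.3)]))    # True
    print(has_been_observed_moving([(50.0, 0.0), (50.0, 0.0), (50.0, 0.0)]))  # False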


Sometimes it's just a piece of paper or something similarly harmless that happens to be positioned on the road in such a way that it appears to be an obstacle.


If a system can't reliably differentiate between a flat object lying on the road, an object fluttering rapidly in the air, and a stationary object that is protruding from the road, it has no right to be in control of a car.


Will it run over an accident victim or an animal, or slow down and ask the human?


As far as I know, the Space Shuttle had redundant guidance computers running the primary flight software, plus a backup flight computer whose software was developed by an independent team. They were given the same requirements and developed their software independently.


I'd say that Kalman filters are perfect for this-- they optimally blend estimates from multiple sources, including a model of the system dynamics (where you think you're going to be next based on where you were before, i.e. "I'm not going to suddenly jump 100 miles away"), and all the various sensor inputs ("GPS says I'm here, cameras say I'm here, wheel distance says I'm here").

On top of that, Kalman innately tracks the uncertainty of its combined estimate. So you can simply look at the Kalman uncertainty covariance and decide if it is too much for the speed you are going.
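For the curious, a minimal 1-D sketch of that idea (toy constant-velocity model and made-up noise numbers, certainly not Tesla's actual estimator): the filter carries its own covariance, and the normalized innovation gives a cheap "something is off" signal to gate on.

    import numpy as np

    dt = 0.1
    F = np.array([[1, dt], [0, 1]])    # constant-velocity model: [position, velocity]
    H = np.array([[1, 0]])             # we only measure position
    Q = np.diag([0.01, 0.01])          # process noise
    R = np.array([[0.5]])              # measurement noise

    x = np.array([[0.0], [20.0]])      # start at 0 m, moving at 20 m/s
    P = np.eye(2)                      # state covariance: the filter's own uncertainty

    for z in [2.1, 4.0, 5.9, 30.0]:    # the last measurement is a wild outlier
        x = F @ x                      # predict
        P = F @ P @ F.T + Q
        y = np.array([[z]]) - H @ x    # innovation: measurement vs. prediction
        S = H @ P @ H.T + R            # innovation covariance
        nis = (y.T @ np.linalg.inv(S) @ y).item()   # normalized innovation squared
        if nis > 9.0:                  # ~3-sigma gate: sensor and model disagree
            print(f"z={z}: ALARM, innovation too large (NIS={nis:.1f})")
            continue                   # don't fold the suspect measurement in
        K = P @ H.T @ np.linalg.inv(S) # Kalman gain
        x = x + K @ y                  # update
        P = (np.eye(2) - K @ H) @ P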

I really wonder if Tesla is doing that...


I feel that’s the problem. Techniques like SVM can provide reasonable definitions of confidence. Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??


> Reinforcement Learning.. DNN... the mechanisms behind autonomous cars... are they able to do this??

Sorta. There are ways to extract kinds of uncertainty and confidence from NNs: for example, Gal's dropout trick, where you train with dropout and then at runtime use an 'ensemble' of multiple dropout-ed versions of your model, and the set of predictions gives a quasi-Bayesian posterior distribution for the predictions. Small NNs can be trained directly via HMC, and there are arguments that constant-learning-rate SGD 'really' implements Bayesian inference and that an ensemble of checkpoints yields an approximation of the posterior, etc. You can also train RL NNs with an action that shortcuts computation and kicks the problem out to an oracle in exchange for a penalty, which trains them to specialize and 'know what they don't know', so they choose to call the oracle when they're insufficiently sure (this can be done for computational savings if the main NN is a small fast simple one and the oracle is a much bigger slower NN, or for safety if you imagine the oracle is a human or some fallback mechanism like halting).
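To make the dropout trick concrete, here's a bare-bones PyTorch sketch (untrained toy model, arbitrary sizes, just the mechanics): keep dropout stochastic at inference time and use the spread of repeated forward passes as a rough uncertainty signal.

    import torch
    import torch.nn as nn

    model = nn.Sequential(                      # toy regression net with dropout
        nn.Linear(8, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
        nn.Linear(64, 1),
    )

    def predict_with_uncertainty(model, x, n_samples=50):
        model.train()                           # keep the Dropout layers stochastic
        with torch.no_grad():
            samples = torch.stack([model(x) for _ in range(n_samples)])
        return samples.mean(dim=0), samples.std(dim=0)

    x = torch.randn(1, 8)                       # one fake feature vector
    mean, std = predict_with_uncertainty(model, x)
    print(mean.item(), std.item())              # large std -> "not sure" -> hand off / fall back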

I have some cites on these sorts of things in https://www.gwern.net/Tool-AI and you could also look at the relevant tags https://www.reddit.com/r/reinforcementlearning/search?q=flai... and https://www.reddit.com/r/reinforcementlearning/search?q=flai...


Yes.

In particular, it's possible to learn the variance of the return using TD-methods with the same computational complexity as learning the expected value (the value function). See [0] for how to do it via the squared TD-error, or [1] for how to estimate it via the second moment of the return, and my own notes (soon to be published and expanded for my thesis) here [2].

It turns out that identifying states with high variance is a great way of locating model error -- most real-world environments are fairly deterministic, so states with high variance tend to be "aliased" combinations of different states with wildly different outcomes. You can use this to improve your agent by either allocating more representation power to those states to differentiate between very similar ones, or having your agent account for variance when choosing its policy. For example, Tesla could identify when variance spikes in its model and trigger an alert to the user that they may need to take over.
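Here's a tabular toy sketch of the second-moment flavour of this (my simplification, not the exact algorithms in the papers linked below): learn V(s) ~ E[G] and M(s) ~ E[G^2] with TD-style updates, then read off Var(s) = M(s) - V(s)^2. The "ambiguous" state, which aliases two very different outcomes, shows up with a large variance.

    import random

    gamma, alpha = 0.9, 0.1
    V = {"safe": 0.0, "ambiguous": 0.0, "terminal": 0.0}   # E[G]
    M = {"safe": 0.0, "ambiguous": 0.0, "terminal": 0.0}   # E[G^2]

    def step(s):
        # Toy environment: the "ambiguous" state sometimes pays off, sometimes crashes.
        if s == "safe":
            return 1.0, "ambiguous"
        return (10.0 if random.random() < 0.5 else -10.0), "terminal"

    for _ in range(5000):
        s = "safe"
        while s != "terminal":
            r, s2 = step(s)
            V[s] += alpha * (r + gamma * V[s2] - V[s])
            # Second moment of the return: G^2 = r^2 + 2*gamma*r*G' + gamma^2*G'^2
            M[s] += alpha * (r**2 + 2 * gamma * r * V[s2] + gamma**2 * M[s2] - M[s])
            s = s2

    for s in ["safe", "ambiguous"]:
        print(s, "V=%.2f" % V[s], "Var=%.2f" % (M[s] - V[s] ** 2))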

Additionally, there's work by Bellemare [3] for estimating the distribution of the return, which allows for all sorts of statistical techniques for quantifying confidence, risk, or uncertainty.
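And a tiny illustration of the distributional point (the atoms and probabilities here are made up, standing in for what a C51-style head would output): once you have a categorical return distribution, risk and uncertainty measures fall out directly.

    import numpy as np

    atoms = np.linspace(-10, 10, 51)                      # support of the return (C51-style)
    probs = np.exp(-0.5 * ((atoms - 2.0) / 4.0) ** 2)     # mass around a "good" outcome
    probs[:5] += 0.3                                      # plus some mass on very bad outcomes
    probs /= probs.sum()

    mean = np.sum(probs * atoms)
    var = np.sum(probs * (atoms - mean) ** 2)
    q10 = atoms[np.searchsorted(np.cumsum(probs), 0.10)]  # rough 10% quantile of the return

    print(f"E[G]={mean:.2f}  Var[G]={var:.2f}  10%-quantile={q10:.2f}")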

---

0. https://arxiv.org/abs/1801.08287

1. https://arxiv.org/abs/1607.00446

2. http://rl.ai/posts/fun-with-the-td-error-part-ii.html

3. https://arxiv.org/abs/1707.06887


Yes, there is an active branch of deep learning (Bayesian deep learning) trying to build uncertainty estimates into neural networks.

Older ideas are http://mlg.eng.cam.ac.uk/yarin/blog_2248.html or http://papers.nips.cc/paper/7219-simple-and-scalable-predict...

Basically, Bayesian neural networks are able to model confidence but are not practical in current real-world scenarios. Thus, lots of methods rely on approximating Bayesian inference.


not an expert, but aren't the output nodes thresholded to make a decision? How far you are from the threshold might be interpretable as confidence possibly?


For a single perceptron, not for the network as a whole.


Yep that's exactly what you would do.

However, in practice, this usually is just not a good indicator of confidence. NNs are notoriously overconfident.
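A quick illustration of that overconfidence, plus the usual temperature-scaling band-aid (toy logits; in practice the temperature is fitted on held-out data):

    import torch
    import torch.nn.functional as F

    logits = torch.tensor([[8.0, 1.0, 0.5]])            # a typically over-sharp output
    print(F.softmax(logits, dim=1).max().item())        # ~0.999 "confidence"

    T = 3.0                                             # temperature, fitted on held-out data
    print(F.softmax(logits / T, dim=1).max().item())    # ~0.85: noticeably less cocksure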


> I dubbed this property "cognizant failure"

I like that term. When I was involved in the medical instrumentation field, we had a similar concern: it was possible for an instrument to produce a measurement, e.g., the concentration of HIV in blood serum, that was completely incorrect, but plausible since it's within the expected clinical range. This is the worst-case scenario: a result that is wrong, but looks OK. This could lead to the patient receiving the wrong treatment or no treatment at all.

As much as possible, we had to be able to detect when we produced an incorrect measurement.

This meant that all the steps during the analyte's processing were monitored. If one of those steps went out of its expected range we could use that knowledge to inform the final result. So the clinician would get a test report that basically says, "here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."
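A toy illustration of that pattern (invented numbers and thresholds, nothing like the real instrument code): carry the process-monitoring flags alongside the measurement instead of silently reporting a plausible-looking number.

    def run_assay(raw_value, incubation_temp_c, temp_range=(36.5, 37.5)):
        """Return the measurement together with any process-monitoring flags."""
        flags = []
        lo, hi = temp_range
        if not (lo <= incubation_temp_c <= hi):
            flags.append(f"incubation temperature {incubation_temp_c} C outside {lo}-{hi} C")
        return {"result": raw_value, "flags": flags}

    print(run_assay(1250, incubation_temp_c=37.0))   # clean result
    print(run_assay(1250, incubation_temp_c=38.1))   # same number, but flagged for the clinician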

Like most software, the "happy path" was easy. The bulk of the work we did on those instruments was oriented towards the goal of detecting when something went wrong and either recovering in a clinically safe manner or refusing to continue.

With all the research into safety-critical systems over decades, it's hard to see how Tesla could not be aware of the standard practices.


"here's the HIV level, but the incubation temperature went slightly out of range when I was processing it, so use caution interpreting it."

There is no "slightly out of range". It's either in the range or outside it. Valid or invalid, when it comes to critical tests like these.

If the temperature deviation is outside of the acceptable deviation range then that's a system fault and the result should have been considered invalid.

Back in the day there was much less regulatory oversight and products like that could slip through the cracks, resulting in deaths, missed diagnoses, etc.

The same with Tesla's AP: it's either 100% confident in the situation or not. If the car is not confident in what's happening, it should shut down. If that happens too often then the AP feature should be removed / completely disabled.

How many more people have to get into accidents? I know, if Musk's mom (knock on wood) was a victim of this feature then things would be taken more seriously.


It's so nice to see people critique products they know nothing about with absolutely no useful context!

You do realize that I gave a much simplified view of the situation as this is a web forum discussing a related subject, not an actual design review of the instrument, right?

For any process I can set multiple "acceptable ranges" depending on what I want to accomplish. There can be a "reject" range, an "ok, but warning" range, a "perfect, no problem" range, or a "machine must be broken" range.

Everything is context dependent; nothing is absolute.


> I like that term.

Thanks!


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That is an extremely surprising result. How is that possible? Are you really claiming that any control system can be engineered to detect that it has failed in any possible manner? What's an example of an actual real-world system like that?


> Are you really claiming that any control system can be engineered to detect that is has failed in any possible manner?

No, of course not. But it is possible to reduce the probability of non-cognizant failure to arbitrarily low levels -- at the expense of cost and the possibility of having a system that is too conservative to do anything useful.


I totally agree with your analysis. What is more worrying is that even following the rest of the traffic would have solved the problem. If you are the only car doing something then there is a good chance you are doing something wrong.

Also: the lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane. If two systems give conflicting data some kind of alarm should be given to the driver.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

GPS is not accurate enough to reliably pinpoint your location to a particular lane. Even WAAS (https://en.wikipedia.org/wiki/Wide_Area_Augmentation_System) can have up to 25 feet of error. Basic GPS is less accurate than that.

In fact, it's possible that GPS error was a contributing factor here but there's no way to know that from the video.


RTK [0] systems definitely can have centimetre-level accuracy, though they require a fixed base station within 10-15 miles or so to broadcast corrections. It would also require roads to be mapped to a high level of accuracy.

I have seen self-driving test cars in Silicon Valley (frequently, especially in the last year or so) using these types of systems, so they are at least being tested. I've also heard discussion of putting RTK base stations on cell-phone towers to provide wide area coverage, but I'm not sure if much effort has been put into that. I do know vast areas of the agricultural Midwest are covered in RTK networks -- it's used heavily for auto-steering control in agriculture.

[0] https://en.wikipedia.org/wiki/Real_Time_Kinematic


I've heard on and off for many years (15 years?) about ad-hoc wireless networks for cars. Whether it's car to car, or car to devices on the ground (lane markings, stop lights), so the car knows where it is in relation to the road and other cars - and it would know if the car ahead is going to brake or slow down.

Now cars are relying on cameras and lidar to figure things out. What happened to putting sensors on the road to broadcast what/where the road is? Is that out of the question now because of cost?


I can see lots of potential problems with that; for instance, you'd have to rely on the sensor data of that other car and you'd have to believe that the other car is telling you the truth. Lots of griefer potential there.


I drive a lot with my GPS (TomTom) on and it rarely gets the lane I'm in wrong; usually that only happens in construction zones.

So even if the error can be large in practice it often works very well.

It would be very interesting to see the input data streams these self driving systems rely on in case of errors, and in the case of accidents there might even be a legal reason to obtain them (for liability purposes).


> it often works very well

Well, yeah, it's not like autopilot is driving cars into barriers every day. But I don't think "often works very well" (and then it kills you) is good enough for most people. It's certainly not good enough for me.


According to the Reddit poster:

> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.

> Also, start paying attention as you drive. Centering yourself in the lane is not always a good idea. You'll run into parked cars all over, and lots of times before there is an obstacle the lane gets wider. In general, my feel is that you're better off to hug one of the markings.

It kind of does sound like autopilot is steering vehicles into barriers every day, or it would be if drivers weren't being extra vigilant: https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil...

Seems like we need a study to assess whether the kind of protections AP offers outweighs the extra challenges it adds to a daily drive.


That's absolutely true; however, I did not mean it to take the place of whatever they have running right now. I meant for it to be an input into a system that can detect whether there is a potential problem and the driver should be alerted.

Because if I interpret the video correctly, either the car was partially in the wrong lane before the near accident or it was about to steer itself into a situation like that. And that means it is time to hand off to the driver, because the autopilot has either already made an error or is about to make one.

Either way the autopilot should have realized the situation was beyond its abilities, and when GPS and camera inputs conflict (which I assume was the case) that's an excellent moment for a hand-off.


Don't navigation systems 'snap' to the most probable lane you are driving in based on general direction instead of your exact GPS coordinates?

A good example of this, I think, is driving in tunnels: there is no GPS info there, but the navigation still shows you following the (curved) road.


Good nav systems will fuse whatever inputs they can get including GPS, INS, compass, etc. in order to figure out and ‘snap’ to what road you’re on. Snapping to an individual lane is still AFAIK beyond these systems’ sensors’ capability. You will need an additional input like a camera for this that looks for lane markings.


Yes, they do that, and they rarely get that wrong.

Also, the better ones have inertial backup & flux gate for when GPS is unavailable. Inertial backup could also help to detect the signatures of lane changes and turns.


Try driving near a place with a lot of roads that are close together, like an airport. You’ll quickly see that snapping doesn’t really work, especially if you deviate from the route the GPS is trying to take you on.


I do this all the time and it works remarkably well. Amsterdam and the roads around it are pretty dense, up to 6 lanes each way in quite a few places with all kinds of flyovers and multi-lane exits.


At what speeds? I'm in NYC, similar dense road systems. At a reasonable speed, snapping still works because it knows where you came from. In stop stop stop go traffic combined with lane weaving and ramp merging I've found it is not hard to trick the gps into thinking you're on a parallel access road or unrelated ramp for a little while.

It's possible your highways are newer or less congested than ours?


Counterpoint: my Lenovo tablet running Google Maps often seems to get confused about lanes. E.g. it will think I'm taking an exit ramp when I am still on the main road. Once the ramp and the road diverge enough it will recover, but not before going into its "rerouting" mode and sometimes even starting directions to get back on the road I'm already on.


That sounds pretty bad if that's the thing you rely on to get you where you're going. Here in NL interchanges are often complex enough that without the navigator picking the right lane you'd end up taking the wrong exit.

I don't use Google maps for driving but I know plenty of people that do and having been in the car with them on occasion has made me even happier with my dedicated navigation system. The voice prompts are super confusing with a lot of irrelevant information (and usually totally mis-pronounced) and now you are supplying even more data points that the actual navigation itself isn't as good as it could be.

I suspect this is a by-product of doing a lot of stuff rather than just doing one thing and doing that well.


Here in D.C. Google maps will regularly get confused about whether you're on the main road or a ramp; whether you're on an overhead highway or the street underneath it, whether you're on a main road or the access road next to it.


My FIATChrysler Uconnect system will sometimes think I'm on the road above when driving under a bridge.

Sometimes it'll shout "The speed limit is 35 miles an hour!" when I go under a bridge carrying a 35MPH road, even though the road I'm on has a limit of 65.


Is it using some other localization technique to complement the GPS? GPS at best can give accuracy within 3 meters (if I remember right). To make it more accurate, laser/radar beams are bounced off landmarks whose precise coordinates are known, and from those measurements the pinpoint coordinates of the ego vehicle are determined.


The highest resolution GPS receivers have two changes:

1. Base stations at known locations broadcast corrections for factors like the effects of the ionosphere and troposphere, and inaccurate satellite ephemerides. If you have multiple base stations, you can interpolate between them (Trimble VRS Now [1] does this, for example)

2. Precise measurements by combining code phase (~300m wavelength) and carrier phase on two frequencies (~20cm wavelength), plus the beat frequency of the two frequencies (~80cm)

With these combined, and good visibility of the sky, centimetre-level accuracy is possible.
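A quick back-of-the-envelope check on those wavelengths, using the standard GPS L1/L2 carrier frequencies and the C/A code chip rate (just arithmetic, not a receiver model):

    c = 299_792_458.0                 # speed of light, m/s
    f1, f2 = 1575.42e6, 1227.60e6     # GPS L1 and L2 carrier frequencies, Hz
    chip_rate = 1.023e6               # C/A code chipping rate, Hz

    print(c / chip_rate)              # ~293 m  -> the "~300 m" code wavelength
    print(c / f1)                     # ~0.19 m -> the "~20 cm" carrier wavelength
    print(c / f2)                     # ~0.24 m
    print(c / (f1 - f2))              # ~0.86 m -> the "~80 cm" widelane beat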

Autonomous vehicles will often combine this with inertial measurement [2] which offers a faster data rate, and works in tunnels.

Many people also expect autonomous vehicles to track lane markings, in combination with a detailed map that says whether the lane markings are accurate.

[1] http://www.trimble.com/positioning-services/vrs-now [2] http://www.oxts.com/products/rt3000/


Inertial, flux gate compass.


"GPS is not accurate enough to reliably pinpoint your location to a particular lane."

My phone has GPS accurate to within 1 foot. I use it for mining location all the time. It uses differential GPS plus Inertial sensors.


When I think of self-driving car technology, I think of all of these inputs (maps, cameras, GPS, etc.) being fed into a world model from which inferences are drawn.[1] Here, the car could have known from cameras and maps that there was a lane split, and should have been able to detect that the line following was giving a conflicting inference compared to those other sources. The question is: does Tesla actually do that? Because if not, it's not really self-driving car technology. It's not a glitch in a product that's 90% of the way there; it's a fundamentally more basic technology.

[1] See https://www.ri.cmu.edu/pub_files/2009/6/aimag2009_urmson.pdf at 21-23.


The whole problem, it seems to me, is that getting a 90% solution for a self-driving vehicle is no longer hard. The remaining 10% is super hard, and the degree to which present-day solutions tend to get it wrong is worrisome enough that if the quality doesn't jump up very quickly, the whole self-driving car thing could end up in an AI winter of its own making. And that would be a real loss.

This is not a field where 'move fast and break stuff' is a mantra worth anything at all, the stuff that is broken are people's lives and as such you'd expect a far more conservative attitude, akin to what we see in Aerospace and medical equipment software development practices.

But instead barely tested software gets released via over-the-air updates and it's anybody's guess what you'll be running the next time you turn the key.

I agree with you that the software should have been able to detect that something was wrong either way: either it was already halfway in the wrong lane, or it was heading there, and that situation should not have passed without the car at least alerting the driver.

And from what we have seen in the video it just gave one inferred situation priority over the rest, and that particular one happened to be dead wrong about its interpretation of the model.


I agree with all you've said.

One of my concerns with self-driving systems is that they don't have a good model of the world they are operating in. Yes, sure, they build up 3-D models using multiple sensor types, and react to that accordingly, oftentimes with faster reflexes than a human.

However, consider this situation with pedestrians A and B, while driving down a street, approaching an intersection.

Pedestrian A, on the edge of the sidewalk, is faced away from the street, looking at his phone. The chances of him suddenly stepping out in front of my car are exactly 0.0%.

Pedestrian B, on the edge of the sidewalk, is leaning slightly towards the street, and is looking anxiously down the street (the wrong way). The chances of B suddenly stepping out is high, and the chances of B not seeing my car are very high (because B is being stupid/careless, or else is used to UK traffic patterns).

I, as a human, will react to those situations differently, preemptively slowing down if I recognize a situation like with Pedestrian B. Autonomous driving systems will treat both situations exactly the same.

From everything I've read about so far, current driving systems have no way of distinguishing between those two specific situations, or dealing with the myriad of other similar issues that can occur on our streets.


Another similar situation where autopilot may fail: it sees a ball rolling across the street. Will it slow down, because a child may be chasing the ball?


I used to contemplate similar situations. For example, a stationary person on a bike at the top of a steep driveway that's perpendicular to the vehicle's driving direction. But after seeing the video of the Arizona incident I stopped. It's too soon to consider these corner cases when some systems on the public road can't even slow down for a person in front of the fucking vehicle.


That's because Waymo is playing chess while Uber plays checkers. For Waymo, these corner cases probably matter. Uber is not actually pursuing self-driving tech; they are pretending to do so to defraud investors. Their troubles therefore aren't representative.


The saying in those fields is "the rules are written in blood". The Therac-25 (a medical device) is taught in practically every CS Ethics class for having changed software development practices for such things.

The cynic in me thinks self-driving will require the same, the rules and conservative practices will only come after a bunch of highly publicized, avoidable failures that leave their own trail of blood.


I wonder how the public would have responded if the person killed in that Uber accident a while ago had been a kid on a bicycle, and what the response will be when/if a Tesla on autopilot plows into a school bus (stationary objects partially in a lane seem to be a situation they have problems with).


If there's anything left to be conservative about. The current climate of "drive fast and kill people" might wipe out the field via PR.


This is where my chief complaint with Musk comes in. His whole bravado as a salesman is probably having a longer-term impact on AI for cars. He keeps pushing this as an autopilot when it's not.


Regardless of the lane recognition, it has to at least detect the obstacle in front of it; that's something a basic collision avoidance system can do.


> What is more worrying is that even following the rest of the traffic would have solved the problem.

That's what makes this example bizarre to me. I had thought that AutoPilot's ideal situation was having a moving vehicle in front of it. For example, AP does not have the ability to react to traffic lights, but can kind of hack it by following the pace of the vehicle ahead of it (assuming the vehicle doesn't run a red light):

https://www.youtube.com/watch?v=EXs4qZlDWbQ


That's a heck of an ass-u-mption. Running the red light as the n-th car exponentially increases the probability you'll get into a collision.


> If you are the only car doing something then there is a good chance you are doing something wrong.

I’m surprised self-driving systems don’t do this (do they?). One or more vehicles in front of you that have successfully navigated an area you are now entering is a powerful data point. I’m not saying follow a car off a cliff, but one would think the behavior of followed vehicles should be somehow fused into the car’s pathfinding.


"Follow the leader" is a main ingredient in flocking behavior in birds, who rarely if ever collide in flight.


> If you are the only car doing something then there is a good chance you are doing something wrong.

Following this principle would probably result in a lot of people angry that their autopilot had gotten them a speeding ticket.


> lane markings are confusing but GPS and inertial readings should have clearly indicated that that is not a lane

Aside from GPS' accuracy as mentioned in other replies, also take into account the navigation's map material. The individual lanes are probably not individually tracked on the map; rather, there is a single track per road with metadata specifying the number of lanes, among other features. So even if GPS had provided very accurate position readings, the map source material might not even match that level of detail.


Pretty sure I heard an audible warning in the vid.


No, that's the sound you hear when you override and take over from the autopilot.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed

This seems to me to be a clearly incorrect (and self-contradictory) claim. It entirely depends on your definition of failure.

Your analysis seems fine. The big problem is that the "autonomous" driver is using one signal (where are the lines on the edge of the road?) to the near exclusion of all others (is there a large stationary solid object in front of me?)

Maybe Tesla should have hired George Hotz (sp?) if only to write a lightweight sanity-check system that could argue with the main system about whether it was working.


> This seems to me to be a clearly incorrect (and self-contradictory) claim.

I guess that means I was able to pull the wool over the eyes of all five members of my thesis committee because none of them thought so.

> It entirely depends on your definition of failure.

Well, yeah, of course. So? There is some subset of states of the world that you label "failure". The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?


> The guarantee is not that the system never enters one of those states, the guarantee is that if the system enters one of those states it never (well, so extremely rarely that it's "never" for all practical purposes) fails to recognize that it has done so. Why do you find that so implausible?

Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not; I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".


> Stated so, that is plausible. However, "it is possible to engineer a system that never fails to detect that it has failed" is not

They seem the same to me. In fact, in 27 years you are the first person to voice this concern.

> I claim that any subset of states which is amenable to this level of detectability will exclude some other states that any normal person would also consider to be "failure".

Could be, but that would be a publishable result. So... publish it and then we can discuss.


What happens if all your sensors fail, including the one that senses the failure of your sensors? What happens if the power source disconnects?


Having all your sensors fail is actually a very easy case. Imagine if all of your sensors failed: suddenly you could not see, hear, feel, smell, or taste... do you think it would be hard to tell that something was wrong?


Having your sensors fail doesn't mean they're not providing data. It means they're not providing accurate data. In humans, we would call this hallucinating, and humans in fact cannot generally tell that they are hallucinating.


> Having your sensors fail doesn't mean they're not providing data.

That is one possible failure mode. But you're right, it's not the only one.

There is an extensive literature on how to detect and correct sensor errors resulting from all kinds of different failure modes.


Arguing that it's possible to build an autonomous system that is sometimes wrong but always knows when it's wrong seems like a stretch to me.


A system that can sense when it is in a failure mode is analogous to a “detector” in the Neyman-Pearson sense. A detector that says it is OK when it is actually in a failure condition is said to have a missed detection (MD).

OTOH, if you say you’ve failed when you haven’t, that’s a false alarm (FA).

In general, there is a trade-off between MD and FA. You can drive MD probability to near-zero, but typically at a cost in FA.

Again in general, you can’t claim that you can drive MD to zero (other than by also driving FA probability to one) without knowing more about the specifics of the problem. Here, that would be the sensors, etc.

In particular, for systems with noisy, continuous data inputs and continuous state spaces -- not even considering real-world messiness -- I would be surprised if you could drive MD probability to zero without a very high cost in FA probability.

As a humbling example, you cannot do this even for detection of whether a signal is present in Gaussian noise. (I.e., PD = 1 is only achievable with PFA = 1!) PD = 1 is a very high standard in any probabilistic system.

Discrete-input systems can behave differently.
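To put numbers on that trade-off, here's a quick sketch of the textbook case mentioned above: deciding from a single observation whether a known-amplitude signal is present in unit-variance Gaussian noise by thresholding (the amplitude and thresholds are arbitrary toy values).

    from math import erf, sqrt

    def Q(x):                 # Gaussian tail probability P(N(0,1) > x)
        return 0.5 * (1 - erf(x / sqrt(2)))

    mu = 2.0                  # signal amplitude, in noise standard deviations
    for tau in [3.0, 2.0, 1.0, 0.0, -2.0, -5.0]:
        pfa = Q(tau)          # false alarm: noise alone exceeds the threshold
        pd = Q(tau - mu)      # detection: signal plus noise exceeds the threshold
        print(f"threshold={tau:+.1f}  PFA={pfa:.4f}  PD={pd:.6f}")

    # PD only creeps toward 1 as the threshold drops and PFA heads toward 1;
    # with Gaussian tails you never reach PD = 1 exactly unless you accept everything.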


You've done a great job of describing the problem. It is manifestly possible to drive missed detection rates very close to zero without too many false alarms because humans are capable of driving safely.


Ah, yes... you have convinced me - though humans can apply generalized intelligence to the problem, which I imagine is particularly useful in lots of special cases.


General intelligence is not needed. Illiterate people can drive. People who can't do math can drive.


Illiteracy is not incompatible with intelligence.


So we can get within epsilon for an undefined delta because humans can do something, although not always.

Right, that whole claim about engineering a system that always knows when it’s not working sounds rock solid to me. After all, we can build a human, right?


> we can build a human, right?

Not yet. But there's no reason to believe we won't be able to eventually.


You can if you dial the specificity down to 0.

i.e., always report "I'm wrong."


Humans have moments like this too. I remember suddenly driving into dense fog on the freeway and not able to see more than 20 feet in front of me. I had a definite "failure mode" where I slowed down and freaked out because all my previous driving experience was "failing" me on how to avoid a potential accident.


> my armchair diagnosis

> my 1991 PH.D. thesis on autonomous driving

Going to hide in a corner and stay quiet on HN until I forget about this comment!


:-)

Just in case you're interested:

https://vtechworks.lib.vt.edu/handle/10919/38880

and the associated conference paper:

http://www.flownet.com/gat/papers/aaai92.pdf

Most of the work was done on a Mac II with 8MB (that's megabytes, not gigabytes) of RAM.

The progress that has been made since those days boggles my mind.


shameless plug! haha just kidding and WOW! Autonomous driving so early on, JPL, Google..

I'm going to go hide in the corner as well.


The most impressive thing in retrospect is that all that work was done on 32-bit processors with ~16 MHz clocks and ~8MB of RAM. And those machines cost $5000 each in 1991 dollars. Today I can buy a machine with 100 times faster clock and 1000 times as much RAM for about 1% of the price. That's seven orders of magnitude price-performance increase in <30 years. It's truly mind-boggling.


So due to this significantly cheaper availability of computing power, one would think that today we would be making enormous progress every day.

My phone today is more powerful than the most powerful desktop back in 1995. And what do I use my phone for? Checking email and reading HN.


> one would think that today we would be making enormous progress every day

Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

> And what do i use my phone for? check email and read HN.

What's wrong with that? Add wikipedia to that list and put a slightly different spin on it: today you can carry around the equivalent of an entire library in your pocket. That seems like progress to me. When I was a kid, you had to actually (imagine this) PHYSICALLY GO TO A LIBRARY in order to do any kind of research. If you didn't live close to a good library you were SOL. Today anyone can access more high-quality information than they can possibly consume from anywhere for a few hundred bucks. It's a revolution bigger than the invention of the printing press.

It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.


> Actually, we are. Back in the day, I would have given long odds against seeing autonomous cars on the road in my lifetime. Notwithstanding the odd mishap, today's autonomous vehicles actually work much better than I would have thought possible.

You are probably correct. In comparison with the past we are moving fast; it just feels like we could do so much more with what we have.

> It's true that society doesn't seem to be making very effective use of this new super power. But that doesn't make it any less of a super power.

This is the sad part. But on the flip side, this means there are so many opportunities for those who have the desire and drive to make things better.


I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish. The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

The same is happening with chatbots - more and more businesses think they can put a chatbot on their site and assume it'll handle everything when in fact it's meant to assist you rather than take over things for you.


> I think more people need to embrace and understand the fact that we are a long way from having fully autonomous vehicles that you can just give the steering reins to from start to finish

Apparently, we're not, as Waymo has shown.

> The term 'Autopilot' gives off the wrong idea that the driver can sit back and relax while the car brings you from point A to point B safely.

Agreed, the "but that's not how autopilots work in airplanes" canned response is irrelevant.


Yep, it took Google/Waymo 8 years to get there, but Musk and Uber just want to move fast and crash into things. Musk wouldn't have been selling self-driving since Oct '16 if it weren't "any day now", or would he?


> Apparently, we're not, as Waymo has shown.

The only things Waymo has shown so far are a bunch of marketing videos and a few tightly controlled press rides.


> Apparently, we're not, as Waymo has shown.

Does Waymo drive at high speeds? All the videos I have seen are at low speeds.


> Apparently, we're not, as Waymo has shown.

Can you point me to a website or store where I can buy my fully autonomous Waymo car? I doubt Waymo is even on par with Tesla, considering Tesla is actually selling cars. Waymo is just vaporware at this point.


> Can you point me to a website or store where I can buy my fully autonomous waymo car?

Not sure what you are suggesting; do self-driving cars only exist when you can buy one personally?

Let me know where I can buy an M1 Abrams; if you don't, I'll claim they don't exist!

But actually, you can already get a ride in a Waymo self-driving car, with nobody in the driver's seat.


> Not sure what you are suggesting; do self-driving cars only exist when you can buy one personally?

No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.


> No, but somebody should be able to buy. Waymo is just vapor right now. It's an experiment, not a product.

I personally know people who have the option to hail a self driving car in Phoenix, AZ, just as they would an Uber.

But I guess they are just my imagination, right?


Waymo has shown no such thing. In controlled, limited circumstances, yes, but in complex, poorly mapped, degraded conditions, not yet.


>This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

Although serious, this is working as designed. Level II self-driving doesn't have automation that makes guarantees about recognizing scenarios it cannot handle. At level III, the driver can safely close their eyes until the car sounds the alarm that it needs emergency help. Audi plans to release a level III car next year, so we'll see how liability for accidents actually shakes out.

Unfortunately, level II is probably the most dangerous automation, even with drivers who understand the limitations of the system. They still need constant vigilance to notice failures like this and react quickly enough to avoid collisions. Just imagine poorly marked or abrupt ramps or intersections that drivers hardly have enough time to react to when they're already driving. Add in the delay to notice the computer is steering you into a wall and to yank the wheel, and some of these accident-prone areas can turn into accident-likely areas.


> level II is probably the most dangerous automation

I'll go further and say that level II is worse than useless. It's "autonomy theatre" to borrow a phrase from the security world. It gives the appearance that the car is safely driving itself when in fact it is not, but this isn't evident until you crash.


I disagree. It’s not as though level II is a hard and fast definition - Tesla wants level 4 and claims the cars possess the hardware for that already. So you’d think their level II would still be smart enough to detect these problems to some degree. It’s not a fixed system but one they keep upgrading. I would expect this nominally level II system to have more smarts than a system that is designed never to exceed level II.


> Tesla wants level 4 and claims the cars possess the hardware for that already.

You can't possibly determine what hardware is required for Level 4 until you have proven a hardware/software combination, so that's just empty puffery, but even if it was true...

> So you’d think their level II would still be smart enough to detect these problems to some degree.

No, because the smartness of their system is about the software. They could have hardware sufficient to support Level 4 autonomy and better-than-human AI running software that only supports a less-than-adequate version of Level 2 autonomy. What their hardware could support (even if it was knowable) gives you no basis for belief about what their current software on it supports, except that it won't exceed the limits set by the hardware.


That's probably right, or very close to what is happening, but what I cannot understand is this: what about the damn black-and-yellow barrier board right in front of the car, closing in at 60 mph? Some sensor should have picked up on that.


This was my thought as well. Doesn't the system have the equivalent of reflexes that override the approved plan?

I've been skeptical of autonomous driving since it started to become a possibility. I spend a fair amount of time making corrections while driving that have nothing to do with what's happening in my immediate vicinity. If it can't handle an obstruction in the road, how will it handle sensing a collision down the road, or a deer that was sensed by the side of the road, or even just backing away from an aggressive driver in a large grouping of vehicles in front of you? I've had to slow down and/or move off on to the shoulder on two lane country roads because someone mistimed it when passing a vehicle. I don't have much faith in how this system would handle that. Not to mention handling an actual emergency failure like a tire blowing out.

I'm sure they will get there eventually, but it looks like they have conquered all the low-hanging fruit and somehow think that's enough. I'm now officially adding "staying away from vehicles studded with cameras and sensors" to my list of driving rules.


What if someone drives down the road with a lane-marking painter? I have witnessed that two or three times in my life; it used to just be funny, but now it can be deadly.


The car's will for survival should outweigh the lane-marker-following algorithm. Maintaining distance and trajectory relative to things with mass is more important than coloring within the lines.


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

That sounds highly dubious. Here's a hypothetical scenario: there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down and steer to the left. This will help you avoid a deadly collision as the person stumbles into the road.

Now let's take a self-driving car in the same scenario, where, since it doesn't have general intelligence, it fails to distinguish the drunk person from a normal pedestrian and keeps going at the same speed and distance from the sidewalk as normal. How, in this scenario, does the vehicle 100% know that it has failed (like you say is always possible)?


An even more extreme example: suppose someone on the sidewalk suddenly whips out a bazooka and shoots it at you. Does your failure to anticipate this contingency count as a failure?

"Failure" must be defined with respect to a particular model. If you're driving in the United States, you're probably not worried about bazookas, and being hit by one is not a failure, it's just shit happening, which it sometimes does. (By way of contrast, if you're driving in Kabul then you may very well be concerned with bazookas.) Whether or not you want to worry about drunk pedestrians and avoid them at all possible costs is a design decision. But if you really want to, you can (at the possible cost of having to drive very, very slowly).

But no reasonable person could deny that avoiding collisions with stationary obstacles is a requirement for any reasonable autonomous driving system.


Way to dodge the question. And how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time?

Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case. People do this all the time when driving -- be it a drunk guy on the sidewalk, a small kid a tad too unstable riding a bike by the roadside, kids playing catch next to the road and not paying attention, etc. Understanding these situations is crucial in self-driving if we want to beat the 1 fatality per 100M miles that we have with human drivers. For such scenarios, please explain how the AI can always know when it has failed to anticipate a problem that a normal human driver can.


> how did we get from always knowing when you've failed to "just drive very, very slowly" when dealing with situations that human drivers deal with all the time

You raised this scenario:

> there's a very drunk person on the sidewalk. As a human driver, you know he might act unexpectedly so you slow down...

I was just responding to that.

> Let's not pretend that anticipating potentially dangerous behaviour from subtle clues is some once-in-a-lifetime corner case.

I never said it was. All I said was that "failure must be defined with respect to some model." If you really want to anticipate every contingency then you have to take into account some very unlikely possibilities, like bazookas or (to choose a slightly more plausible example) having someone hiding behind the parked car that you are driving past and jumping out just at the wrong moment.

The kind of "failure" that I'm talking about is not a failure to anticipate all possible contingencies, but a failure to act correctly given your design goals and the information you have at your disposal. Hitting someone who jumps out at you from behind a parked car, or failing to avoid a bazooka attack, may or may not be a failure depending on your design criteria. But the situation in the OP video was not a corner case. Steering into a static barrier at freeway speeds is just very clearly not the right answer under any reasonable design criteria for an autonomous vehicle.

My claim is simply that given a set of design criteria, you cannot in general build a system that never fails according to those criteria, but you can build a system that, if it fails, knows that it has failed. I further claim that this is useful because you can then put a layer on top of this failure-detection mechanism that can recover from point failures, and so increase the overall system reliability. If you really want to know the details, go read the thesis or the paper.

These are not particularly deep or revolutionary claims. If you think they are, then you haven't understood them. These are really just codifications of some engineering common-sense. Back in 1991, applying this common sense to autonomous robots was new. In 2018 it should be standard practice, but apparently it's not.


You don't even need to go to the level of a drunk person. Imagine driving down a suburban street and a small child darts out onto the road chasing after a ball.


Some local knowledge: the blocked-off lane to the left side is the same lane used when the express lanes are in their opposite configuration, in the morning. In the morning, with the express lanes inbound towards the city, that lane is a left-hand exit from the main flow of Interstate 5 onto the dedicated express lanes. There should be a sufficient amount of GPS data gathered from hundreds of other Teslas, and camera data, showing the same lane from the opposite perspective.


> This is actually the most serious aspect of all of these crashes: the system does not seem to be aware when it is getting things wrong.

That’s exactly my experience as a driver:

You learn to anticipate that the ‘autopilot’ will disengage or otherwise fail. I have been good enough at this, obviously, but it is sometimes frightening how close you get to a dangerous situation …


>Here is my armchair diagnosis:

Odd how your "armchair diagnosis" matches perfectly with the top ranked comment in the reddit thread that was posted 1 day ago.

courtlandre (Owner) | 815 points | 1 day ago

It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


The comment above is saying the split looks like a diamond lane marker indicating an HOV/electric car lane, so the car is centering itself on the diamond. That's a completely different idea from the Reddit comment about the car centering itself between the right and left white lines. My guess is that both inputs contributed to the car "thinking" it was following a lane.


The conclusions are the exact same. The autopilot was trying to center the car between lines.

>My guess is that the autopilot mistook these lines for the diamond lane marker and steered towards them thinking it was centering itself in the lane.

It sees the white line on the left, the white line on the right and thinks its a big lane. Its trying to center the car in the lane.


Is it really odd that two people could reach the same conclusion when presented with the same video?


Of course not. But it is odd when the top rated comment directly below the video is essentially the armchair theory that was proposed.


Again, how is it odd that everyone agrees with the obvious explanation?


> It's not possible to engineer an autonomous system that never fails, but it is possible to engineer one in such a way that it never fails to detect that it has failed.

Wouldn't the system doing the checking be considered an autonomous system...that could also fail?


No, because the monitoring system doesn't interact with an environment, so its job is much simpler. This is actually the reason that cognizant failure works.


Thank you for the concise answer. I was just about to ask if you have a blog, but found your website in your HN profile. I know what I am doing for the rest of the night :P PS: Funny that we've both written a double ratchet implementation in JS :)


I don't mean to call the OP a liar, but does anyone know if there is any evidence whatsoever that this video was taken in a Tesla with autopilot engaged? Just want to be sure about it before I form a judgment against Tesla.


More basic lane-assist systems in other cars already solve this problem. 100% Tesla's fault. My car will disable lane assist the moment the lane gets too wide.


Sounds like an easy fix then. After that we move on to the next edge case and so on ad infinitum.


If you are aware when you are wrong... you would not be wrong.

If you are referring to assigning confidences/probabilities to decisions, this is standard in ML.


> If you are referring to assigning confidences/probabilities to decisions,

There's a little more to it than that but yeah, pretty much.

> this is standard in ML.

Yes, I know. But not, apparently, standard in embedded autonomous systems.


I don’t know you, sir, but Tesla should hire you!


Estimating errors or confidence level reliably is still one of the biggest unsolved problems in AI.


In other words an old ambiguous parsing problem.

It might be fixable in software. I'm a bit annoyed at Tesla for over-reliance on painted lines. They fade, they can be covered, they can be outdated...

That old fail video of an SDV stuck inside a circle is not funny anymore.


> SDV stuck inside a circle

Off topic: I never understood why this video was discussed that much, especially in order to blame SDVs. It's an example of pointless road markings and a perfectly behaving vehicle. It's like driving into a one way street that turns out to have no exit. The driver can't be blamed.


I know, and partly agree. Still, when you have something that hints at leaping into the future, simplistic automaton behaviour like this brings it all back down to the ground.

We'd expect at least 'cycle' detection ;)


This comment from OP (beastpilot) is pretty frightening:

> Yep, works for 6 months with zero issues. Then one Friday night you get an update. Everything works that weekend, and on your way to work on Monday. Then, 18 minutes into your commute home, it drives straight at a barrier at 60 MPH.

> It's important to remember that it's not like you got the update 5 minutes before this happened. Even worse, you may not know you got an update if you are in a multi-driver household and the other driver installed the update.

Very glad the driver had 100% of their attention on the road at that moment.


Elsewhere from the comments: "Yes the lane width detection and centering was introduced in 10.4. Our X does this now as well and while it's a welcome introduction in most cases, every once in a while, in this instance or when HOV lanes split, it is unwelcome." So basically, if I'm understanding this right, they introduced a new feature that positions your car a little more consistently in the lane, at the small cost of sometimes steering you at full speed head-on into a barrier.

Remember Tesla's first press release on the crash and how it mentioned "Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times"? I imagine the number that have driven it successfully in that lane since that update was rolled out sometime in mid-March is rather smaller...


And that's exactly what makes Tesla PR so disingenuous, they know better than anybody what updates they did and how often their updated software passed that point and yet they will happily cite numbers they know to be wrong.

So, now regarding that previous crash: did that driver (or should I say occupant) get lulled into a false sense of security because he'd been through there many times in the past and it worked until that update happened and then it suddenly didn't?


I'm a huge fan of Tesla and I thought that bit of PR was horrifyingly bad. It came across as completely dismissive of any further investigation or accepting of even the possibility of responsibility.

Now that these other videos are showing up, and further details (the update) are emerging, that PR should bite them in the ass hard enough that they decide never to handle an incident that way again.

I don't want this to kill Tesla -- I sincerely hope they make their production goals and become a major automobile manufacturer -- but their handling of this should hurt them.

I'm also curious if any of the people at Tesla saying, "we call it autopilot even though we expect the human driver to be 100% attentive at all times" have studied any of the history of dead man's vigilance devices.


I've got to take issue with you there. Tesla Engineering knows better than anybody what updates they did and how often they update etc.

Tesla PR knows nothing about what updates the engineering team did. At least some people in Tesla PR probably don't even know the cars update their software regularly.

It's bad practice for them to speak out of turn, but I can absolutely see the PR team not having a good grasp of what really indicates safety; their job is just to put out the best numbers possible.


I'm sorry but Tesla PR == Tesla. If they don't have a good grasp on this they should STFU until they do. That would make Tesla even worse in my book.

Their job is not to put out the best numbers possible, their job is to inform. Most likely they were more worried about the effect of their statement on their stock price than they were worried about public safety.

If they do put out numbers (such as 85K trips past that point) then it is clear that they have access to engineering data with a very high resolution, it would be very bad if they only used that access to search for numbers that bolster their image.


Well, I had kind of come to hope that Tesla PR was less BS than other companies' PR. But that memo basically shows they're no different from all the rest, just twisting the truth to avoid negative stories until the fuss dies down.


The entire point of a PR department is to gather that information before releasing it. Yes, they wrangle it to put the company in the best light, but that doesn't mean they should be operating in ignorance. They have access to Tesla employees that outside reporters do not (or at least, should not) have.


> The entire point of a PR department is to gather that information before releasing it.

No the entire point of a PR department is to propagandize the public on behalf of the corporation.


It seems like the collective pack of Teslas would feed telemetry back to the mothership as unit tests for any new versions. In other words, any change has to redrive (simulate) a big sample of what has already been driven, especially where the human had to correct.
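A rough sketch of what that replay gate could look like, with an invented log format and pass criterion just to illustrate the idea:

    # Hypothetical regression harness: replay recorded drives against a candidate
    # build and flag frames where it steers away from what the human actually did.

    def regression_failures(logged_drives, candidate_model, tolerance_deg=2.0):
        """logged_drives: iterable of (sensor_frame, human_steering_deg, human_corrected)
        tuples harvested from the fleet. candidate_model: callable frame -> steering_deg."""
        failures = []
        for frame, human_deg, human_corrected in logged_drives:
            proposed_deg = candidate_model(frame)
            diverges = abs(proposed_deg - human_deg) > tolerance_deg
            # Disengagement/correction events are exactly the frames a new build
            # must not regress on, so treat divergence there as a hard failure.
            if diverges and human_corrected:
                failures.append((frame, human_deg, proposed_deg))
        return failures

    # Release gate (sketch): ship the update only if regression_failures(...) is empty.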


Life and death patch notes. I think I'm going to wait a while before using this feature.


> Very glad the driver had 100% of their attention on the road at that moment.

Yes, and 100% is critical. That required pretty quick and smooth reactions. The car started to follow the right-hand lane, then suddenly re-centered on the barrier. The driver had about a second to see that and correct. That's just the sort of situation where even a slightly inattentive driver might panic and overcorrect, which could cause a spinout if the roads were wet or icy or cause a crash with parallel traffic.


So you're basically paying (a lot of) cash to beta test their software for them... at your own (very personal) risk!


And endanger others who have decided to opt-out of the beta test. Keep in mind that in any accident all bets are off with respect to the traffic around you, even if you ram into a stationary barrier you could easily rebound into other traffic.


Also, if something goes wrong it's your fault anyways because you're supposed to keep 100% of your attention on the road.


That's the best kind of small print. The kind that disclaims all liability no matter what. I really don't get why we accept this stuff. It's akin to driving without a license. After all, as long as a Tesla 'autopilot' can't take a driving test it has no business controlling cars.


Not to mention the fact that, if their software screws up and kills you, they'll rush to Twitter in an attempt to smear you using personal data and deceptive statements.


I don't know why people use this thing. The cars are nice, and justify their purchase on their own, without this autopilot feature. But I don't think you can do good autopilot without lidar.

People shouldn't use it.


I really don't think the cars are that nice... this is an aside from the main issue here, but the build quality is poor and the design is bland - both internally and externally (that massive iPad-like screen - yuck!).


The big YouTuber who rebuilds salvage Teslas made the point that almost every Tesla-owning YouTuber has a video showing their car getting hauled away on a tow truck. One of the possible reasons that Tesla's service dept. doesn't provide maintenance history on salvage cars is that the amount of service would reflect poorly. It doesn't seem uncommon for a Model S with 80,000 miles to have had several door handle replacements, at least one whole drive unit replacement, a battery contactor replacement, and an infotainment system replacement, in addition to all the recall/redesigned components that are replaced before failure. I still think Teslas are quite nice, but a bullet-proof Honda they are not.


Not just your own risk... There are other people on the road who share that risk. It makes me angry at Elon Musk himself. First he didn't sterilize that stupid thing he sent into interplanetary space. Now he's putting people's lives at risk.


Have to disagree with you there. It is in a fairly normal solar orbit. There are a shitload of dead third stages and satellites in solar orbit. They were not sterilized. National space agencies have been launching things to Earth escape velocity without sterile procedures for 40 years. If it were designed to land intact on Mars, that would be another story.


Well there's plenty of unsterilized stuff in space. The early space missions just dumped the bodily waste of the astronauts overboard.


Yeah, but it's coming back to Earth, where it came from. This is an object that could land on Mars or another planet.

Great measures have been taken in the past to ensure that other planets aren't contaminated before we've had a chance to understand their existing biology. Elon Musk is the kid who comes and knocks over some other kids' block tower for his own amusement.

https://en.wikipedia.org/wiki/Planetary_protection


> First he didn't sterilize that stupid thing

Huh? Isn't it better that way? We might spread [simple] life to other planets. I'm completely serious; I don't understand what the concern is.


There may already be life on other planets. Whether or not it's simple is a short-sighted human judgement.

Look at the big picture. We risk denying these planets the ability to evolve in isolation. That is a decision that cannot be reversed. Do we really want to do that? Maybe so, but it ought to be a conversation. Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms. Then, this belligerent guy comes along and disregards all that.


> Whether or not it's simple is a short-sighted human judgement.

You realize I was talking about your "contamination" (e.g. bacterial organisms, microscopic lifeforms, etc)?

> We risk denying these planets the ability to evolve in isolation.

Seems like a fairly limited risk. It is more likely these planets have no form of life at all and that we'd seed their only life (if it could survive there).

> Great measures have been taken to reduce the risk of contaminating other planets with Earth based lifeforms.

You're conflating craft that were designed to land on other planets and look for life on them with a spacecraft that was not.


> The driver had about a second to see that and correct.

And if there had been a crash, Tesla probably would have said that the driver had an unobstructed view of the barrier for several seconds before impact (conveniently omitting at what point AP started to swerve).


And don't forget pointing out that the driver forgot to use their blinker a week before the crash so clearly they just aren't a responsible driver.


From the OP's comments on Reddit - he said that it happens about 90% of the time on that stretch and that he has more footage of the same exact incident.

He was likely prepared for it, which kinda makes it even scarier in a way. An inattentive driver would have totally botched this.


After working in large codebases I discovered that there are all sorts of assumptions in the code, and that a small change in one function can break an assumption relied on a long way elsewhere. I came to the conclusion that any change basically invalidates all your tests. An update could introduce any possible problem that wasn't there before, so now all your experience with the system as a driver is effectively reset to zero.


This is just about what I expected it to look like when software engineering safety standards are let loose on the physical world.


And this is why if I ever get a self driving car I want:

1) root

2) control over updates

3) everything to be completely open


You do not want root on your self driving car.

Or to put it differently, I don't want to be driving on the same road as you and your rooted self-driving car. You might be a great sysadmin/coder, and the Tesla guys may be too, but both of you changing random stuff without any communication with each other... I've seen enough servers.


I am trying to imagine the typical Arch Linux enthusiast modifying the lane detection software on their car, and I shudder at the thought.


This is a problem that will very quickly fix itself, as morbid as it sounds...


It's one thing to remove the warning stickers from a gas-fired oven; it's another when you're commanding several thousand pounds of unintentional killing machine.

Keep your rooting to your phones.


All I want in a modern car is to have nothing smart in it that I'd _need_ (or, want) to root...


Everything to be completely open is really important, but I don't want you to have root or control over update if you are using your car on a public road.


Only certified (by the government) updates should be allowed on self driving cars. It should be at least a misdemeanor to have your own code on the car.


Do these things really matter unless you are going to verify the software yourself? (and if you are, kudos to you!)

While, in principle, users could organize their own communal verification programs for open software, that does not happen in practice, even when the software is widely used and needs to be safe (or at least secure - OpenSSL...)


I just don't understand the willingness people have to put their lives in the hands of unproven technology like this. I mean, I don't even trust regular old cruise control in my cars and I keep my finger on the "cancel" button whenever I use it during long highway drives.


This kind of video makes me wonder: what is the "autopilot" feature actually for? Do people generally like it? (I don't own a Tesla and I've never driven one.)

If you're supposed to keep your hands on the wheel, and given videos like this that show you really need to keep your hands on the wheel and pay attention, is automatic steering really that big of a deal?

Cruise control, now, that really is useful because it automates a trivial chore (maintaining a steady speed) and will do it well enough to improve your gas mileage. The main failure condition is "car keeps driving at full speed towards an obstacle" but an automatic emergency brake feature (again, reasonably straightforward, and standard in many new cars) can mitigate that pretty well.

It seems to me that autopilot falls into an uncanny valley, where it's not simple enough to be a reliable automation or a useful safety improvement, but it also isn't smart enough to reduce the need for an alert human driver. So what's the point?

If you're excited about self-driving cars because they'll reduce accidents, as many people here claim, what you should really be excited about is the more mundane incremental improvements like pedestrian airbags. Those will actually save lives right now.


To directly answer your question: people buy autopilot so they can check their phone while commuting down the freeway to work. I was talking to a group of people about the difficulties of commuting in the Santa Clara Valley, and one guy shrugged and said, "just get a Tesla, move to the left lane, and manage email until you get to work".


Yikes! I really hope that's a joke...


What else would people be using it for? It's a "drive my car for me while I send this text" mode that allows you to take your attention off the road every few seconds. Otherwise, the system makes no sense at all.


Once you're engrossed in your phone, you're not taking attention off the road for a 'few seconds'. You're essentially letting autopilot drive for you. Which Tesla explicitly says you're not supposed to do.


Agreed.


I see people attempting that on the roads around here without the Tesla. That is why I always honk at cell phone drivers. Sometimes I wish I had a fully-charged Marx bank in the trunk.


It should be illegal. It is in many countries.


It is in New York.


Proponents keep touting this as the real benefit of self-driving cars, so it's hardly surprising that some people have taken them at their word and not realised that the car is not, in fact, truly autonomous and may drop out at any moment.


or not drop out and drive you into a barrier


It's not a joke. I was in a bus crossing the Bay Bridge a few weeks ago. I was watching a Tesla driver play on his phone. He didn't look up once in the span where I had visibility (about half of the bridge).


I have seen pictures of iPads strapped to steering wheel in Nissan Altimas, so yes people are really this dumb. Airbag issues entirely aside.


Oh god. Imagine the last thing you see before dying is your candy crush score propelled at your face at 200 mph.


Years ago I saw someone reading a book while driving...


Agreed. I really hate that Tesla named the feature "autopilot", as it gives the impression of being fully automatic (yes, I realize boat and airplane "autopilots" aren't fully automatic - that nuance is likely lost on most consumers).

My VW has Adaptive Cruise Control (ACC) which does the normal cruise control, plus basic distance keeping (with an alarm if the closing speed changes dangerously).

My parents' Subarus have ACC plus lane-keeping. The car will only do so much correction, plus alarms.

These seem like much better solutions, given the current state of driving "AI".


Yeah, I have a recent Subaru with those features. It beeps when I'm backing out of the garage and there's someone driving or biking up the road before they even come into my sight lines.

It can help keep me in a lane, either by beeping or nudging the steering wheel if I drift - I only turn on active lane assist on long highway stretches, it is more of a security blanket than anything else - just in case I space out for a sec, here's another layer of defense.

Adaptive cruise is also great. Between the two, in long road trips I can put the car in a travel lane and just go. I still have to attend to my surroundings, but it is a lot easier to focus on that when I'm not worried about accidentally creeping up to 90 mph because suddenly there's nobody in front of me.

I also had the auto brake feature activate once when a car in front of me stopped unexpectedly. I was in the middle of braking, but the brake pedal depressed further and there was a loud alarm beep.

None of these are autopilot, and honestly I wouldn't want autopilot until it is legit reliable. Instead, these are defense in depth features. The computer helps prevent certain mistakes as I make them, but never is in primary control of the vehicle.


Subaru's system is simply fantastic and you can tell they thought a lot about the "human condition" (human weaknesses) while designing it.

Lane keep assist is very subtle; I describe it like wind blowing on the side of the car. You can trivially over-power it, and it will beep if you exit the lane (without the blinker on) whether or not lane keep assist is active.

The whole package of safety features is wonderful and impressive for something starting at around $22K.


I drive a Nissan myself, but from the sounds of it it's about the same. LDW (just warning in my case), FEB + pedestrian/moving object detection, blind spot detection, intelligent/adaptive cruise control. (There's also a lane keeping system option called ProPilot, but I skipped out on that - I don't think I'd trust it in the regular rain anyway)

Anyway, my point is these things are common now, and while getting the full package might cost a premium, the more important parts (like FEB) are certainly becoming fairly standard.


From what I understand, Subaru develops their own. They are the only ones where there are two cameras in the front that both see in color.

We have a 2017 Subaru with all of these and it's really excellent.


I have ProPilot. It's pretty great. In heavy rain it shuts itself off and tells you that it is unavailable in bad weather.


That's great news. ProPilot sounds ideal for dealing with daily stop and go commute without the cost and risk of Tesla


I think Tesla is losing out here by calling what is little more than adaptive cruise control plus lanekeeping "Autopilot." Adaptive cruise control + lanekeeping + collision avoidance is available on a ton of cars now, even economy cars, and it works great. I regularly make a ~200-mile drive on the Honda system and it handles it 100% of the way. But when you sell it as Autopilot instead of driver assist, people have much higher expectations and blame your car rather than the driver - though there very well could be serious problems with the Tesla implementation.

Lanekeeping is actually very nice. The system in Teslas and most (all?) others do require you to keep your hands on the wheel and pay attention, but having to constantly manually steer the car is much more fatiguing than you would think. It's really annoying to drive cars without lanekeeping now.


My concern is, if you have both cruise control and lanekeeping, what's left to keep your attention and stop you zoning out?

I don't find steering onerous, but it requires just enough attention to keep you alert.

(I've never used a lanekeeping system, though. Maybe I'd like it, I dunno)


I use lanekeeping on the freeway, usually with adaptive cruise control as well (subaru). I focus more on the Big Picture of all the cars around me, can see further and have a better sense of who is doing what. That's anecdotal, of course, and could well be placebo effect, but I still pay a lot of attention to the road.

Plus, lanekeeping can be really annoying if you rely on it all the time. The car sometimes tends to drift back and forth in a lane a little between corrections, instead of just going straight. So: you're still steering. It's just that you're steering less and have a defense in depth against loss of focus, and can drive without exhausting yourself.


I rented a Model S for a weekend and drove about 1,000 kilometers in it.

Naturally, I tried the autopilot feature on a highway, but I wasn't too impressed. There was a major "bump" (negative G-forces) and the car tried to swerve into the other lane (this was within the first 40 minutes of my drive), and that made me distrust the auto steering feature.

As I think you're getting at, I DO feel that autopilot is the future, but we're not there yet - let's improve the existing life-saving features (and not disconnect them; looking at you, Uber).


Personally I hate cruise control. I prefer to be actively engaged when driving. Either I am driving 100% or I am not at all; there is absolutely no middle ground, in my opinion.


I can understand that. I like cruise control because it lets me focus 100% on steering and watching the road around me.

Trying to maintain a steady speed and conserve gas is a fun challenge, but a bit pointless, because a) it's a distraction from more important tasks, and b) the computer can usually do it better than I can.


It's also more comfortable for your passengers.

I did a 6-hour trip with a friend who continuously put his foot up and down on the throttle, and it was the most gruelling car trip I've ever had, I think.

When I asked he said it was "to keep control on the car". I'm a patient friend.


You're a better friend than I am. That truly is torture, and is the reason I volunteer to drive on ALL trips. I don't care how long. People who can't maintain proper speed without either flooring it, or taking their foot off the pedal, shouldn't be driving.


>People who can't maintain proper speed without either flooring it, or taking their foot off the pedal, shouldn't be driving.

Not entirely true. I've been to German driving school and my father really pushed the concept of smooth throttle control on me as a beginning driver. Be smooth always. However, as I was able to afford higher and higher performance vehicles my take on the smooth maintaining of speed without noticeable accelerator input has changed. While I'll drive my SUV very smoothly, when I drive my manual six speed German roadster, my style is entirely different. Because of the weight, size, and HP, it's really not possible to drive it smoothly outside of tossing it into 6th which isn't terribly "fun". In the gears where it's "fun", it's a very much "on" or "off" throttle experience simply because of the HP produced by the engine.


I drive a manual transmission BMW. The shifting, and speed control can be as smooth as glass. It's the driver not the machine. Take consideration for your passenger, and everyone else on the road.


Yes, but it's not a 577 HP track-tuned roadster. I can drive my 320 HP manual sedan quite smoothly as well. There are some cars where:

a) you don't want to drive them smoothly because that's no fun - even just for the exhaust note; and b) it's actually difficult to drive them smoothly because of the torque/HP.

It doesn't mean you are a bad driver, it simply means you've adjusted your driving style to match the car you are driving.


I DOUBT OP was taking a six hour road trip with his friend in a 577 HP track car. Which is the point here. It's awesome that you have a race car, but comparing that to the rest of our daily drivers makes no sense.


If I didn't know better, I'd think you were describing my wife.

In her defense, she learned to drive in a very different environment (dense Chicago surface street traffic).


I've got a friend that drives like that too. Sooooo annoying. Speed up, slow down, speed up, slow down. On and on. Ugh.


Cruise control on my car actually gains me about 1 mpg. This is specifically because the car "knows" that the speed won't be changing much, so it locks the torque converter completely and drops the engine RPM by 100~200. Much of that is a direct result of the mediocre Ford implementation of the 2005 Five Hundred CVT and the fact that it predates adaptive cruise control, but the gain has a physical cause and follows directly from assumptions the code can only make in the state I put it in.


We've actually moved quite a bit away from "100%" direct control.

Anti-lock brakes were the first such system. Before, you had to pump the brakes in an emergency, a practice that was difficult to execute even without the shock of an impending accident. That system alone certainly saves thousands of lives every year.


Ahem. That is purely semantics -- ABS makes the car perform more like what you are conditioned to expect (under normal driving conditions). That's the whole success of ABS; it saves lives precisely because it conveys the illusion of you still being 100% under direct control when in fact you would be careening out of control otherwise.


One of the things I've heard mentioned by professional driving instructors, without a good citation, is that many more accidents could be avoided if people trusted ABS more. Specifically, when you're braking hard/panic braking and the ABS kicks in, the shock of the rapid firing against your foot causes you to back off pressure on the brake pedal. I was specifically instructed to stand on it anyway to see what the car behaved like when truly panic stopping and maintaining pressure, but I definitely had an instinctive reaction to let off of the brake when the ABS kicked in. If you're ever in that situation, press harder, don't let up!


That's why cars now have Brake Assist (BA), which will brake harder than you're actually requesting with the pedal if you brake suddenly.

https://en.wikipedia.org/wiki/Emergency_brake_assist


Yes, my driving instructor (aka my father, employing the paternal prerogative of never citing anything ever) used to say the same thing.


The average driver will probably never actually experience ABS. It's a good failsafe that is rarely employed... if it is, you definitely realize it.


If you live in the US Midwest or New England area, the likelihood of never having experienced ABS kicking in is close to 0%. Braking on ice is not the most fun experience.


I live in the Midwest. I have yet to experience ABS, between the three vehicles I've owned and the 400-500k combined miles.


I have ABS kick in around once or twice a season. I mean, sure, if you live somewhere it doesn't snow you might never experience ABS, but if you live in a climate where it snows you definitely will.

That being said, it's always been while stopping for a light, not in a case when I had to swerve, so its never prevented a crash for me, but it really is comforting to have it.


Around Houston I was surprised to see many warnings about ice on the roads. Apparently even in Texas, on a cold night, a road - especially a bridge - can freeze, resulting in black ice.


It's important to practice once in a while. Find a clear road where you're not putting anyone else at risk and get used to the feeling, so you'll not release the brakes in an emergency.


Cruise control is great where there's long stretches of open road, light traffic (LOS A or B [1]), and little need for frequent braking.

The star use case is to set cruise to be very near the speed limit, such that after acceleration events like overtaking, you coast back to highway speed.

It's a low-effort way of ensuring that one will be compliant with speed laws most of the time, yet maintaining a steady pace. I too prefer to be 'actively engaged' while driving, but in my opinion the reduction of constant acceleration input is a welcome convenience.

'Adaptive' cruise control, on the other hand, feels to me like riding on a tenuous rollercoaster. It's intended to let cruise control be usable in packed traffic, but it requires one to cede a lot of trust and control to the machine in ways that physically make me uneasy -- and it doesn't help that the exact behavior differs between models and manufacturers, so that trust doesn't automatically transplant into a different car.

Part of the problem is, again, with terminology. Ever since Adaptive Cruise Control proliferated as a term, it drew a parallel to classic 'Cruise Control', which I think is a mistake. Classic Cruise Control is a fire-and-forget, non-safety feature that's simple to reason about: do I want the car to gun it at a constant 70 mph, or no? You can run a quick mental judgement call and decide whether to engage it or leave it off.

'Adaptive' cruise control is fundamentally about maintaining following distance, i.e. tailgating restriction. It's a safety feature. It's a button to "proceed forward not exceeding target speed", but if it gets disengaged for any reason then you can easily overrun into the car ahead. It's a safety feature with the UI/UX of a non-safety feature, so it's always opt-in (!) -- which is simply horrific.

All safety features in vehicles should be either always-on, or opt-out, and NEVER opt-in. On a modern car, tailgate restriction should be on by default, with a button unlocking the car into free-throttle mode. Braking -- alone -- should never disable a safety feature.

[1] https://en.wikipedia.org/wiki/Level_of_service


ACC/ICC is not a safety feature in the general sense. It might be termed as a safety enhancement to cruise control, in which case it should be opt-out when entering cruise control (which it is, in my case) and which disengages when exiting cruise control (which is done by braking, and that is well known).

On ceding trust - at least with the ICC system in Nissans, it's A) far back, which gives more reaction time B) quite easy to tell when it sees the car in front vs when it doesn't. You're ceding trust, sure, but you can also verify easily.

Your 'tailgate restriction' bit is effectively a more aggressive form of collision warning/forward emergency braking, and FCW+FEB as far as I know is available on all or at least most vehicles with ACC/ICC. Unfortunately, the realities of city driving mean that 'maintaining distance' is a goal in some cases (i.e. just got cut off, tight merges, etc.) rather than an absolute directive - frankly, something trying to force me to a certain distance away from the car in front of me would be more aggravating than useful.


> if it gets disengaged for any reason then you can easily overrun into the car ahead

Are there cars that have ACC and don't have AEB (Automatic Emergency Braking)?


Furthermore, if an ACC system is getting disengaged, you're going to coast to a stop. Not accelerate straight into the car in front of you.


I like it when driving through areas with strictly enforced speed limits. Otherwise I'd collect way too many tickets.


I've been wondering exactly the same thing over the last week, once all this came to light. It seems like the worst of both worlds. I would personally feel safer knowing whether I'm supposed to drive or not. This halfway system means the better the autopilot is, the more reliant I would naturally become, and then I'm not ready to react or in the frame of mind where I can. How airline pilots stop themselves becoming overly reliant on their autopilots is also a concern. It's that classic situation where two people each think the other will take care of it, then no one does. "I thought you were going to do it."


I don't own a car so I drive very rarely (once a month or less). I recently went on a week-long roadtrip in a rental Subaru which had their EyeSight system which does adaptive cruise (follows the speed of the car in front) + lane keeping. I'd say at the end of the day I was noticeably less exhausted than I usually am after a day of driving.

The Subaru system will not let you let go of the wheel for more than 15 seconds (after that it will instantly disengage), so it's more to save your effort of continual minor corrections. It also disengages as soon as it's confused in the slightest (faded lines, lines at an angle etc)


I'm looking forward to trying adaptive cruise control on the new Subaru my dad got recently. I pretty much stopped using regular cruise control years ago because the roads where I mostly drive are busy enough that I find it encourages driving in a way that prioritizes maintaining a constant speed even when that's different from the flow of the traffic.


So human drivers are terrible at driving for any long period of time, especially on relatively consistent "boring" highways.

Guess what computers excel at? Driving consistently on consistent highways.

The Tesla Autopilot is supposed to be the always aware and paying attention portion on these cases where a human driver would be very likely to start texting or dozing off. Now, it's not fully autonomous and may well decide it can't handle a situation (or apparently try to drive you into a barrier to see if you're still awake...). In this case the human driver who is somewhat zoned out needs to take control instantly and correct the situation, until they can safely re-engage autopilot (or pull over and make sure they're still alive, etc).


  Guess what computers excel at? Driving consistently
  on consistent highways.
Watching the video, would you say the computer was excelling, or that the road was radically unusual? I wouldn't.


The location is the end of some tidal flow lanes rejoining the main freeway lanes with an unusual configuration of lane markings and barriers. I drive past the spot all the time and can totally see a computer messing up here.


In other words, "works great if it works, might actively try to kill you at random, there's no clear way of telling when it might try the latter, it's your fault in any case." Yeah, that's very much the safe driving paradise I'm being promised "in two years" for a decade now, by the technooptimists. It's almost as if the technology is at 50%-ready instead of the 90% that marketing types seem to peddle, who would have thunk?


pretty much


The computer decided that staying in the middle of a wide lane was more important than avoiding a large obstacle where there was sufficient room on either side. That's bog-standard target-fixation fallacy. Is that excelling?


It might not have been radically unusual, but that doesn't make it consistent either... (looked like lanes were closed to me)


> "Guess what computers excel at? Driving consistently on consistent highways."

Possibly on paper. In reality, as of right now, computers are clearly far from excelling at this specific task.


Assuming perfectly spherical cars on perfectly consistent highways in a vacuum.


OK, but there are clearly cases where the autopilot gets into trouble, and the human needs to take control again quickly. The human driver absolutely cannot doze off.

Now, I realise people in old-fashioned non-autopilot cars can and do doze off, and that's very dangerous. But it's not clear to me how the autopilot improves that situation. Relying on the autopilot actually encourages you to doze off.

We already have simple remedies like "pull over if you feel tired" and "never ever pick up your phone while driving (or you'll lose your licence)"


Expecting "drivers" to be able to instantly switch from "somewhat zoned out" to having the situational awareness to resolve the problem that AP fails at is unrealistic.


There are two very real issues/threats here.

The first is that people are being put in harm's way, either by a false sense of trust invoked by the name or by the mixed messages from Tesla.

The second is that if the first is left unchecked, Tesla could single-handedly set back autonomous driving for everyone by souring public and government opinion.

It needs a new name that aligns better with what it can do. It could be a safety system which gently corrects the driver and takes over in an obvious emergency, internal or external. As it stands now, it is just dangerous.


The problem is autopilot actively engages you even less than highway driving. I argue this exacerbates some of the problems that cause humans to checkout.

I don't know what the answer is but it feels like GM's super cruise does a more adequate job of acknowledging the realistic limits of the technology and explicitly white lists roads where the technology is available for use.

I personally think that without some sort of sensors or beacons in the road, autonomous driving via camera and LIDAR sensors is never going to be good enough to achieve level 5 autonomous operation.


I think the world modeling is the part that autonomous cars can obviously be better at. They have all sorts of advantages over humans; access to much more detailed maps than the average person can recall, multiple types of sensors pointed in multiple directions, a system designed to integrate all that information in a way that is advantageous for driving (we have a system evolved for whatever it evolved for) and so on.

It's the sophisticated behaviors necessary to safely drive through that world model that are the issue.

The success of emergency braking systems (which aren't advertised as "driving" assist) is pretty good evidence that the sensors can serve well as input to safe behaviors.


If a human can navigate current roads, why shouldn't a computer be able to? We may need to develop a new types of sensors for vehicles, but that seems like a better/easier plan than installing beacons on every road, everywhere.

And what about beacon maintenance? Seems like most cities have a hard enough time keeping up with pot holes, lines, etc. as is.


If a human can write a novel, why shouldn’t a computer be able to?


I have no reason to believe a computer cannot write a novel.


They probably need to be able to reproduce before they can write a compelling novel.


Beacons is a great idea -- have some standardized beacon system, and a public map of autopilot-approved roads.

Following the beacons safely would be a vastly easier problem than trying to completely replace a human driver in all situations, but it would still give you about 90% of the benefits.


Lines fade away because there is little money for maintenance but beacons that cost multiples are the solution?

The first thing I thought when I read beacons: Hackers are going to have a field day with them. Add malicious beacons to streets and cars will drive off road at high speeds.


Well, look at cat’s eyes (sometimes called Botts’ dots in the US, I think?)

I assume they’re more expensive to install than just painting a few lines, but they’re very robust and long-lasting, and they’re fantastic for human drivers. It’s not a stretch to imagine something similarly useful for computerized cars, that links to a standard road database.

Hacking is a risk, sure. I envisage you’d lock it down by having a cryptographically signed master map; if the observed beacons diverge from the map, the autopilot system would refuse to proceed. (OK, I guess that allows a DoS attack at least.)
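A back-of-the-envelope sketch of that lock-down, using Python's standard library with an HMAC standing in for a real public-key signature (the map format, key handling, and thresholds are all invented for illustration):

    import hashlib
    import hmac
    import json

    # Hypothetical sketch: verify a signed beacon map, then cross-check what the
    # car actually observes against it, and refuse to proceed on divergence.

    def load_verified_map(map_bytes, signature_hex, shared_key):
        expected = hmac.new(shared_key, map_bytes, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, signature_hex):
            raise ValueError("map signature invalid - autopilot stays off")
        return json.loads(map_bytes)       # e.g. {"beacon_ids": [...], "positions": {...}}

    def beacons_match(verified_map, observed_ids, min_overlap=0.8):
        expected_ids = set(verified_map["beacon_ids"])
        seen = set(observed_ids)
        overlap = len(expected_ids & seen) / max(len(expected_ids), 1)
        return overlap >= min_overlap       # below this, disengage rather than guess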


I use Autopilot every day. I think it's amazing and borderline life-changing if you are like me and dread the monotony of a daily commute. I don't think most people realize just how mentally fatiguing the act of driving (constantly making all sorts of micro adjustments) is until they use Autopilot for a while and see the difference for themselves. I certainly didn't. I can now drive for many hours and feel just as alert as when I first got into the car. I was never able to do that before Autopilot.

Yes, you still need to pay attention. It is hard for me to believe that any Autopilot user doesn't know this because you learn it by experience almost immediately. People text and drive all the time in manual cars, but for some reason when they do it in a Tesla, we declare that Autopilot lulled them into it.

While I agree that you need to be ready to grab the wheel or hit the brake on short notice, I disagree about what that means. There is a big difference between having to be ready to do those things and actually having to do them every few seconds. This difference wasn't intuitive to me, but in practice I've found it to be extremely mentally liberating and true beyond question.

I've also found that Autopilot makes it easier to take in all of your surroundings and drive defensively against things you otherwise wouldn't see. One thing that has struck me, as I now see more drivers than just the one in front of me, is how many people are distracted while driving. If I glance continually at an arbitrary driver on my way to work, there is a greater likelihood than not that within 10 seconds they'll look at a phone. That is terrifying to me, but it is also good information to have as I drive -- I am now able to drive defensively against drivers around me, not just the one in front of me.

I've also found I'm more able to think or listen to music or podcasts than I was before Autopilot. I could never get much out of technical audiobooks, for instance, while driving manually. But Autopilot has changed that. I hesitate to say this because I worry that I will give the impression that I am focusing less on the road, but I don't think that's what's happening. My mental abilities feel much higher when I am not constantly turning a wheel or adjusting a pedal. I'm listening to music, podcasts, or audiobooks either way -- I just get a lot more out of them with Autopilot. I think it goes back to the lack of mental fatigue.

Whatever you make of my experience, I urge you to try it on a long drive if you ever get an opportunity. I have put over 60k miles on Autopilot, I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.


> I think it's amazing and borderline life-changing

That's a very unfortunate choice of words.

> I have taken over on demand hundreds if not thousands of times, and I've never had a close call that was Autopilot's fault.

Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.

Even so, it would probably only take one instance where you weren't in time to change your mind on all this (assuming you'd survive), so until then it is a very literal case of survivorship bias.

Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.


> That's a very unfortunate choice of words.

Solid lol, but I stand by it!

> Maybe you are an exceptional driver, to be able to be vigilant at all times even when AP is active.

I've had driving moments I'm not proud of. But it's because I was being dumb, not because Autopilot made me do it.

I think the relevant question is: does Autopilot make people less attentive? I have no data on this. My personal experience is that most drivers are already inattentive, and Autopilot (1) makes it easier to be attentive (for a driver who chooses to be); and (2) is better than the alternative in cases where a driver is already inattentive.

> Even so, it would probably only take one instance where you weren't in time to change your mind on all this (assuming you'd survive) so until then it is a very literal case of survivors bias.

I hope I'd be more thoughtful and independent than that, but maybe you're right. But I don't think my view in the face of a terrible accident should be what drives policy, either.[1]

On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read. I see a much greater quantity of bias in the failure cases that are reported and discussed than in the survivorship stories told (as evidenced by the proportions of comments and opinions here, vs in a user community like TMC or /r/teslamotors). I posted my experience because I think it brings more to this comment thread in particular than my survivorship bias detracts from it.

> Which makes me wonder how that person that died the other week felt about their autopilot right up to the fatal crash.

See: [1]


> I think the relevant question is: does Autopilot make people less attentive?

There are large bodies of knowledge about this gained from studies regarding trains and airline pilots and the conclusion seems to be uniformly that it is much harder to suddenly jump into an already problematic situation than it is to deal with that situation when you were engaged all along.

> On the issue of survivorship bias, I would add that "Man Doesn't Text While Driving, Resulting in No Accident" isn't a headline that you're likely to read.

It's one of the reasons I don't have a smartphone, I consider them distraction machines.


I rented one yesterday (a Model S); it was my first time in a Tesla. There are a lot of things I didn't like - the interior is ugly and the giant iPad console is stupid and hard to use while driving without any tactile switches. But it IS the future. And WHAT a car. Just f'n brilliant as a package.

BUT...I feel like autopilot should only be for traffic jams on highways. It's downright dangerous the way it forces the driver to disengage. The adaptive cruise control is much better as at least you still have to pay attention but the car manages the throttle and following distances efficiently.


> Cruise control, now, that really is useful because it automates a trivial chore (maintaining a steady speed) and will do it well enough to improve your gas mileage. The main failure condition is "car keeps driving at full speed towards an obstacle" but an automatic emergency brake feature (again, reasonably straightforward, and standard in many new cars) can mitigate that pretty well.

Adaptive cruise control also helps; if the system detects a car in front slowing, it'll slow at a roughly equivalent pace to avoid a collision.


I think right now it's only good for stop and go traffic conditions and that's about it. That's still really useful for a lot of people.


So, like the traffic every day along US-101?

This self-driving car craze would be in a very different place if Silicon Valley had halfway decent mass transit...


It’s good in traffic jams. You can relax a lot more than driving yourself or autopilot at high speed. The worst case scenario is a fender bender.


People aren't going to tolerate this kind of stuff. These aren't cosmetic or aesthetic flaws. This is an area that deserves transparency.

I'm out on the road too, and I don't get to make "consumer choices" for other people I share the road with. People are putting their market power into companies that lack the basic integrity to build these technologies with safety and transparency as the first priority, who evidently see peoples' lives as necessary sacrifices.

Please remember that we are talking about vehicles speeding down the road at 70+ MPH.


Nobody's forcing autopilot on anyone. I think instead, tesla will / should opt to disable autopilot earlier in non-ideal conditions like this - that way they won't be held liable either. ATM it really feels like they're being put on trial here.


All the other people on the road are having autopilot forced on them by reckless early adopters.


> by reckless early adopters.

That's a pretty important point. We aren't going to ban alcohol, are we? Aren't we even trying to allow marijuana consumption? Both of these put you in a dangerous situation on the road, yet we aren't talking about blocking their consumption.

It's still the responsibility of the adopters, just like it's the responsibility of the drinkers.

At least doing it safely actually improves safety in the long term. These systems NEED to be driven to improve their performance.


Drinking and driving is however banned.

At what cost do we need the supposed improved performance of self-driving cars?


If you crash while under the influence of a substance, you not only get arrested but you can lose your license and get huge fines.

If you crash while on autopilot, does the same happen?


Personally, I think the company selling the car with self-driving abilities should be responsible. In turn this requires a clear understanding of what the responsibility is: what is expected from a self-driving car; what situations should it handle; what kind of documentation is needed of their development process.

Like anything else which involves safety and environmental damage.


Not on public roads.


The market puts all business on trial, on a daily basis. Users discover flaws in a product, and don't buy it. We SHOULD be critical of flaws in things that we buy, and that are forced to interact with.

Ford was put on market trial after the Pinto started going up in flames. Tesla should be held to similar if not greater standards.


It's not any different than the situation with bad human drivers, but anyone on the same road is exposed to these systems.


It is different, though.

You can't push an update to stop humans from endangering themselves and/or others.

Tesla, however, is able to do exactly that with their cars.

That they haven't, when their technology has been proven to be fatally dangerous, is unconscionable.


And conversely, you can't push an update that causes humans to start endangering themselves. (Unless Tesla has some magic way of preventing regressions, which doesn't seem to be the case)


Autopilot tech in general should be open and shared among companies and universities. The more brains we can put on this the better, imho. If these things happen too often, I'm afraid regulators will start making laws that prevent the use of this tech, which I believe has the potential to really make life better for all... one day.


I have always wondered why each company is developing their own software and algorithms... How will the NTSB certify the software? Or will they? I mean, they certify car safety features now through crash tests, rollover tests, etc... Why is Tesla, or any manufacturer, allowed to modify vehicle safety features through OTA updates without recertification?

I realize it adds more regulation...and I'm sure there can be a middle ground. But I foresee a scenario where an OTA update is pushed and the next morning there are car crashes everywhere.


Patents and the ability to earn money by licensing out technology. That, or hoping they are the best and get more customers through it - I think that's currently the case for Tesla, selling the only commercially available semi-autonomously driving car ATM.


All manufacturers sell semi-autonomously driving cars. At a minimum cruise control, but more advanced features like e.g. https://www.toyota.com/safety-sense/

Tesla sells a feature set that sounds slightly more advanced on paper, but in practice the benefits are pretty comparable.


> I think that's currently the case for Tesla, selling the only commercially available semi-autonomously driving car ATM.

"I think that's currently the case for Tesla, selling the only commercially available semi-autonomous [electric] car ATM."

FTFY


Cadillac has a similar system out in production.


The regulators have been really easygoing so far with automated cars. I strongly suspect it's because they take the optimistic comments of the tech industry at face value, that these technologies are road-ready. Meanwhile, I think Tesla and Uber perceive this as more of an open alpha. This will not end well.


For someone in the medtech space, it's been interesting to see how loosely (not at all?) marketing is regulated in automotive and tech in general. Were the FDA regulating automotive, there is absolutely no way that Tesla's driving assist package could be marketed with the name "Autopilot", for one. More broadly than just marketing, it's also amazing to me that Tesla could have released a feature (I know they've revised it since) that was designed for human supervision but did not require hands on the wheel for long stretches of time. What did the risk management look like there?


I think (at least in the US), the relationship the automakers have with regulators is such that if the automakers think they can make money by selling a feature, they will be able to get the regulators to come up with a way to allow it.

Not necessarily one as simple as OTA updates whenever the manufacturer feels like it.

As far as regulations preventing use of the tech, if level 2/3 systems end up causing more crashes than human drivers, we shouldn't allow use of them.


I would guess people would just stop using it


Serious question: how will it make life better for all? I haven't given that question a lot of thought, and I haven't heard legit answers from others yet.


If autonomous systems reach the point where they are measurably safer than human drivers, there's an obvious benefit to using them.

Again it's an if, but lots of people drive long distances and don't enjoy it. An autonomous vehicle would relieve them of the driving task. It would also add a transport option for people that are unable to drive long distances.

It's likely to reduce the cost of taxi like services, as drivers are currently a significant portion of those costs (autonomous driving turns pricing into almost a pure calculation about return on investment).

I guess you might dismiss those as not being "legit" enough since they are all contingent on the systems working well.


No, I'm with you here. Just thinking through the other side of some of these. Lost jobs. Not sure "safety" is a big enough issue here. On a larger scale, I'm significantly concerned that society (thru tech) is becoming increasingly optimized for comfort and safety. That bothers me. It removes so much of what is good about life. I don't want guaranteed life until 100. I don't want comfort all the time; that stunts growth. There is a lot of grey area here, of course. But this has been on my mind for a while now.


Losing truck driving jobs isn't removing what is good about life. Neither is too much safety. The people whose lives are being saved by technology aren't worrying about having too much safety. Only those who are privileged and don't have to worry about anything, realistically.


Living in a city where people park on the side of the road, I can't wait for autonomous cars to completely transform the urban space.

Currently, there are two lanes of parked cars, plus two sidewalks and (usually) two lanes for traffic.

Autonomous driving would allow cars to park themselves off-site, such as underground garages a block or two away. That alone could double the space available for pedestrians, or other uses of public space such as outside cafes, green spaces etc.

The ability to quickly call a vehicle, adaptable to your current needs (minivan for family on Saturday, open two-seater for a summer date) could also start a major move away from car ownership, reducing the total number.

Improvements in traffic management (shorter distances between cars, coordinated starting/stopping, interleaving intersections) also have the potential to allow us to reduce 4-lane roads to just two. We might even find ways to make one-way streets work for most residential area. All this again frees up public space.


All valid points, thanks


If done well (there's a long way to go still) there will be fewer traffic accidents, fewer traffic jams, etc.


Fewer dead people sounds good.


I don't know about that. Let's talk unemotionally about it. People die. Not all of them live until natural death. That's ok. That's how life works. There is plenty of good that comes from this. Is a goal of keeping every person alive until X# (90? 100?) a good one? I'm not sure it is. Simply saying "people dying is bad" isn't a very reasoned thought if you take away the assumption that everyone agrees with this.


Sure, fewer people dying is a bit undifferentiated if you want to go for ethical discourse. I would probably be okay with a few extra dead people if that increased the overall happiness of the remaining people. I'm not sure that having people die in traffic is a good way to achieve that.


I get that, but this product development aimed at prolonging life, comfort, and safety through technology should not be taken lightly without this discussion right here (ethical discourse, as you call it). I can't divorce the two in my mind.


> Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.

This is why the idea of self-updating cars terrifies me. I'd never allow autopatching on a production server - Why would I allow it on hardware I'm betting my life on?


I'm amazed that this is even being permitted. And in almost all territories: practically no one has banned this practice. It seems absurdly risky, especially once someone finds a security flaw in the Tesla software updater.


It adds a whole new level of frightening to the term breaking change.


"Move fast and break things". Taken literally.


“Move fast and break humans.”


Move fast and break barriers.


I can't understand the blindness of the Tesla/Elon fanboys in that thread; the comments on that thread are so defensive of Tesla and still show little concern for safety. Despite fairly damning video evidence, a fatality, many reports of accidents and near accidents...


People have a lot invested in Tesla-- sometimes emotionally, sometimes literally. It is difficult to be objective in that case.


What does this say about the quality of the automatic emergency braking, if it can't detect a substantial metal object directly in front of the car?


Autopilot has trouble with stationary objects.

https://www.wired.com/story/tesla-autopilot-why-crash-radar/


What would it take to get Tesla to admit the product is unsafe and should be disabled until they get it right? Or will they simply plow on until a regulator steps in?


> What would it take to get Tesla to admit the product is unsafe...

People should start to vote with their purse. In other words, stop buying these cars, or start selling their stocks.


This philosophy, though admirable in support of personal liberty, ignores that a commons exists.


How is that OK? O.o


So it can't drive for shit. Right? "Autopilot" my ass!


Depends on how close it is and what it is. Have to weigh the energy of the impact against the pitch introduced by braking. You need the bumper to take the hit, so that it and the crumple zones can do their job.

However, for a wall or highway barrier, it's probably almost always worthwhile to shed as much energy as possible. You're not going to pitch yourself under the barrier to where the bumper won't take the hit, the way you might when colliding with another vehicle.


> Have to weigh the energy of the impact against the pitch introduced by braking. You need the bumper to take the hit, so that it and the crumple zones can do their job.

Eh? The vast majority of real frontal impacts will be under heavy braking. If a few degrees of dive under brakes is enough to compromise the crumple zones' effectiveness then your car is bad (and I don't believe any Teslas are bad in this way.)


I believe the parent is talking about a small overlap crash?

Small Overlap tests: this is where only a small portion of the car's structure strikes an object such as a pole or a tree, or if a car were to clip another car. This is the most demanding test because it loads the most force onto the structure of the car at any given speed. These are usually conducted at 15-20% of the front vehicle structure.[1]

Modern cars are surprisingly good at small overlap crash tests; see this video someone linked on HN a few days ago: https://www.youtube.com/watch?v=DHlj8-JcWa8

1. https://en.wikipedia.org/wiki/Crash_test


I mean, this advice is pretty firmly held for hitting things like animals. Albeit for other reasons -- it helps prevent them from going over the hood and towards the windshield.

Generally the pitch won't matter that much in vehicle-to-vehicle collisions. Only when the pitch would angle the front bumper completely under the other vehicle, which I admit isn't much of a concern. And not really a concern at all in vehicle-to-barrier collisions.

However, if we're talking about idealized automated driving systems, I hold by my original assertion. I would expect such a system to be able to correctly analyze when releasing the brake right before collision to reduce pitch would be beneficial.


Accelerating into an animal to increase the car's pitch is something I had heard growing up in deer country, but it seems to be a myth: http://www.discovery.com/tv-shows/mythbusters/mythbusters-da...


That seems like a silly idea; the pitch change is not nearly as drastic as during braking. Cars decelerate faster than they accelerate. I don't know of any car that can go 0-60 in 100 feet, but there are plenty of cars that can go 60-0 in that distance:

http://www.motortrend.com/news/20-best-60-to-0-distances-rec...

Advice I've seen is to brake as hard as you can to shed energy, then release the brake to correct the pitch as close to collision as you can time it. No acceleration involved.
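
For concreteness, here are the back-of-the-envelope numbers behind that claim; a minimal sketch assuming constant (de)acceleration, which is a simplification:

  # 60 mph to 0 in 100 feet vs. 0 to 60 mph in 100 feet, with
  # constant acceleration assumed. Units converted to SI.
  MPH_60 = 60 * 0.44704             # 26.8 m/s
  FEET_100 = 100 * 0.3048           # 30.5 m

  a = MPH_60 ** 2 / (2 * FEET_100)  # v^2 = 2*a*d  =>  a is about 11.8 m/s^2
  t = MPH_60 / a                    # about 2.3 s

  print(round(a / 9.81, 2), "g over", round(t, 2), "s")  # 1.2 g over 2.27 s
  # Braking at ~1.2 g is within reach of ordinary tires and brakes;
  # accelerating at a sustained 1.2 g (0-60 in ~2.3 s) is hypercar
  # territory, which is why stopping distances beat launch distances.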


You can pry the steering wheel out of my cold dead hands. Might be true that on paper these things cause fewer accidents and fewer deaths, but I want to be in control of my life. If I die in a car I don't want it to be because of a software bug.


I hate to break it to you but it could still happen to you because both you and those Teslas are part of the same traffic stream. So you could easily get killed because of a software bug in someone else's vehicle.


touche. seems like they're more interested in concrete barriers than other cars at the moment.


Wait for the next software update.

And remember not to have an emergency requiring you to stop on the side of the road:

[1] https://www.youtube.com/watch?v=fc0yYJ8-Dyo

[2] https://www.theverge.com/2018/1/23/16923800/tesla-firetruck-...


That one is ugly. If AP can't even steer around huge trucks parked on the road then that's a very serious problem.

Interesting how everybody claims it is 'the driver's fault' when in fact at that moment the driver is the autopilot software. It - and not the person sitting in the left hand front seat of the car - is driving the car.


I honestly think the fault is in the naming conventions. When you call it "Auto Pilot" it doesn't seem like you have to drive at all. Someone like Cadillac calls it "Super Cruise". I think this makes it sound much more like what it is. It's driving assistance, not self driving.


You already don't have control over any of the other drivers on the road, some of whom are unlicensed, impaired or even homicidal or suicidal.

Not to mention your car already has software that can kill you, even if it doesn't steer.

Even ignoring software, hardware can have "bugs" in it too that kill you. I'm not sure why you think engineering mistakes are "worse" when they are in the software vs the hardware.

Basically, no matter what it "feels" like, you are subjected to many many many forces outside your personal control.


Yeah there can be other bugs in my car, or other people that do something stupid, but I feel like I am more in control. It is unlikely I die from my engine blowing a gasket or my ECU completely dying. It might be a slight danger to those around me while I coast to the shoulder. But at least it's not going 60 mph into the shoulder...


You're severely underestimating danger of engineering mistakes in hardware.

Faulty Takata airbags have killed at least 20 in the US and injured over 100 since 2013. Then there was the Ford-Firestone fiasco that killed hundreds and injured thousands. The Ford Pinto ended up being a death trap. I also recall there were several lawsuits around faulty guardrails that have killed. That's just off the top of my head.

The idea "software bugs kill, hardware bugs don't" is just plain wrong. I also can't even fathom an understanding of why it's "better" to be killed by someone whose overdosing on heroin behind the wheel than being killed by an engineering error. Especially given the hypothetical case that was proposed when engineering errors are less common.


I think the idea was "why add another layer of things that might kill me on top of existing ones?" In this regard, SDVs deliver the exact opposite of their marketing pitch.


I wouldn't even say it's that they're safer on paper, it's that they're purely hypothetically safer. In the same manner that you're "safer" when entrusting your data to the Googles and Facebooks of the world. Smart people know what they're doing, right?


I mean I don't disbelieve the figures that they reduce traffic accidents in vehicles that are using auto-pilot. It greatly reduces trivial human error. But it doesn't change the fact that it sucks when people die from beta testing a software update they didn't ask for.


Most modern cars are drive-by-wire, entirely relying on computers and the software. And their software is huge blobs of proprietary compiled code. It's guaranteed to have bugs.


1. It's more than most, it's almost all modern passenger cars that are drive-by-wire. But drive-by-wire is a term of art that specifically means that the throttle is not a direct mechanical linkage. Very, very few cars are steer-by-wire and even fewer are brake-by-wire.

2. You're using "proprietary" as a pejorative. It's not proprietary, compiled code that has bugs. All software is guaranteed to have bugs.

3. An entirely insignificant number of accidents are caused by bugs in the drive-by-wire control system. (Even taking into account that an insignificant fraction of cars on the road are drive-by-wire as you mean it.)


Very few modern cars actually decouple the steering wheel from the power steering mechanism fully, the way a fly-by-wire jet airplane no longer has cable linkages from the control stick to the control surfaces.


Yeah, I guess. That buggy code doesn't drive people into walls or give people the feeling they can read the newspaper while doing one of the most dangerous things they do in their day.


Probably not that huge, really. Embedded systems on most vehicles are fairly minimal and reliable, at least compared to modern consumer operating systems and (apparently) Tesla's systems.



> You can pry the steering wheel out of my cold dead hands.

There's a non-negligible chance that will actually happen.


If Tesla autopilot relies on good quality road markings, then it's not yet usable in 80% of the world.


It's not usable anywhere. What if I draw a fake line on the road? Will it follow it? What if you have wet paint and another vehicle smudges it diagonally across the lane?


When I was a kid there was a cartoon where a truck painting a white line in the middle of the road drove off a cliff for some reason (and therefore, so did the line).

The punchline was that all the other cars also fell off the cliff because they were following the line instead of looking at the road.

It's crazy to think, that's where we are now with so-called self-driving cars.


When I was a kid there was a computer game called Lemmings...

What if all the connected autoupdating self driving cars suddenly learn to follow each other off a cliff in pursuit of an optimisation gone wrong.

It would be ironic if the Butlerian Jihad[1] were kicked off as a result of Elon Musk's machinations.

1. https://en.wikipedia.org/wiki/Butlerian_Jihad


Exactly. This is what I’ve been saying. It is so dangerous. I love Tesla but I think autopilot is currently a death trap.


Naive centering certainly is. That's exactly what caused this behavior. Just about any scenario where the car tries to split the difference between widening lane lines has the potential of causing this problem.

To make matters worse, it's trivial to run across a case where the road curves AND splits, causing the 'neutral' path to go straight into a barrier… which is exactly what's seen in the video. The straight path would be toward the split-off road, but it veers slightly to the right before splitting.

Makes me wonder if GPS can play a role in determining whether the lane markings are liable to seduce the autopilot into splitting the difference. Given established roads, the car should have some clue this scenario is approaching. If we can see a 'V' approaching on a simple freaking Garmin, the car ought to be able to have that information.
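
A toy illustration (not Tesla's actual algorithm, and all numbers invented) of why naive "center between the lines" behavior ends up aiming at the gore point once the lines diverge:

  def naive_center(left_x, right_x):
      """Steering target = midpoint of the two detected lane lines."""
      return (left_x + right_x) / 2.0

  # Normal 3.7 m lane: lines at -1.85 m and +1.85 m, target stays at 0.
  print(naive_center(-1.85, 1.85))

  # At the split the left line follows the exit and bends away faster
  # than the right line follows the main road.
  for ahead in range(0, 60, 10):        # meters past the start of the split
      left_x = -1.85 - 0.10 * ahead     # exit side, diverging quickly
      right_x = 1.85 + 0.05 * ahead     # main road, drifting slightly right
      print(ahead, round(naive_center(left_x, right_x), 2))
  # The "centered" target drifts steadily into the widening gap, i.e.
  # toward the barrier at the tip of the gore, instead of committing to
  # either the exit lane or the main lane.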


> I think autopilot is currently a death trap.

I think this is a bit of an exaggeration. As long as the driver keeps paying attention and uses it as a driving aid, not a driver replacement, everything is fine. It's the moment that people start relying on it doing something that it wasn't built for, that the problems arise.


> I think this is a bit of an exaggeration. As long as the driver keeps paying attention and uses it as a driving aid, not a driver replacement, everything is fine. It's the moment that people start relying on it doing something that it wasn't built for, that the problems arise.

I think this is a bit too optimistic. People will start relying on it to be an autopilot. I think most people see this as the desired result: a car that can drive itself. Do you really think anyone wants "a car that will drive itself while you are paying full attention to every detail"? Otherwise, what is a driver really gaining from this? They're still expected to pay just as much attention (if not more) and I'd bet it's even more boring than regular driving (no interaction from the driver means it's like the world's most boring VR movie)

Humans are not machines, we love to find the lazy/easy way and we love to do things rather than stare at the road, eventually people will grow complacent (hopefully not before the tech is up-to-snuff).


> Do you really think anyone wants "a car that will drive itself while you are paying full attention to every detail"? Otherwise, what is a driver really gaining from this? They're still expected to pay just as much attention (if not more) and I'd bet it's even more boring than regular driving (no interaction from the driver means it's like the world's most boring VR movie)

EXACTLY. They're at even more risk for accident due to inattention because it's so hard to focus on doing nothing.


This puts Tesla at odds with itself.

The feature is named Autopilot after flight systems where the pilot doesn't need to be constantly hands-on. It is marketed as an autonomous driving system that allows the car to drive safely without human intervention.

In the same breath, Tesla says that it isn't really an autopilot system, and that even though they market it as autonomous driving, it is still essential that you act as if you were driving.

As you've stated, it's a tool for assisting drivers. So why do they market it as fire and forget?


They shouldn't market it as "autopilot"


Volvo does it too, but nobody seems to be complaining about Volvo.


Volvo calls it Pilot Assist when you actually have the feature on your car


Volvo's tech hasn't killed anyone yet.


I've seen construction crews paint new lines on the road e.g. to reroute traffic due to one lane being closed for expansion. A human driver can easily tell the difference between the fresh new line they should follow and the worn out old one. Can a Tesla?


> Can a Tesla?

My 2015 S 70D can't, but then again the problem only arises if I ignore the road works signs and continue driving on autosteer in circumstances where the UI has clearly told me not to.


I have a pretty simple lane keeping aid in my car that keeps me within the lines. Yesterday it suddenly steered towards a wall. I think it may have been confused by either glare (low sun right in front of me) or a shadow line on the road.


I have it too, it makes some jolts from time to time. It will also start blaring about ten seconds after me not providing any steering input, though. It's lane ASSIST, not automatic driving.


What make & model?


Opel Astra. It also has a 'bug', where at a certain location the lines on the road trigger the emergency braking feature, every single time.


> It also has a 'bug', where at a certain location the lines on the road trigger the emergency braking feature, every single time.

Ouch. That's really harsh on following traffic. Now, obviously they shouldn't be following closely enough for that to be a problem, but in practice I'm pretty sure that if I stomped hard on the brake a couple of times per day with regular traffic following, it would not take long before someone rammed into the back of my car, and if not mine then the one behind it.

It's also a great way to start traffic jams.


If you draw fake markings on the road people will follow them, especially if the sun rises parallel to the road.

I have personally watched this happen: last year (I think?) they were repainting the lines on Interstate 64 near Short Pump, and for about a week that summer my commute there was insane, just a bunch of cars going 70 MPH with no real lanes.


This isn't new, but what if it snows? I live in Calgary and snow covers our roads for about 30-40% of the year.


Your question would be answered if they would call it driving assist and not auto pilot. Then you'd understand that you would not turn it on in snow. Just like Cruise control.


If my 2015 S 70D can't see enough to tell that there is a lane then it won't allow you to turn on autosteer. Generally if the road is covered in snow then it won't turn on, also it will turn off if the lane markings disappear.


I wonder how it works when there is some road work, and there are multiple overlapping lines at once.


IMO it should not - it should sound an alarm, have the driver take over, and shut down.


I don't think fake lines is a genuine concern. What if somebody plants anti personnel mines under the road? What if your tesla just assumes it's a normal road and drives over it?


I agree. I think this technology needs smart roads to succeed, whatever this entails.


I wonder what "AI updates" are going to mean from an insurance point of view. Don't know about the US, but some countries in Europe insure the driver and not the car (eg. Ireland). How far fetched is for insurers to argue that an autonomous car software update constitutes a change of driver and demand that the updates must first be validated from them before installing?


It's a good point.

Here in the UK I am insured to drive vehicle A as specified on my insurance documents. If I modify that vehicle my insurance policy is void and I need to inform the insurance co. and probably pay a premium.

In your scenario the same could be true on a software update.


And it wouldn't be unreasonable from the insurers point of view, since any update carries a risk. What's going to happen I think is that insurance companies will probably need to vet the updates before allowing you to install them, much like android phone vendors. Running un-authorized updates will void your policy.


I don't understand why people use Autopilot, or why Tesla has released it.

To use it safely and according to Tesla's instructions, you have to remain 100% vigilant at all times - so you might as well just drive yourself.

And if you fail to remain vigilant, which is likely since you are sitting passively in the drivers seat, you might kill somebody.

Where's the upside? Why on Earth would I want to use such a product?


> Where's the upside? Why on Earth would I want to use such a product?

I use it because it makes my driving less stressful. The car and I work together which means I don't have to work so hard to keep to the speed limit, to keep a sensible distance from the car in front and stay in lane.


So no different than a regular adaptive cruise control.


https://www.cnbc.com/2018/01/31/apples-steve-wozniak-doesnt-... "Man you have got to be ready — it makes mistakes, it loses track of the lane lines. You have to be on your toes all the time," says Wozniak.

"All Tesla did is say, 'It is beta so we are not responsible. It doesn't necessarily work, so you have to be in control.' "Well you that is kinda a cheap way out of it."


2018 the wild wild west of robot cars.

Pay an extremely high cost including your life to ensure technological advancement. Is that not part of their marketing? If not SNL needs to do a skit!

I can see the skit now, "You looked S3XY in that Tesla," "Did you get the update last night?" "Oh you're right, and it made me feel so S3XY until it drove me straight into a barrier at 60 MPH."


The ability to discern when the car needs to stop for a stationary object right in front of it needs to be an ironclad requirement for self-driving.

It seems that sophisticated LIDAR is currently the best way to achieve this, but LIDAR is expensive. So companies like Uber and Tesla skimp on the LIDAR, and build self-driving cars that can more or less follow lanes but plow right into obstacles? Whoa.

The feasibility of widespread "self-driving" tech is going to advance only as the cost of LIDAR falls.


I wonder why they can't just use a less sophisticated lidar. Just have a few fixed beams that just look ahead. Do you really need a scanning system to detect an obstruction?


Lidar isn't reliable in conditions that are less than ideal (rain, fog, snow, etc). If you build a car that is dependent on real-time lidar observations, it is worthless much of the time in some climates and some of the time in all climates.

Most companies are pursuing a Lidar approach because dev_speed(Lidar approach) > dev_speed(camera approach). Tesla is pursuing a camera approach because max(camera approach) > max(lidar approach).


In such conditions, cameras are not unreliable, they're downright useless. Heck, even people have trouble driving during snowfall. I have a feeling that this is a massive set of scenarios that aren't even edge cases, yet SDVs are completely unprepared for them. ("No snow in coastal California - it doesn't exist at all!")


I think you may be imagining what a human sees in a foggy camera image, rather than what a neural network can see in it. In fact, atmospheric obstructions degrade Lidar's signal much faster than a camera's.

If you check out NVidia's DriveNet demo from a couple years back, you'll see that they already have a vision-based NN that outperforms humans in fog and snow[1]. We can debate whether people should drive in those conditions at all, but today's expectations will be the baseline that SDVs are up against, and cameras are much better suited than Lidar to achieve that baseline.

[1] Start at 7:21 https://www.youtube.com/watch?v=HJ58dbd5g8g


What human sees in a foggy camera image is something quite different from what we see directly - that's the Uber-scapegoat-camera-human-wouldn't-have-seen-anything-either argument again.

To the effect of "nobody should be driving like that" - I have driven at walking pace at times, considering faster speed unsafe. I do agree that would be a better task for a SDV, iff it can full fill the promise of that marketing video.

I also agree that various conditions are suited to different sensor types - thus for an autonomous vehicle, multiple sensor types are needed.


I wonder if Tesla has a way to visualize how disruptive a change to Autopilot is. They've supposedly got all this driving data lying around. Would they be able to apply the updated algorithms to that data and quantify how many decisions would be affected, and by how much? i.e. 30% of past decisions would have been changed with the new algorithm, 5% changed by an angle of over 2 degrees.

On a similar note, shouldn't customers be getting release notes for these updates? I let my customers know when we're updating a web page. Tesla should surely be informing them when cars are going to start lane centering.
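
Something like that comparison could be done offline if per-frame steering commands are logged; here is a rough sketch, where the log format and the 2-degree threshold are my own assumptions, not anything Tesla has described:

  def summarize_changes(old_angles, new_angles, threshold_deg=2.0):
      """Compare steering angles (degrees) produced by two software
      versions for the same recorded frames."""
      assert len(old_angles) == len(new_angles)
      deltas = [abs(n - o) for o, n in zip(old_angles, new_angles)]
      n = len(deltas)
      return {
          "decisions": n,
          "changed_pct": 100.0 * sum(d > 0 for d in deltas) / n,
          "over_threshold_pct": 100.0 * sum(d >= threshold_deg for d in deltas) / n,
          "max_delta_deg": max(deltas),
      }

  # e.g. 75% of decisions changed, 25% of them by 2 degrees or more
  print(summarize_changes([0.0, 1.0, -0.5, 3.0], [0.3, 1.0, -2.8, 3.1]))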


The problem is you can't simulate the video that happens after your new decision.

So this steering update would take you 0.3 degrees to the right. So what? Well those 0.3 degrees might change the angle of the line which influences the car to steer another 0.3 degrees, etc. But without that followup video (which you can't simulate from a 2D recording) you don't know how it would react to the change in environment.

The only way to regression test these things is in a simulated 3D environment (or miles and miles of real test track)


Following the branch down is obviously impossible, but simply detecting that there are new branches--perhaps a greater than expected number--could be useful information. For both the engineers and the operators.


I really don't think they could do this effectively unless each car also uploads 20GB per day of 60fps high quality video, which would collapse the cellular networks they use.


I don't think they have that much data that's useful for this purpose. They may have billions of miles of some kinds of data, but they only upload occasional samples of the camera video that would be required to test a feature of this sort. I can't find the exact link now, but someone reverse-engineered it and it was something on the order of a few seconds a few times an hour.


How do you define a 'decision' in something like autopilot though? Also, if I remember correctly, only a couple of images per second are sent to Tesla. If there's a fraction of a second where it may decide to run into something, it may not actually appear in test results.


One way is to extract the drive path for some given sensor input. Then you can classify the drive path as safe, marginal, collision, severe collision, etc.

Changes from safe to safe aren't interesting. Changes in the wrong direction are obvious issues.
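
A bare-bones sketch of that idea, assuming you already have a path planner for each software version and a classifier for the resulting paths (both hypothetical here):

  SEVERITY = {"safe": 0, "marginal": 1, "collision": 2, "severe_collision": 3}

  def regressions(cases, old_plan, new_plan, classify):
      """cases: recorded sensor snapshots.
      old_plan / new_plan: map a snapshot to a planned drive path.
      classify: map (path, snapshot) to one of the SEVERITY labels."""
      flagged = []
      for case in cases:
          before = classify(old_plan(case), case)
          after = classify(new_plan(case), case)
          if SEVERITY[after] > SEVERITY[before]:  # only worse outcomes matter
              flagged.append((case, before, after))
      return flagged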


Scary as hell for me since I go through this exact location on a regular basis. The location is the Seattle I-5 northbound express lanes, where the express lanes end and merge into the main freeway at Northgate.


  Previous autopilot versions didn't do this. 10.4 and 12 do this 90% of the time. I didn't post it until I got .12 and it did it a few times in case it was a bug in 10.4.
It looks like software updates soon won't just break your workflow. They might break your spine or your head or take away your life entirely.


Musk wanted to regulate AI to combat existential threat; in reality, it's not super intelligent self-aware AI, but buggy/shitty AI that's trying to kill people.


We will be killed by AI developers and the Dunning-Kruger effect, not armies of Skynet T-800s.


That's a Greek Tragedy-styled scifi story waiting to be written.


I think we are better than that


AI will bleed us slowly without alerting us to its presence.


Everyone's afraid that AI will become too smart and take over the world. They should be afraid that it's too dumb and already has.


The scariest bit for me: in the comments it's claimed that it has only done this for the last two software updates. Currently the worst I have to worry about when getting an OTA update is that it may make my phone less stable. Apparently in the future it may also influence precisely how my car tries to kill me.


Tesla makes amazing cars, but their attitude to self-driving is terrible. Waymo's approach is way mo [sic] safer.


Perfect example of a system doing what it's supposed to, but that's not the right thing. I don't believe it was headed toward the center barrier - I believe it was neatly avoiding it. Here's what I think was happening:

The car did an admirable job trying to stay in that left lane. Each time it saw a red and white diagonal lined barrier, it gracefully edged a comfortable distance away.

The problem is, most humans catch on: "Gee, three scary-looking barriers in a row! Maybe they're trying to tell me something. Maybe this road is closed."

But I assume the car's system perceived each one as an individual, isolated obstacle to be handled without making inferences. And it did that much well, and when it realized the third or 4th barrier was impassable, it had to cross to the other side of the yellow barrier, and it did that well, too.


Could someone fill me in on collision detection? My 2016 BMW has Front Collision warning which can be set to min/med/max range. I've got it set to max range. It can detect a solid object that is coming up to collide with my car and warn me via sounds and HUD. Do Teslas not have some method of solid object collision detection that should supersede any driving directive? As per my previous comment, could a prankster paint a set of white lane markers that veer into a concrete wall or even off a cliff? That's freaking scary to me that these systems are so reliant on lane markings. At highway speed, a distracted driver would have little time to be alerted, assess the situation and take the correct corrective action before a collision occurred.


I would argue that these incidents indicate a need for aerospace levels of quality assurance.

With the right tooling this does not need to be expensive.

I hope that incidents like this encourage people to develop open source tooling to support safer software and higher levels of quality.

We can see how lives might depend on it.


Aerospace QA is expensive by definition, because reliability nines generally are, whatever the industry: an additional nine costs 10x. You can go cheaper, but with a corresponding decrease in reliability.


A lot (but not all) of the costs are due to additional process steps that can be automated.

I think we can dramatically increase safety and quality by selectively adopting some of the concepts from aerospace and aggressively automating them.

Traceability, NLP for requirements, etc.


That is probably true, and might help - but that only gets you so far; you can't program your way out of everything. For high reliability, you'll need redundancy (and thus fallback and voting and whatnot), which will a) drive up HW costs and b) increase complexity.

I do agree there are some low-hanging-fruit opportunities to learn from aviation, but doing that is insufficient.


I agree with you on the topic of redundancy.

I am probably not communicating it very well; but my main motivation is a reaction to just how much better our tools need to be.


They should have settled https://arstechnica.com/cars/2017/04/tesla-owners-sue-enhanc... last year instead of dragging it on. This round of videos is more damning than ever.

https://www.pacermonitor.com/public/case/21195146/Dean_Sheik...


This seems to be one of those problems that you can solve 90% of the way very quickly with a seemingly simple set of rules:

1) stay in your lane

2) don't hit other objects

3) obey traffic laws

Then that devil that is the details shows up...


The reddit thread (of largely tesla drivers) seems to have concluded that the resolution is “stay alert especially when updates get pushed.”

To me it doesn’t capture the gravity of the situation.


Whoever approved this change in the code needs to be out. If I were Karpathy, I'd be very worried right now about being held liable in a civil or criminal suit.


I assume they have a test suite that allows them to run 1000s of situations on the latest version of software to see if it crashes/almost crashes on any of them.

If so, it seems like it would be fairly easy to add this situation to the corpus... especially if they are recording data from their cars live.

They should have 100 people going out there and recording this lane split situation into their test data ASAP.


Tesla says that the driver needs to be constantly watching the road in autopilot mode. But what kind of autopilot is that? If I have to constantly pay attention, I might as well just drive. I think it's safer, too, since I am bad at paying attention for something that rarely happens.


Here's the exact location where the video ends, in case anyone wants to go "troubleshooting" this weekend:

https://goo.gl/maps/Y2kag6yYc3N2


Why isn’t the traffic ahead of the car taken into account? When I’m driving, following the car in front is a pretty good way to decide where it’s safe. Conversely, going somewhere no car ahead is going seems like a bad idea.


They need to move those damn barriers. This is happening a lot. /s


I just read about this matter at https://www.lemberglaw.com/self-driving-autonomous-car-accid... and I don't think these autonomous cars are ready to be released to the public yet. Automakers and lawmakers should think about regulation and the law.


Does Tesla record when people override autopilot? Seems like this would be a good source of information about sections of road with obvious problems.


An honest question. Are any of these autonomous driving systems open source or peer reviewed in any way? If not, isn't it really weird that we are talking about regulations even though the underlying technology is not even peer reviewed? How do we know (mathematical proofs, etc.) that a self-driving car manufacturer has done a good enough job if all of that is proprietary?


In one of the earlier discussions about this (like, last year) I remember an engineer's comment about how one of the major manufacturers' (Toyota maybe?) code base for the monitoring system of the car was something to behold. Along the lines of hundreds of global variables and basically nonexistent standards. Now, I don't say that their or Tesla's autopilot is the same, but it wouldn't really surprise me either.


No, no, yes, we don't.


One way to help might be to disable AP following an update for X days or weeks. Let AP run in a disabled, simulation-only mode while you drive your normal routes. If your actual course of action in any situation diverges beyond some threshold from what AP was planning to do next, auto-report the video feed to Tesla and prevent AP from being used in an X-mile radius around that area.
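
Roughly, something like this per control cycle; the 2-degree threshold, the GPS tuple and the report hook are placeholders for illustration, not anything Tesla actually exposes:

  def shadow_check(planned_angle_deg, driver_angle_deg, gps_fix,
                   threshold_deg=2.0, report=print):
      """Run while AP is in simulation-only mode and the human drives.
      Returns False when the planned action diverges too far from the
      driver's, so the caller can log it and geofence AP off nearby."""
      divergence = abs(planned_angle_deg - driver_angle_deg)
      if divergence > threshold_deg:
          report({"lat": gps_fix[0], "lon": gps_fix[1],
                  "divergence_deg": round(divergence, 1)})
          return False
      return True

  # Example: AP wanted to steer left at a lane split, the driver held course.
  shadow_check(planned_angle_deg=4.5, driver_angle_deg=-1.0,
               gps_fix=(47.708, -122.328))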


It seems that with current developments, that's a complicated way of saying "turn it off."


Does anyone know if all these recent accidents are with the old MobilEye technology or the new Nvidia system? Or was it a rumour that Tesla was moving towards Nvidia rather than MobilEye (I think MobileEye was the one that didn't want to renew their contract?)?


I would be careful calling it just the MobilEye system or the Nvidia system and assigning blame accordingly. They both create autonomous driving components, but at the end of the day, Tesla, Uber, etc. write their own software, and are responsible for the whole of the systems that are actually in their cars.


It's hard to be sure in every case because most 'news' stories and most HN comments don't specify even the model of car involved. But the incident in which the driver was killed running into the divider was a Model X, so that means hardware version 2 and Tesla's own software rather than the Mobileye system that is in, for instance, my 2015 S 70D.

See https://en.wikipedia.org/wiki/Tesla_Autopilot#Hardware_2


Does anyone know how software updates are tested for such autonomous driving software? Is there a possibility for this driver's video to be added to a regression suite and tested, or can they take the data from his car and build a regression test based on it?


I sure hope they do have a regression suite with thousands of real world sensor-data samples in some normalized format made amenable to unit testing. Assuming such a normalized format is possible to generate, I expect these samples to be open-sourced and standardized. That would give the NTSB a systematized certification procedure: just run the new version of the software on all the known test cases before allowing it to be released.

Still, I suspect that may not be enough - a new weird road condition or construction site can be created at any time. This is why I believe self-driving companies would be better off spending their budget upgrading and certifying specific routes as safe for automation. Certified routes could be subject to special construction protocols and regular road quality audits that ensure automated cars won't run into a non-automatable condition on those routes. It's also easy to verify and confirm that a car works on all certified routes, rather than trying to test the entire North American road network.


I think Tesla needs to build a high-performance cell network and upload continuous 1080p stereoscopic video at 60 fps and factor that into their training. No sensor data can be a substitute for high-quality video.


I don't mean to call the OP a liar by any means. But does anyone know if there is any evidence that this video was taken in a Tesla with Autopilot engaged? Just want to be sure about it before I form a judgment against Tesla.


Ha ha, I don't see how this is supposed to improve safety. Maybe it's safer than having no driver at all, but if we're talking about being better than a typical human, then the safest thing to do is to switch this thing off.


It still could be safer than most human drivers. We only have a few examples of their AI misbehaving. If it consistently crashed into the barriers, we would have thousands of deaths per day.


From the linked thread:

> This is my commute home- I know it does it about 90% of the time. It does it almost every evening and has for the last few weeks since 10.4 rolled out.


Calling this well-intentioned series of mistakes "Autopilot" has got to be the dumbest fucking thing Tesla has done.

Actually, I stand corrected. Their stock gamble is about as dumb too.


My hands started to sweat watching that video - this is not good.


One thing baffles me. Where is the insurance industry in this? I would expect them to drastically increase premiums in light of these developments.


Here in Norway insuring a Tesla is no more expensive, in fact probably cheaper, than insuring a petrol car in a similar class. I'm pretty sure that the insurance companies wouldn't do it if it meant losing money.


Wasn't there (or perhaps still is) a government subsidy for electric vehicle purchases in Norway? There may also be a government subsidy for insurance, to further encourage people to shift away from ICE vehicles.

Cheaper insurance may have nothing to do with the quality or safety of autopilot in Tesla vehicles.


Man, I cannot imagine how they will debug the issue in the algorithm and how they will retrain it to get that barrier mistake out.


I commented on the first one that humans could easily make the mistake the Tesla made. This, however, is a scarily bad fuck up.


Why is autopilot feature still allowed to exist? It should be disabled until further improvements. This blows my mind.


This is the biggest wtf. We've now seen at least three videos of similar trivial mistakes by autopilot only this week with one fatal crash and it is still not disabled.

When aircraft crash, all planes of that model are grounded until the root cause is found. Self-driving cars need similar processes.


I’ve always thought self driving cars are a monumentally dumb idea. There, I said it. I feel better now.


The soundtrack fits the video perfectly.


Problem is the other SDC companies are doing dangerous things trying to keep up with Google.


Changelog (latest changes on top):

* fix issue where "Marvin" autopilot keeps complaining (rip out voicebox)

* Boost intelligence of "Marvin" autopilot. This car now has the brain the size of a planet.

* test new smarter "Marvin" autopilot


When are they going to release the software patch?


AV driving seems to be ripe for disruption


This is such a gross techie statement. No it’s not. Human life is on the line, this is actual engineering. This isn’t writing some stupid social app on MongoDB written in JavaScript over the weekend.

Google has been at this for years, and with far better sensors, and even they aren’t there yet.


Any 16 year old can already "disrupt" the industry just by getting a license and driving. Humans are and will continue to be far more capable, rational, and economical drivers for the foreseeable future.

Sounds similar to tech bros who want to radically "disrupt" carbon sequestration. Here's an actually proven idea: plant a tree.


Maybe it was not obvious: AV = autonomous vehicles.


Go to Washington DC, head West on Congressional, cross the Roosevelt bridge and take the exit for the GW.

If there isn't a car smashed into the middle of the north/south Y junction, you'll get the chance to count the skid marks at that Y junction from humans making this exact same error.

Tl;dr: humans make the same mistake, at a higher frequency, which is why the accordion barrier exists.


Humans at least are aware that they made a mistake and can avoid a collision. Tesla makes no mistake; Tesla has no idea what's going on, so you cannot call it a mistake. It's not a bug, it's a feature.


Except it gets reported back to HQ and like with airline accidents, the systems are updated across-the-board so that the risk of future occurrences is greatly reduced.

So instead of a single human learning "gee, shouldn't do that again", 100k+ vehicles learn about it.


Does Tesla actually do this? They should detect hotspots of people deactivating Autopilot (like in this video) and make Autopilot deactivate itself a few miles before

First we had the barrier crash. Then there was the video of someone reproducing the barrier crash and manually emergency braking. Now we have this video. One thing I've learned in my software career is that if one of our customers reports a bug, 10,000 others have encountered it but not bothered reporting it. So why hasn't Tesla rolled back the software yet?!
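
The disengagement-hotspot idea above is simple enough to sketch: bin manual-takeover locations into a coarse grid and flag cells with unusually many events. The cell size and threshold here are arbitrary choices, not anything Tesla has published:

  from collections import Counter

  def hotspots(disengagements, cell_deg=0.001, min_events=20):
      """disengagements: iterable of (lat, lon) where drivers overrode AP."""
      cells = Counter(
          (round(lat / cell_deg), round(lon / cell_deg))
          for lat, lon in disengagements
      )
      return {cell: count for cell, count in cells.items() if count >= min_events}

  # Cells returned here could be pushed back to the fleet so Autopilot
  # warns, or refuses to engage, a few miles before a known trouble spot.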


"Don't worry, an update will fix it" is a cold comfort when an OTA update broke it in the first place.


If both drivers (human and autopilot) can make the same mistake, doesn't that make your crash rate higher?


Honestly who is to say that a human driver wouldn't have done the same thing?


To be fair it steers _toward_ the barrier. No barriers were harmed.

Does Tesla run some sort of regression suite on this stuff? Can they get a copy of the sensor data from when this happened so they can reproduce those conditions as part of their test suite?


You don't need the sensor data, it's obvious. The lane markers widened, and 'straight' is into the (not seen yet) barrier. If the barrier end had a mockup of the ass-end of a car stuck on it (about three times as wide) the Tesla would freak out, panic brake, and still think it was in the center of a single widening lane. It's not mysterious what the car was thinking.

If the overall shape of the road wasn't veering slightly to the right, the car would probably not have chosen to swerve to the left into a barrier, but the car sees the lane widening equally in both directions. Simple as that.


Obvious? Not to the car! My point is that gathering failing cases like this could be a simple way to gain confidence in a self-driving system, either from the perspective of internal development at Tesla, or even for regulation. Imagine if before you were allowed to push an update to road-going cars you had to show that it doesn't crash in any of the simulations of difficult conditions that the Road Safety Authority has prepared. I'd be in favour of such a thing at least. It's not proof, but it doesn't give any false positives.


One of the great things about self driving cars is that we are going to start "debugging Roads".

From my point of view all accidents until this time could be attributed to the civil engineer that built the road as much as to the driver of the car.

You build a pedestrian pass between roads purely for aesthetic purposes, put up a sign forbidding its use, and call it done. But people of course use it and get killed.

You put concrete barriers between roads but make it possible for people to crash head-on into them, something you won't find on any European high-speed road, where deflectors keep vehicles from crashing into bare concrete.

Self-driving will give us scientific evidence of what causes accidents, like black boxes did with airplanes. Thanks to those, we know that seemingly insignificant details, like the color and placement of buttons, turn out to be essential.


> One of the great things about self driving cars is that we are going to start "debugging Roads".

We've been debugging roads since Roman times. This is mostly about debugging software, and even more about crappy software development processes. After all, if your update is not monotonically improving things you have a real problem if your product is mission critical and lives are at stake.


Also, roads are far harder to debug and fix than software.


I would not be so sure of that. Self driving in all conditions is hard. The thing that bugs me is that these '90%' solutions are released on the unsuspecting public without some serious plain language about the software capabilities and what could be expected. Marketing should not trump safety, especially not the safety of people not buying the product.


> something you won't find in any European high speed road without deflectors that will make vehicles not crash against concrete.

You find those on the German highway between Berlin and Poland, for example.

> From my point of view all accidents until this time could be attributed to the civil engineer that build the road as much as the driver of the car.

On the other hand, other solutions would provide other "bugs", to stick with your terminology.

>Self driving will give us scientific evidence of what creates accidents like black boxes did with airplanes.

No, it will only give us data on what causes accidents for self-driving cars. Most new cars apparently are already fitted with EDRs[0]; the only difference with self-driving cars is the number of sensors involved.


Surprising observation about the debugging of roads. Who knows, we might even learn that certain states/countries do better in this respect, and the design as well as the signaling might benefit from this.

Thank you.



