
To avoid #2, Tesla specifically counts any accident within 5 minutes after autopilot disconnect as an autopilot accident.



Five seconds.

"To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before a crash, and we count all crashes in which the crash alert indicated an airbag or other active restraint deployed."

At the bottom of:

https://www.tesla.com/VehicleSafetyReport
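
Read literally, the 5-second clause amounts to something like the sketch below (not Tesla's actual code; the field names are invented for illustration, and it ignores the separate airbag-deployment clause):

    # Minimal sketch of the 5-second attribution window described in the quote.
    # All field names are hypothetical.
    ATTRIBUTION_WINDOW_S = 5.0

    def attributed_to_autopilot(crash):
        # Autopilot still active at the moment of impact.
        if crash["autopilot_active_at_impact"]:
            return True
        # Autopilot deactivated within 5 seconds before the crash.
        if crash["autopilot_deactivated_at"] is None:
            return False
        gap = crash["crash_time"] - crash["autopilot_deactivated_at"]
        return 0 <= gap <= ATTRIBUTION_WINDOW_S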


But they'll still put out press releases saying "The vehicle had warned the driver of inattentiveness and to keep his hands on the wheel"... and oh-so-conveniently omit "... once, fourteen minutes before the accident" (which, knowing their system now, means that was the last warning, and the attention system hadn't been tripped between then and the accident).


That's an interesting problem. The right answer mostly depends on the distribution of crashes at time t since deactivating autopilot. I would personally guess that, for 99.9% of crashes, the relevance of autopilot fades to near zero once you're 30 seconds past deactivation.

5 feels a little too aggressive, but would probably capture the majority of the true positives. I would have picked 10-15 seconds based on my gut.
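
If you had the time-since-deactivation for each crash, you could check how sensitive the attributed count is to the choice of window. The numbers below are invented purely to show the shape of that check:

    # Hypothetical seconds between Autopilot deactivation and crash, one entry per crash.
    since_deactivation = [0.5, 1.2, 3.0, 4.8, 7.5, 11.0, 14.0, 28.0, 120.0, 600.0]

    for window in (5, 10, 15, 30):
        attributed = sum(1 for t in since_deactivation if t <= window)
        print(f"{window:>2}s window -> {attributed}/{len(since_deactivation)} crashes attributed to autopilot")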


That depends. If you're taking over from autopilot after several hours of passively resting behind the wheel, perhaps it will take you more than 30 seconds to accustom yourself to the task.


What situation could you possibly be in where it's the autopilot's fault, but it takes more than 30 seconds to cause a problem, AND it was necessary for you to take control?


Car steers into the opposite lane on an interstate at night with no traffic?


You're not wrong, but to my knowledge, nothing like that has ever happened, and it would have been very newsworthy if it had, even absent fatalities.


Does that really avoid #2? My understanding of that situation was this:

1. The driver senses an impending accident or dangerous situation, so they disengage autopilot.

2. The driver personally maneuvers the car so as to avoid any accident or crash.

3. The driver re-engages autopilot afterwards.

In this scenario, there is no accident, so there's nothing for Tesla to count either way. The idea is that there could have been an accident if not for human intervention. Unless Tesla counts every disengagement as a potential accident, I don't really see how they could account for this.


You need to look at the whole system. The end result (of autopilot + human) is no accident.

If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

Unless you're trying to answer "is autopilot ready for L5", this is the right metric to look at.


> If the human prevents 99% of autopilot could-have-been accidents, and as a result, 10 people die per X miles driven whereas through purely human driving 20 people die, then driving with autopilot is safer.

No, because correlation isn't causation.

In particular, it's plausible that autopilot only gets used in situations where it's easy to drive and accidents are less likely. This would erase any hope of assessing autopilot safety by looking at simple statistics like the ones you mention.
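
A toy illustration of that confounder (all numbers invented): suppose Autopilot is engaged almost exclusively on easy highway miles, where everyone crashes less.

    # Invented fatality rates (per 100M miles) and mileage, split by road type
    # and by who is driving. Not real data.
    rate = {
        ("highway", "autopilot"): 0.9,
        ("highway", "human"):     1.0,
        ("city",    "human"):     3.0,
        # Autopilot essentially never engaged on city streets.
    }
    miles = {
        ("highway", "autopilot"): 4.0,  # units of 100M miles
        ("highway", "human"):     1.0,
        ("city",    "human"):     5.0,
    }

    def overall_rate(driver):
        total_miles = sum(v for (road, d), v in miles.items() if d == driver)
        deaths = sum(rate[(road, d)] * v for (road, d), v in miles.items() if d == driver)
        return deaths / total_miles

    print("autopilot:", round(overall_rate("autopilot"), 2))  # 0.9
    print("human:    ", round(overall_rate("human"), 2))      # ~2.67

On the highway itself the two rates are nearly identical, but the aggregate numbers make autopilot look roughly three times safer, purely because it skips the hard miles.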


What they of course should do is count any manual intervention as a possible autopilot accident.

When I say possible, what I mean is they should go back, run the sensor data through the system, and see what autopilot would have wanted to do in the time that the human took over.
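
Something like the following counterfactual replay, purely as a sketch: replay_autopilot and predicts_collision are hypothetical stand-ins for whatever simulation tooling would actually be needed.

    def replay_autopilot(sensor_log, start, duration_s):
        # Hypothetical: re-run the planner on logged sensor data and return the
        # trajectory it would have followed without the human takeover.
        raise NotImplementedError

    def predicts_collision(trajectory, sensor_log):
        # Hypothetical: check the replayed trajectory against logged obstacles.
        raise NotImplementedError

    def classify_intervention(sensor_log, takeover_time, horizon_s=10.0):
        trajectory = replay_autopilot(sensor_log, takeover_time, horizon_s)
        if predicts_collision(trajectory, sensor_log):
            return "possible autopilot accident"
        return "benign intervention"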


There are a couple of reasons why your criterion would yield almost entirely false positives.

First: Most Tesla owners disengage autopilot by tapping the brakes or turning the wheel. This is typically better known and more convenient than the official way to disengage autopilot (push the right stalk up, which, if you do it twice, can shift the car into neutral).

Second: Tesla autopilot drives like someone taking a driving test. It accelerates slowly, signals well before changing lanes, makes sure there is plenty of room in front, etc. In my experience, the vast majority of interventions are to drive more aggressively, not to avoid a collision. I think maybe twice I've disengaged to prevent a collision. In those cases it was to avoid debris in the road, not another vehicle. (The debris was unlikely to damage the car, but better safe than sorry.)


> the vast majority of interventions are to drive more aggressively, not to avoid a collision

If it's to avoid a collision then the autopilot would have crashed, and it should be deemed an autopilot accident.


Those interventions are to get somewhere faster, not to avoid a collision. If anything, such interventions tend to increase the risk of collision, not decrease it. Training autopilot to behave more like humans in those situations would make it less safe, not more.


If the intervention was not to avoid a collision, they review the footage and find that autopilot would have done something safe, and therefore it is not deemed an autopilot accident.


That's a bit speculative, since your actions will affect the actions of others, but I agree that, done correctly, it would give the best picture of autopilot safety.


Can you explain how that avoids it? Not sure I understand.


It avoids a variant of point 2: the case where the driver disengages the autopilot to avoid the crash and fails. Counting those crashes avoids chalking them up to human error. It does not address your initial point, that the human's accident avoidance prevents the crash (and thus the statistic) from showing up in the N miles of autopilot usage before it was disengaged.



