The marketing and messaging around Autopilot simultaneously argue that Autopilot is safer than a human driver, yet blame the driver when there is an accident.
Autopilot in a plane can make things significantly safer by reducing the pilot's cognitive load. However, a plane's autopilot will not, by itself, avoid a collision. Pilots are still primarily at fault if the plane crashes.
OK, I'll put it another way: statistically speaking, about zero people (me included) know how the autopilot in a plane works, while they do know the word autopilot. Therefore, they can't infer the limitations of Tesla's Autopilot from a plane's autopilot.
I seriously don't understand this disconnect. You know the word autopilot because it is a technology in airplanes. That is the only reason you know the word at all.
Statistically speaking, 100% of people know that (1) airplanes can have autopilot and (2) passenger jets still have multiple pilots in the cockpit, even with autopilot.
You don't need to know the intricacies of how autopilot functions to recognize the significance of those two facts (which I'm sure you knew) and apply the same reasoning to Tesla.
A human and Autopilot working together is safer than just a human driving. Autopilot by itself is currently less safe than just a human driving (which is why it's still level 2). There's no mixed messaging.
> A human and Autopilot working together is safer than just a human driving
This is not my understanding from colleagues who studied the human factors of what is now called level 2/3 automation many years ago. Partial automation fell into an "uncanny valley" in which the autopilot was good enough most of the time that it lulled most human participants into a false sense of security and caused more (often simulated) accidents than a human driving alone.
Since then I've seen some evidence [1] that with enough experience using an L2 system, operators can increase situational awareness. But overall I wouldn't be surprised if humans with level 2+/3 systems end up causing more fatalities than human operators would alone. That's why I'm relieved to see automakers committing [2] to skipping level 3 entirely.
This is absolutely correct. And related to the issue of situational awareness, Tesla Autopilot has utterly failed at the basic systems-design concept of "foreseeable misuse."
Having worked in the driver monitoring space, it pains me to see a half-baked, black-box system like Autopilot deployed without a driver camera. Steering wheel and seat sensors are not up to the task of making sure the driver is attentive. Don't even get me started on "FSD," which proposes to work in harmony with the human driver in far more complex scenarios.
"The driver is there just for regulatory purposes," all cars self-driving in 2016, cross-country Summon in 2017, coast-to-coast autonomous drive in 2018, Tesla self-driving taxis in 2019, FSD making Teslas worth $250k in 2020... etc., etc.
> A human and Autopilot working together is safer than just a human driving.
I am not so sure. The data from Tesla always compares apples and oranges, and I have not seen a good third-party analysis confirming this hypothesis.
The problem is that these are not independent. Autopilot can lead to inattentiveness and other effects that come from feeling you are being assisted. So it boils down to a question like "is one driver at skill level X better or worse than two co-drivers at skill levels Y and Z," where Y is less than (or, unlikely, equal to) X and Z is currently known to be less than X.
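To make the dependence point concrete, here is a toy back-of-the-envelope sketch with entirely made-up probabilities (none of these numbers come from real crash data); it only illustrates how correlated failures can make the human-plus-autopilot combination worse than an attentive human alone:

```python
# Toy model with made-up numbers: NOT real crash statistics.
# Each value is the probability a hazard is mishandled per encounter.

p_human_attentive = 0.01   # attentive human alone misses 1% of hazards
p_autopilot = 0.05         # autopilot alone mishandles 5% of hazards (worse than a human)

# Case 1: human driving alone, fully attentive.
p_alone = p_human_attentive

# Case 2: human + autopilot, but the human is lulled into inattentiveness,
# so they miss a larger share of the hazards autopilot hands back to them.
p_human_lulled = 0.30

# If failures were independent, combining the two would look great:
p_combined_independent = p_autopilot * p_human_attentive   # 0.0005

# But failures are correlated: autopilot tends to fail exactly in the odd
# situations where a disengaged human is also least ready to intervene.
p_combined_realistic = p_autopilot * p_human_lulled        # 0.015

print(f"attentive human alone:         {p_alone:.4f}")
print(f"naive independent combination: {p_combined_independent:.4f}")
print(f"correlated / lulled human:     {p_combined_realistic:.4f}")
# With these numbers the combination is worse than the attentive human alone,
# which is the point above: the two "co-drivers" are not independent, so
# Y + Z can come out below X once the dependence is accounted for.
```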