
The marketing and messaging around Autopilot simultaneously argue that Autopilot is safer than a human driver, yet blame the driver when there is an accident.



Heads I win, tails you lose. What's so difficult to understand? /s


Autopilot in a plane can make things significantly safer by reducing cognitive load for the pilot. However, the plane's autopilot will in no way avoid a collision. Pilots are still primarily at fault if the plane crashes.


Teslas aren't planes, though, so how does the etymology of the word "autopilot" help here?


Um.

> Autopilot is such a misleading term.

>> The functionality is almost identical to the only other time we regularly use "autopilot", in airplanes.

>>> Yeah but like, who cares about etymology and stuff? Misleading af.


OK, I'll put it another way: statistically speaking, about zero people know how the autopilot in a plane works (me included), while they do know the word autopilot. Therefore, they can't infer the limitations of Tesla's Autopilot from a plane's autopilot.


I seriously don't understand this disconnect. You know the word autopilot because it is a technology in airplanes. That is the only reason you know the word.

Statistically speaking, 100% of people know that 1. Airplanes can have autopilot 2. Passenger jets still have multiple pilots in the cockpit, even with autopilot.

You don't need to know the intricacies of how autopilot functions to recognize the significance of those two facts (which I'm sure you knew) and apply the same to Tesla.


The etymology doesn't help.

It was an intentionally misleading word for Tesla to choose.


A human and Autopilot working together is safer than just a human driving. Autopilot by itself is currently less safe than just a human driving (which is why it's still level 2). There's no mixed messaging.


> A human and Autopilot working together is safer than just a human driving

This is not my understanding from colleagues who studied the human factors of what is now called level 2/3 automation many years ago. Partial automation fell into an "uncanny valley" in which the autopilot was good enough most of the time that it lulled most human participants into a false sense of security and caused more (often simulated) accidents than a human driving alone.

Since then I've seen some evidence [1] that with enough experience using an L2 system, operators can increase situational awareness. But overall I wouldn't be surprised if humans with level 2+/3 systems end up causing more fatalities than human operators would alone. That's why I'm relieved to see automakers committing [2] to skipping level 3 entirely.

[1] https://www.iihs.org/api/datastoredocument/bibliography/2220

[2] https://driverless.wonderhowto.com/news/waymo-was-right-why-...


This is absolutely correct. And related to the issue of situational awareness, Tesla Autopilot has utterly failed at the basic systems-design concept of "foreseeable misuse."

Having worked in the driver monitoring space, it pains me to see a half-baked, black box system like Autopilot deployed without a driver camera. Steering wheel and seat sensors are not up to the task of making sure the driver is attentive. Don't even get me started on "FSD," which proposes to work in harmony with the human driver in far more complex scenarios.


There's no mixed messaging?

The driver is there just for regulatory purposes; all cars self-driving in 2016; cross-country Summon in 2017; coast-to-coast autonomous drive in 2018; Tesla self-driving taxis in 2019; FSD making Teslas worth $250k in 2020. Etc., etc.

There are a lot of statements like this from Elon Musk.


Those are all "coming soon". Tesla and Elon Musk are 100% clear that today, you still need to be an attentive driver while using Autopilot.


All those years are in the past.


But those were never guaranteed dates, just very poor/optimistic predictions.


"I'll sell you this box. I know it is empty today, but I assure you, tomorrow it will contain a lump of gold!"

On the next day: "I'll sell you this box. I know it's empty today, but tomorrow..."


They were; Musk always used the phrase "not a question mark."


> A human and Autopilot working together is safer than just a human driving.

I am not so sure. The data from Tesla always compares apples and oranges, and I have not seen a good third-party analysis confirming this hypothesis.


The problem is that these are not independent. Autopilot can lead to inattentiveness and other effects that come from the sense that you are now being assisted. So it boils down to a question like "is one driver at skill level X better or worse than two co-drivers at skill levels Y and Z," where Y is less than (or, unlikely, equal to) X and Z is currently known to be less than X.



