Tesla in autopilot mode crashes into parked Laguna Beach police cruiser (latimes.com)
269 points by extesy on May 29, 2018 | 469 comments


I keep wondering what the customer benefit of Level 2 autopilot is if not to lower your attention and relax your mind. Tesla's "out" is that drivers are supposed to retain full attention and oversight of the autopilot system -- but then if you strictly follow this rule, what is the benefit of autopilot?

I can see the benefit to Tesla and future Tesla customers of essentially crowdsourced fleet learning. But what is the benefit right now if you strictly follow the rule of remaining alert enough to intervene at a second's notice?

In previous threads the best explanation I got from Tesla owners was that it frees you from the "details of physically driving" so you "can now supervise instead."[1] That just seems suspect. Supervising autopilot to the degree required to correct sudden mistakes seems, to me, to be probably very close to the mental load of the details of steering yourself.

[1] https://news.ycombinator.com/item?id=17151116


> but then if you strictly follow this rule, what is the benefit of autopilot?

Indeed. When reading Tesla's statement,

"The Palo Alto-based automaker, led by Elon Musk, has said it repeatedly warns drivers to stay alert, keep their hands on the wheel and maintain control of their vehicle at all times while using the Autopilot system."

...one can't help but think, "then WTF is it good for, if I still have to drive the car myself while using it?"

> Supervising autopilot to the degree required to correct sudden mistakes seems, to me, to be probably very close to the mental load of the details of steering yourself.

I'd say the mental load is higher than manually driving --- not unlike being a driving instructor to a student driver who is OK most of the time but occasionally makes a possibly fatal mistake that you need to watch out for and quickly correct. Normally, a car will go in a straight line with no steering input, but the autopilot almost "has a mind of its own" and can steer itself in the wrong direction.

This is subtly different from cruise control, which lets you concentrate on steering and rest your foot on the brake so you can use it quickly if needed. Cruise control doesn't encourage lowering attention, but enhances it.


> ...one can't help but think, "then WTF is it good for, if I still have to drive the car myself while using it?"

And why are Tesla burning karma by promoting this technology?

For SpaceX, self-landing rockets are a direct method of reducing costs.

For Tesla, sort-of-self-driving cars are a distraction from the core product and are tainting the brand. Stick to selling top-tier electric vehicles driven by humans; people will still buy them on merit.

To me it's an example of what happens when software developers run the asylum. You can do nearly anything in software, but should you?


>To me it's an example of what happens when software developers run the asylum.

Except if the rumors about turnover in the Tesla software team are true, the exact opposite is the case. The hot goss is that they're under pressure to meet unrealistic goals, not vomiting their own hubris into the product.


In this case, read "software developers" as "Elon Musk," and the metaphor makes sense.


>This is subtly different from cruise control, which lets you concentrate on steering and rest your foot on the brake so you can use it quickly if needed. Cruise control doesn't encourage lowering attention, but enhances it.

What has led you to this conclusion? Autopilot is really just advanced cruise control. You can treat it the exact same way by leaving your foot on the brake and leaving a hand on the wheel. If people aren't using it that way, that is certainly something Tesla should try to fix. But I am just not sure I buy the argument that cruise control is inherently a safer technology than cruise control plus auto steering.


Because the fully-assisted driving ("autopilot") removes the need for interaction completely. Your brain will stop spending power on backseat driving, because the brain is really lazy--and clever about being lazy too.


>Because the fully-assisted driving ("autopilot") removes the need for interaction completely.

But it isn't fully assisted and doesn't completely remove the need for interaction. It just is another step forward from cruise control that is more assisted and requires less interaction. I am not sure why everyone assumes the sweet spot for driver assistance ends immediately after adaptive cruise control and before cruise control plus lane keeping.

>Your brain will stop spending power on backseat driving

That is an interesting phrase to use considering backseat drivers have zero responsibility or control of a car but still manage to pay attention.


Backseat drivers pay less attention than frontseat drivers, obviously.

Your characterization of autopilot as “just cruise control” is fundamentally incorrect. Mere cruise control still demands significant and continuous attention and manual control on your behalf (steering). Autosteer makes the leap to fully automatic, which opens the door to not paying attention for long stretches of time.


I am honestly not sure how to discuss this topic when in the same thread Tesla gets ridiculed for blurring the lines between Autopilot and full self driving while people who are critical of the system say things like Autopilot is "fully automatic", "fully-assisted driving", or "removes the need for interaction completely".

From a practical perspective, Autopilot still needs to be monitored. But more relevant to this discussion, the user still needs to interact with the car to keep Autopilot engaged. It is possible to do that while paying very little attention. It is also possible to drive a car with cruise control while paying very little attention. Both systems can be abused. I have already said Tesla should do a better job of trying to crack down on the abuse of Autopilot. But I don't know why it is controversial to say that if used properly both systems can lead to greater safety by allowing the driver to focus on other aspects of driving.


You're mischaracterizing the issue. Autopilot should be monitored but it can go for long stretches without manual intervention (as if fully automatic). Mere cruise control, on the other hand, requires manual steering.

You're just trying to make them sound similar, e.g. "if used properly" or "both can be abused". But there is an ocean of difference. It is vastly easier to ignore a system that is capable of driverless operation for long periods of time.


The biggest problem with this is that Tesla calls it Autopilot. Had they called it advanced cruise control or something like that, I think people would have used it with more caution. The accidents that keep happening make me realize how far ahead of the rest Waymo/Google actually is.


> If people aren't using it that way, that is certainly something Tesla should try to fix.

A good start would be to not call it auto pilot, and rename it to cruise control plus or whatever.


The NHTSA found a 40% crash rate reduction from Tesla's autopilot. I'd say that's what it's good for.


This stat is widely misunderstood. Tesla rolled out Collision Avoidance Assist and Autopilot around the same time. These variables were not isolated by the NHTSA.[1] It is likely that collision avoidance was the major factor in the crash rate reduction. It is even possible that autopilot increased the accident rate.

No doubt collision avoidance is a fantastic feature though.

Addendum: Looks like the NHTSA has disavowed Tesla’s interpretation of that study. They did not evaluate whether autosteer (autopilot) was engaged.[1]

[1] https://www.cnbc.com/2018/05/02/effectiveness-of-tesla-autop...


> Looks like the NHTSA has disavowed Tesla’s interpretation of that study...

Wow. This is a story in its own right, because that statistic has been touted a million times by the Tesla/Elon apologists.


So while it's possible, Tesla is making changes that bring accident rates down, clearly by significantly more than they go up. Everyone here is acting like it's certain, with absolutely no scientific evidence. It's stupid to come down on them like they're murdering people. That IS how you stop all progress.

Furthermore, no one is forcing anyone to buy Teslas. People should be free to do as they choose. I have no doubt, for example, that a corvette, lambo, m3, or convertible all increase certain risks for their drivers by far more than autopilot - which probably decreases their risks.


Yes. There is zero chance I'd use a system like this. Regular freeway driving is already boring enough that I worry about losing focus. (I supplement with things like caffeine and audiobooks.)

There's some threshold of interventions per minute below which I think humans just aren't very good at staying engaged. Which is why there are systems like Threat Image Projection, which turns baggage scanning into what is effectively a video game: https://www.rapiscansystems.com/en/products/rapiscan-threat-...


Try it before making such sure-of-yourself statements.

In my experience it’s LESS boring to drive with autopilot, and I pay MORE attention to the actually important, potentially dangerous things, like what’s coming up (you’re actively aware of a longer stretch forward, which also means you know well in advance that you’re coming up to a stretch that autopilot won’t handle, and have plenty of time to disengage) and the cars around you (not watching the lane markings closely frees you to give all your attention to the cars around you).


The actual benefit is having the car handle itself in situations where you would be distracted regardless.

I engage a similar system on my car before fiddling with the attention-seeking touch screen, or when I need to take a quick look at the kids in the back. Another case is traffic jams: a lot of nothing going on followed by a lot of things going on; focus on the wrong thing and you can rear-end somebody. A minor benefit is some fuel saving if you are a "dynamic" driver, and it can make you more relaxed in stressful situations like the traffic jam above.

That's just to answer your question. I completely agree with you that the feature is overhyped, not worth paying much for, and actively promotes dangerous driving behaviour. However, it nowadays comes nearly for free as a consequence of other very useful systems: the whole set of emergency braking scenarios (emergency, cruise control, safety distance, pedestrian, ...), lane keeping assist, blind spot monitoring. It has its uses, if you remain aware of its shortcomings.


> before fiddling with the attention-seeking touch screen

...in other words, the feature partially works around some of the problems they introduced themselves by using an overly complex touchscreen UI instead of the traditional no-eyes-needed buttons and knobs.


I haven’t seen a non-touchscreen media player in a new mid- to upper-range car in a long time...


But don't Teslas have Collision Avoidance Assist on by default? That's a distinct feature from autopilot. That's primarily what helps out in situations that you've noted.

Autopilot basically adds autosteering to that. Seems to me that could lull you into more distracted situations that collision avoidance is unable to cope with.

Note: autopilot and collision avoidance both came out around the same time, so it's far from clear that autopilot has resulted in a reduction in accident rates, or if it was mostly just the collision avoidance.


The freeing of the physical part is really the benefit for me. My knees are bad, and sitting in stop-and-go traffic, or even at long traffic lights, can actually be painful. It's not bad, but it gets really annoying. Being able to physically move my leg is amazing. You probably do get a lot of that with just adaptive cruise control these days as well, but I have no real experience with that. The mental load is still identical, but the physical part is way more relaxed.


Exactly: you get that benefit with adaptive cruise control. What autopilot adds is autosteer. What’s the benefit there that doesn’t violate TOS? You are supposed to keep your hands on the wheel and pay full attention just the same.


It moves your mind from micro to macro. Instead of focusing on balancing the car within the lane, I can look 500m in front and plan further ahead. I can also spend 2 seconds with my eyes off the road adjusting the navigation, compared to maybe 0.5 seconds max without it. I also feel much fresher after a long trip; constant minor adjustments sound minor, but once you've driven with a good lane keeping aid (not the ping-pong versions) it feels so much more comfortable when the steering wheel is magnetically trying to snap to the center of the lane. It still feels like you are driving, but the path of least resistance is always pulling you towards the center of the lane. This would still be considered level 2 self driving; the danger comes when you go to level 3 and really take your mind off the road. Those, I think, should never be allowed on the road.


A lot can happen in 2 seconds. I hear you and appreciate the detailed description, the best one I’ve read yet. But within it are hints that you are trusting the system more than the terms of service permit.

You feel comfortable looking away for four times as long as you would under manual control. There is not supposed to be any delta. And for many people, it’s probably more than 2 seconds. Many of these accidents, including the recent fatal one, happened inside of a few seconds of distraction.


> It still feels like you are driving but the path of least resistance is always pulling you towards the center of the lane.

...except when it doesn't, and you're forced to suddenly comprehend the unusual situation and react before it's too late:

https://www.youtube.com/watch?v=6QCF8tVqM3I


That was not 2s but 4s, and you could predict that happening even before that. It does not seem like an especially dangerous situation when you approach autopilot as GP does.


Totally disagree. Someone was literally killed there. Realistically “2 seconds” could be understating the length of the distraction, plus it takes you a certain amount of time to mentally re-engage and understand what is going on. Especially in a weird situation like that.


> Supervising autopilot to the degree required to correct sudden mistakes seems, to me, to be probably very close to the mental load of the details of steering yourself.

Intuitively, the load seems higher. This is because your brain must now include predictions regarding an additional third party who is not there in cars without autopilot or with autopilot off. It’s similar to how I am more stressed and tired after riding in the passenger seat with a less experienced driver at the wheel than I am at the wheel myself.


I have not enabled it for this reason. I suspect that Tesla might suspect this too; otherwise they would probably just let people have a 1-month free trial of Autopilot so people would pony up the cash.


Compare the number of autopilot crashes to the number of crashes from cars without autopilot, driven by idiots who are busy staring at their phone screens while driving, and you'll realize that autopilot-type systems are probably a zillion times safer.

Of course, there's zero data about how many accidents happen because drivers were staring at their phones, so you rarely see articles screaming "<non electric car> driven by an idiot staring at phone screen crashes <into what ever catches attention>".

I was skeptical about Autopilot from day one, but one test drive convinced me that this may not lead to zero accidents, but it certainly is way way better than vehicles that crash when driven by distracted drivers


The benefit for Tesla would be selling more cars because people think they are getting a technology that isn't really ready but Tesla's claiming is (but also claiming not really, read the fine print).


One doesn't have to be quite that cynical. Tesla's goal could genuinely be getting to Level 4/5 autonomy ASAP. My question is whether there is actually any immediate customer benefit to Level 2 autonomy that doesn't technically violate the terms of service.


The benefit is less strain on your feet/ankles/wrists/arms/shoulders etc.

Living in a high traffic area I would appreciate the time I don't have to pump my damn foot up and down every 10 seconds for hours straight while waiting in traffic.


> But what is the benefit right now if you strictly follow the rule of remaining alert enough to intervene at a second's notice?

You have an extra failsafe for those moments where you fail to remain alert. People keep treating these incidents in isolation and failing to account for the fact that human drivers make exactly this kind of fuckup every hour of every day, and lives are lost. Every day.

You get that right? Cars. Kill. People. Every day.

Tesla is shipping systems in the absolute infancy of this technology that are already statistically indistinguishable from human drivers as far as safety goes. And the luddites here want to throw all that in the garbage because of "what is the benefit right now".


> Tesla is shipping systems in the absolute infancy of this technology that are already statistically indistinguishable from human drivers as far as safety goes.

That’s a hell of a claim, I’d love to see some serious citations on it.


There is no conclusive study either way. The only real data we have is the two confirmed deaths in Autopilot accidents in over a billion miles driven with Autopilot. That is not enough data to draw a definitive conclusion, but it at the very least shows that there is not a huge problem that makes it dramatically more dangerous.

For comparison, the overall fatality rate for the average car in the US is somewhere near 12 deaths per 1 billion miles driven.

I wouldn't say that Teslas have proven safer, but I think saying they are "statistically indistinguishable" from human drivers is relatively fair at this point.


> over a billion miles driven with Autopilot

No, it's over a billion miles driven by Tesla cars that have the sensor hardware that Autopilot uses. The number of miles driven with Autopilot actually engaged is about 300 million, at least according to the data Tesla has released.

https://electrek.co/2016/11/13/tesla-autopilot-billion-miles...


Look at the date on that article. It is from 2016. Tesla has sold a lot of cars and those cars have driven a lot of miles in the last year and a half. Tesla is certainly over a billion miles in Autopilot driving by now.

EDIT: Here is a Tweet from April 2016 saying 47m miles. [1] Your linked article is from November 2016 and shows 300m miles. That is 250m miles in 7 months. Projecting that forward with zero growth, Tesla would have accumulated roughly 950m miles in Autopilot usage. That isn't including the over 100,000 cars Tesla has sold since November of 2016 or the miles those cars have accumulated in that time. I stand by my statement that Teslas have recorded over a billion miles while using Autopilot.

[1] - https://twitter.com/Tesla/status/718845153318834176
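
As a rough sanity check, the projection above works out like this (a sketch only: the two public mileage figures are the data points cited above, the 18-month gap comes from this thread's date, and zero growth is deliberately conservative):

    # Back-of-the-envelope projection of cumulative Autopilot miles.
    APR_2016_MILES = 47e6    # Tesla tweet, April 2016
    NOV_2016_MILES = 300e6   # Electrek article, November 2016

    monthly_rate = (NOV_2016_MILES - APR_2016_MILES) / 7   # ~36 million miles/month

    # November 2016 to May 2018 (the date of this article) is roughly 18 months
    projected = NOV_2016_MILES + 18 * monthly_rate
    print(f"~{projected / 1e6:.0f} million Autopilot miles")   # ~951 million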


There are other factors that influence fatalities in automobile accidents. You can't legitimately use deaths per miles driven as a proxy for how good a driver Tesla's autopilot feature is.


Sure, which is why I said we had no conclusive data. However, at the very least no red flags have popped up in the statistics. Think of it like flipping a coin 3 times and we got heads all three times. That is not nearly enough to judge whether the coin is fair, but we are probably getting close to concluding that the coin isn't weighted heavily in favor of tails. The sample size is still small but nothing in the statistics is screaming out that Autopilot is a big danger when compared with driving manually.


But we don't need statistics to know that Autopilot isn't as good a driver as a human. By Tesla's own statements Autopilot can't autonomously operate a car; it requires a person to be at the wheel and engaged with driving. By definition this means that Autopilot is a worse driver than a human driver, as it cannot drive at all.


[flagged]


You’re conflating two systems. Tesla rolled out Collision Avoidance Assist and Autopilot around the same time. Collision avoidance is undoubtedly a great feature that’s contributed to a decline in accident rates. That’s primarily what helps you in cases where you “fail to remain alert.”

But here I am asking about the other feature, autopilot, i.e. autosteering. The TOS requires you to remain fully alert when using the feature. So even if it were pitched as a safety benefit that keeps you from lane drifting when you’re not paying attention — you’d be violating the TOS.

So what is the benefit that doesn’t violate TOS? Clearly the implication is — and most Tesla users agree — it lowers the cognitive load while driving. But this would seem to undermine Tesla’s defense that drivers must remain vigilant and attentive at all times when using it.


Filtering out the noise: you're not willing or able to cite, and I don't blame you, since you'd probably try to start with the discredited Tesla numbers. Then I'd post the reasons why they're crap, and you'd grope for some more apples-and-oranges numbers. I like how you've just skipped past all of that and gone straight to the personal attacks; it saves a lot of time. Since you've made your claim without a hint of support, here's my equally unsupported counter-claim: Tesla autopilot is much less safe than a human, who wouldn't accelerate into a cop car, fire truck, or guard rail unless dead or drunk.


Well, if Autopilot is statistically safer than a human pilot (I don't know if it is), then that's a reason to use it.


Why does this get downvoted?


Is the article available to Europeans somewhere? All I'm getting is a message with this:

> Unfortunately, our website is currently unavailable in most European countries. We are engaged on the issue and committed to looking at options that support our full range of digital offerings to the EU market. We continue to identify technical compliance solutions that will provide all readers with our award-winning journalism.

On the other hand, I'm wondering how much I want to read an article from a website that must track me when I just want to read something...



I'd suggest flagging all articles posted here that prevent EU access. The content writers kinda have a right to deny you access, but we don't need to drive traffic in the direction of those that refuse to handle your privacy well.


Same here. If nothing else, GDPR at least brings awareness to the general public about the level of unnecessary tracking that all these websites do.



It's also quite disconcerting to consider the implications of this. Since we 'opt in' to the articles we click on, tracking what individuals read and then building a profile based off of that would be quite informative, and quite invasive.

And then sharing, trading (to expand the profile), and selling it to other companies? Really nasty stuff.


That's exactly why the GDPR was put in place: to give users transparency and force websites to ask for the user's consent before tracking.

You'll see some sites have quite an extensive menu for opting in to various tracking etc. For example, Engadget and TechCrunch have a popup asking you to give consent, or not, before using the website. This is how it should be done.

Then there are other websites that show you everything they track and ask you to just accept; there's no option to refuse, which is against GDPR, as implied consent is no longer valid.

And then you'll have websites like latimes that just go "fuck it, we'll just block 500 million people in the EU instead!"


Perhaps some major players in the programmatic ad clusterfuck have just thrown their hands up over GDPR.


> Is the article available to Europeans somewhere

https://www.google.com/search?q=Tesla+in+Autopilot+mode+cras...


Really goes to show that what they do with the reader data they collect is shady to the point of illegality in large swathes of the world.


What it really shows is that they have so few EU-originating readers, and so little EU income, that it doesn’t make economic sense to spend thousands of dollars evaluating compliance. And yes, the law is poorly written enough that even parsing it is a non-trivial cost, particularly if you are a risk-averse organization.

It may mean they are doing obscene tracking too, but you are making an ideological assumption if that’s your only hypothesis.


Exactly, while I have access to VPN and other means of bypassing the block, I have no intention of supporting such lackadaisical solutions to our privacy.


This is so stupid; they are still infringing on European citizens who just happen not to be in the EU. For example, I can get to the page, but I am in Switzerland, which is not part of the EU yet smack in the middle of Europe.


The GDPR's scope is EU (or EEA, I can't recall) residents in EU territories; it has no provisions at all for EU citizens who reside outside the territory.


I'm in San Marino and I can read the page; San Marino is not in the EU, but it is within the scope of the GDPR.


How do you propose they determine that you're in the EU, if not by IP address, and given that they can't track EU users?


The point is that they're i) blocking people who aren't covered by GDPR and ii) not actually exempting themselves from GDPR, because they still have all the data from EU residents from before the blocking.

Blocking EU residents as a GDPR protection is dumb.


I'm in the EU, but since I'm looking at it from a work computer, and I guess our network exits in the US, I can see the article, so they are in breach of GDPR even with their stupid block.


No, they aren’t. The fact that you can bypass their restrictions does not matter; it’s the intent that does.


Still, using IP addresses to try to determine where a request is originating from is not a good idea. I'm also not intending to bypass a block; I just happen to be on a network right now that doesn't have a public IP in the EU. I'm using the internet as intended; the fact that IPs sometimes correspond to geography is mostly accidental.


It doesn’t violate GDPR; you do not need consent for this level of profiling.

And again, using or not using the internet as intended isn’t the issue here, but rather whether they intend to provide services to the EU or not. A message telling EU users to leave, in the same manner as age consent is verified, is technically sufficient to indicate that you do not provide service to EU residents; geoblocking by IP is also sufficient even if it’s not perfect.


> Still, using ip addresses to try to determine where a request is originating from is not a good idea

How else should they do it? I concede it is not foolproof nor ideal - but for websites to which you've not provided your location yourself, how is it possible for them to determine whether or not you are in Europe?
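
For what it's worth, IP geolocation is essentially the only tool a site has here. A minimal sketch of what the check looks like, assuming MaxMind's GeoLite2 country database and the geoip2 Python library (the database path and the country list are purely illustrative; this is not what latimes.com actually runs):

    # Minimal IP-based geoblocking sketch using MaxMind's GeoLite2 database.
    # The .mmdb path and the EU country list are assumptions for illustration.
    import geoip2.database
    import geoip2.errors

    EU_COUNTRIES = {
        "AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
        "HU", "IE", "IT", "LV", "LT", "LU", "MT", "NL", "PL", "PT", "RO", "SK",
        "SI", "ES", "SE", "GB",  # the 28 member states as of 2018
    }

    reader = geoip2.database.Reader("/path/to/GeoLite2-Country.mmdb")

    def looks_like_eu(ip_address):
        try:
            country = reader.country(ip_address).country.iso_code
        except geoip2.errors.AddressNotFoundError:
            return False  # unknown IPs slip through, as do VPNs and foreign proxies
        return country in EU_COUNTRIES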


You are allowed to use it and even keep it without consent; if anything, the GDPR wants to separate consent from data processing. https://eugdprcompliant.com/personal-data/

You can maintain records of an IP address (as well as other PII) for business needs, e.g. security, or to enforce other requirements such as geoblocking, even without the explicit consent of the user. It's all about why you keep it and what you do with it, which is why legal/lawful grounds exist.

https://ico.org.uk/for-organisations/guide-to-the-general-da...

Also, for the most part, if you do not have a lawful basis for collecting or processing PII, consent is not a sufficient reason to do so.


I don't get it, why is it so hard to detect stationary objects in front of you? wouldn't that be the "hello world" of any self-driving tech?

Every time I ask this question, the response is "radar as a technology has too much noise, not accurate enough to detect stationary objects".

But it DOES detect stationary objects. Otherwise no one would even turn on Tesla AP at all.

Why is it so hard to detect stationary objects in front of you?


Subaru EyeSight does this reliably and actually auto-brakes correctly. Going from that to APv2 has been a real step back. There's a curve on 92W where it consistently tries to accelerate you into an embankment. I think Elon just doesn't understand the real-world reality of how bad AP is.


And Tesla's newer vehicles have the same sensors as EyeSight; they just choose not to use them for Automatic Emergency Braking (AEB).

Tesla's fans always defend the failure of their AEB by pointing out flaws in radar, while entirely ignoring that Tesla's vehicles have contained front-facing optical cameras for years now.

The Tesla shouldn't have hit the divider in the previous accident; the optical (visible light) camera should have seen it, even if radar didn't. Why doesn't Tesla's system work as well as Subaru's EyeSight?


It really bothers me that manufacturers are pushing incremental updates to car safety systems. Does anyone besides me think this is a terrible idea?


Aren't cars doing that today with their physical safety systems?

How else would they do it than incrementally?


> How else would they do it than incremental?

How about... not?

I'm not being disingenuous here.

If there's autonomous tech, then I find it scary that it might change its behaviour. You see, I've learned how (say) Autopilot responds in certain situations. I know when to trust it, and when not to.

If this behaviour changes (perhaps even if it changes for the better!), that's a very dangerous thing. Some of my experience is now invalidated, but I don't know which part. But it mostly works as before, which gives me a false sense of security.

Changing the driving behaviour of a vehicle, especially in totally un-obvious ways, is super dangerous, and such changes must have very strong justifications.

Just think of that poor guy whose Tesla drove into the divider. Oh look, it behaves just like before -- except for its lane following behaviour.


I don't understand how you would expect not to do incremental improvement. Build it perfectly the first time? That's not realistic.

Pretty much everything humans do is incremental improvement. I understand your point that we don't want to introduce weaknesses in 'upgrades', but this should be a discussion about how to create robust QA rather than stopping incremental improvement.


> I don't understand how you would expect not to do incremental improvement

I wonder if we're talking past each other? Sure, incrementally improve during engineering. But don't incrementally change the safety-critical behaviour of a car once it's in the customer's hands.

> Build it perfectly the first time? That's not realistic.

Absolutely agreed.

But for subjects such as autopilots, predictability may very well trump improvement.

I'm assuming that the initial iteration is already reasonably safe and useful (if it isn't it shouldn't have been shipped, right?).

If the behaviour changes unpredictably (how will you predict autopilot changes caused by an update?), this may well be less safe than keeping the existing system, whose quirks the driver has now learned.

> Pretty much everything humans do is incremental improvement.

> I understand your point that we don't want to introduce weaknesses in 'upgrades'

The nasty thing is that even a change to objectively superior behaviour may be problematic, because the driver will expect the car to behave one way, when it now behaves a different way (regardless of merit).

Not even a changelog would help in such cases. I feel like the autopilot ought to inform the driver in each situation where it used to decide one way, and now decides another way. That might be a reasonably safe way of allowing incremental change, but is likely too annoying for most people.

> but this should be a discussion about how to create robust QA rather that stopping incremental improvement.

Robust QA is certainly a must.

(side note: I used to work in safety-critical automotive projects. I may have a more intimate understanding of the issues)


The usual answer is that the signal reflects off many stationary objects (trees, parked cars, and so on), all of which have the same Doppler shift and must normally be discarded.

Still, this answer doesn't quite satisfy, because it ignores the very strong signal that exists right before a crash, which crash avoidance technologies do (seem to) use.


The difference is in the comparison. A collision avoidance system is there to help correct driver errors, so even if it only works 50% of the time it still gives a massive reduction in the accident rate. Whereas with autopilot the expectation is that it should be recognizing objects as well as or better than drivers, so a 50% failure rate would be considered massively buggy.

Note: Tesla also has a collision avoidance system, which is actually on all the time and a distinct feature from autopilot (although it uses the same underlying sensors).


Two things to consider: as speed increases, braking distance increases with the square of speed.

As distance increases, the angular size of an object decreases inversely.

A radar requires an increase in power both to see at a longer distance and to see a smaller object, because the returned signal falls off with the fourth power of distance (it has to make the round trip out and back).

(This is an oversimplification to help introduce the topic; you can see the details here: http://www.radartutorial.eu/01.basics/The%20Radar%20Range%20... )

A static object is the worst-case scenario: you can't use its Doppler signature because it's not moving against the background, and the radar becomes less effective the moment you need it the most (at high speed).

As an aside, given the problem statement it should be possible to find a maximum speed that allows for timely detection and stopping; a rough version is sketched below.

Chances are it's still slow.
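
To put very rough numbers on that aside: a minimal sketch, assuming an idealized dry-road braking model and purely illustrative detection ranges (none of these figures come from a real automotive radar or vehicle spec):

    # Rough estimate: highest speed at which a car can still stop for a static
    # obstacle first confirmed at a given detection range. All constants are
    # illustrative assumptions, not measured sensor or vehicle data.
    MU = 0.7             # assumed dry-road friction coefficient
    G = 9.81             # m/s^2
    REACTION_TIME = 1.0  # s, assumed delay before braking actually starts

    def stopping_distance(v):
        # reaction distance + braking distance v^2 / (2 * mu * g)
        return v * REACTION_TIME + v ** 2 / (2 * MU * G)

    for detection_range in (40, 60, 100):  # metres, assumed detection ranges
        v = 0.0
        while stopping_distance(v + 0.1) <= detection_range:
            v += 0.1
        print(f"{detection_range} m -> max ~{v * 3.6:.0f} km/h")
    # prints roughly: 40 m -> ~63 km/h, 60 m -> ~81 km/h, 100 m -> ~111 km/h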

I wonder why the radar/lidar isn't helped by an ir coincidence rangefinder.


that's a very good explanation


> I don't get it, why is it so hard to detect stationary objects in front of you?

This could only be said by someone who has never worked on the problem. I don't get it, why is programming so hard, you are just telling the computer exactly what to do?


It seems a legitimate question. I have never worked on this problem either but I would have thought that detecting a stationary object right in front of the car should be one of the basic and simpler functions of a self driving system.

Obviously this doesn't seem to be the case but maybe you can enlighten us why this is hard?


False positives from road signs, overhead signs, radar-reflective trash. Commonly used radars have limited vertical discrimination.


Why would the radar even be pointed at areas several meters above the car?

Maybe radars can't be pointed at a narrow space; if that's the case, why not use lidar instead?


Most radars aren't coherent like lasers; their beams spread out. Also, radar measures speed more accurately than location.

Lidar is only used by R&D and taxi services; the price isn't reasonable for car buyers yet.


Instead of being condescending, this would have been a good time to explain the challenges.

A static object has a Doppler shift that scales linearly with vehicle speed and a y-axis position greater than zero but less than four metres. What makes that difficult to detect or distinguish?


Tesla are clowns. It's possible to do this right, but they're shipping a product they know isn't suited to the task.

(Lidar really does matter, but so does extremely rigorous testing.)


You do realize that when driving, 90% of the things around you are stationary, right? Also, it may not apply to this specific case, but try comparing a plastic bag in the street vs. a big rock. Telling those two apart isn't trivial, but you can only drive through one of them.


If you routinely drive through plastic bags on the street, prepare for some disappointment when it turns out that the bag had a rock inside it. The car should 100% also avoid bags or any obstacles.


I'd like the car to stop for plastic bags too please. Anything in the road directly in my path.

I mean that plastic bag could contain a rock or something, driving through it is just a bad idea.


what about a large leaf?

it’s dangerous to stop in the middle of the road unexpectedly too


Ya, but other cars that DON'T have autopilot/self-driving tech (e.g. Subaru EyeSight, as mentioned above) already have reliable emergency braking in case of obstacles. Aren't they already able to tell the difference between a plastic bag and a big rock? If they can, then it's already a solved problem, and Tesla should be able to do it reliably too, no?


People always lament how Tesla tries to mislead customers about its capability, but is there actually data that shows this is truly the case among Tesla owners? How many times does the "attention needed" beep need to sound before a customer can be considered reasonably informed that it is not fully self-driving?

Even among non-Tesla cars, whose manufacturers don't try to "mislead", 71 percent of people believe automatic emergency braking can avoid all crashes [0]. Is this percentage higher or lower for Tesla, and is that difference warranted?

[0] https://newsroom.aaa.com/2016/08/hit-brakes-not-self-braking...

[1] Quote from above "When traveling at 45 mph and approaching a static vehicle, a scenario designed to push systems beyond the stated limitations, the systems designed to prevent crashes reduced speeds by 74 percent overall and avoided crashes in 40 percent of scenarios. In contrast, systems designed to lessen crash severity were only able to reduce vehicle speed by 9 percent overall."


These cars are driving on public roads, so the question is not whether the user is informed but whether they are compliant. It doesn't take many ignored beeps to learn that a particular customer is not using the feature as intended, and to deactivate it for the safety of others. Offer to turn it back on after watching a training video at the dealer and signing some papers once; escalate to a mandatory counselling session after that; third strike and the feature is permanently off. Put it in the TOS so that asshole drivers can't sue and win in court. The customer should not be king when the safety of others is on the line.


So speeding is the manufacturer’s fault too? Informed by the car, non-compliant driver...

Oh, and Tesla does disable Autopilot if you ignore the hands-on-wheel warnings.


Tesla advertises their cars as having full self driving... hardware. Even if this is true[1], it's pretty misleading to advertise their cars as having "full self driving capability" because they have the _hardware_ for it. The problem is clearly that the software isn't ready yet.

[1] I think it's pretty debatable, and as far as I know it's still something of an open question to determine the minimum amount of hardware required for a fully autonomous car


> How many times does the "attention needed" beep need to sound before a customer can be considered reasonably informed that it is not fully self-driving?

I mean there is a warning telling you that it's not self driving. People are dumb, that's it.


I don’t think it’s helpful or even correct to say “people are dumb,” as if this is purely a user problem.

The system exists in a bad gray zone — one that works well enough most of the time that you stop paying attention. But it’s not foolproof, so you get the worst of cognitive bias here.

Personally I think this faux self driving (literally falsely named auto pilot) should be outlawed. We cannot depend on people to take over when the car generally is handling the situation most of the time. Either manual or fully automatic — this in between will continue to cause too many bad accidents.


Agreed: if you are not actively driving, your attention, and therefore your reaction time, will inevitably deteriorate. On top of that, even if you anticipate a problem while looking at the road, you have to wait long enough past the point where you expected the car to react before realising that it didn’t and that you need to take over, which delays your reaction even further. Which is why I have some sympathy for the drivers of experimental self-driving cars. It doesn’t excuse them playing with their smartphones at the wheel, but they are tasked with a really hard exercise while being legally liable if things go wrong.


How do we make the eventual evolution from faux self-driving to full self-driving if not through incremental improvements?

I'm far from an expert on the subject, but I assume real data must be acquired for that to happen.


You raise the bar on what you need to deliver, which will change your approach, and you don't ship products that aren't ready.


It seems to me that delaying self-driving cars for any appreciable amount of time will lead to more deaths/injuries, not fewer.


You're assuming that the quickest route to the safest auto-driving lies via incremental improvements to existing systems, and that's not necessarily true; see: AlphaGo Zero.


Fair point. I think a game with specific rules is probably better suited to the self-learning process that AlphaGo uses, whereas real-world driving rules are much more complex and nuanced. But I think it's still something worth considering.


How many non-Autopilot cars have crashed into parked vehicles? It happens a lot, to the extent that a lot of states have “move over” laws explicitly designed to reduce it.

Does Autopilot actually make the problem worse, or is it just more newsworthy?


If you go look at IIHS, they have tests that will tell you exactly how good non-Autopilot cars are at this kind of crash avoidance:

I cannot find any current (2018) car that crashes into the parked car.

(I only spot-checked prior years.) I found one in 2014 (Audi Q5) that hits the car at 1 mph when going 12 mph.

http://www.iihs.org/iihs/ratings/vehicle/v/audi/q5-4-door-su...

By contrast, the 2014 Subaru Outback avoids the collision even at 25 mph.

http://www.iihs.org/iihs/ratings/vehicle/v/subaru/outback-4-...


The Subaru Outback's collision avoidance is vastly superior to AP's, which probably says a lot about software quality. It certainly doesn't have better sensors.


They're equally capable according to that test, and it's not optional equipment on the Tesla.


My experience, over a few thousand miles of driving, says they're not; but sure, whatever.


How often are you attempting to drive into parked cars?


Cars in front of you decide to do random things like slam on the brakes or make sudden turns. EyeSight would consistently warn early and then brake as much as necessary. I only had it do emergency braking twice, and both would have been accidents, as I'd misjudged the distance and speed of other objects. They have you test it by driving towards a trash can as part of the sale; it works, so it's not just cars in a lane. More interestingly though, the warnings, especially as early as they happen, made me a better driver fairly quickly. Tangentially, the warning sound on the Outback is far more compelling than on the Tesla. Weird UX thing.


Their report for the Model S says it avoided the collisions. Clearly there’s more to it than that.


You asked about non-autopilot cars and how common it is. I gave you data.

I can also find no data that suggests any other TACC is crashing into vehicles (no NTSB studies, investigations, etc) at all.

So yeah, I'm going to conclude, based on what I've seen so far, that autopilot is part of the problem.

If you have data that says autopilot is not worse, I'd love to see it.


I asked about actual crashes. You gave me info about a controlled test where the exact car discussed in this article avoided the test collision. Given that the car discussed in this article crashed in real life, the controlled test is clearly not the final word in whether these other cars are experiencing crashes like this in the real world.

Also, I wasn't asking about how Autopilot compares to similar systems from other manufacturers, I was asking how it compares to the Mark I Human, which is a driving system that crashes into stationary vehicles with distressing frequency.


I'm just going to leave you to your non-data-driven skepticism of "everything other than Tesla", because this is clearly not a useful conversation.

That isn't what you originally asked (and as you see, I wasn't the only one who interpreted it this way), and imho it's not a useful comparison. I have no idea why one would ask "is autopilot worse" and then say "but only compared to humans, not taking into account the other systems that do the same thing and that humans are also using on a regular basis".

It doesn't answer the question whether autopilot is really worse or not, because nobody cares if it's better or worse than a straw man human who has disabled the collision avoidance on their car.


I'm not talking about people who disable collision avoidance. I'm talking about people driving cars that don't have it at all. These are, as far as I know, still the vast majority of cars out there.

The data you provided is clearly not sufficient to reach conclusions about actual crash rates, so I really don't see how it's useful to present.


and again, you literally have no data at all, sufficient or not, to back up your claims. Instead, you just shit on all data that doesn't agree with your pet theory as insufficient.

I'll take insufficient data over zero data any day of the week.


But that's not actual data on crashes. What the OP is suggesting is that crashes per mile driven is a more relevant statistic. Obviously self-driving cars will be involved in accidents; the question is, is it happening more frequently than with human drivers? If yes, then there's a problem and an indication that the technology is not yet mature.


Pardon my ignorance: what is a “move over” law? What do you need to do?


In FL, we have a law that says you either have to move over and change lanes to give police, firefighters, tow trucks, etc. enough safe room, or you have to slow to 20 mph below the posted speed limit.

A lot of police officers were being struck because you tend to steer in the direction you stare.


It means that when passing a stopped emergency vehicle, you need to change lanes to avoid passing right next to them (“move over”) or slow down substantially.


Thanks. I was already doing this even though I don’t know of any law that requires it where I live. You learn something every day.


I would assume they would teach you basic things like these in driving school? I suppose that's not the case?

Learning a "new" thing like that would probably make me quickly start to wonder what else I might be missing.


I think you need to differentiate between knowing about something and knowing about the precise label something has.


The first move over law was enacted in 1996 and they didn’t become standard until the 2000s. Believe it or not, a lot of us went to driving school before that.


That's a general problem with most news about accidents or deaths. They cherry pick the interesting stories, not the significant ones. The way I see news is that it's like fiction but with the extra excitement of being real, much like watching a movie that's "based on a true story". Its purpose is entertainment, not information or understanding of the world. Nobody should be making a decision to buy a Tesla or not based on stories like these.

I recently mentioned self-driving cars to somebody and her response was "Oh, one killed a woman recently, didn't it?" That was about the extent of what she knew about them because she watches the news.


Yes, I say that any risk that makes the news is probably not worth worrying about.

What I don’t get is all the HN commenters acting like a few crashes proves that Autopilot is irredeemably dangerous.


Because it's way oversold for what it does, and if you compare it against other traffic-aware cruise control systems, all the data suggests it is, in fact, more dangerous than they are (probably in part because of the overselling).

I can't find an NTSB investigation or study that pegs traffic-aware cruise control as the cause of an accident in any case except Tesla's.

Of course, it seems to Tesla folks this is just evidence they are gunning for Tesla instead of evidence that maybe it is actually worse :)


What data is there? I didn’t realize there was enough to reach a conclusion yet.


Start with: https://www.ntsb.gov/news/press-releases/Pages/PR20170912.as...

Also, 18 crashes so far where Autopilot was enabled, vs. literally zero known crashes so far across everyone else?

Do you not find it somewhat interesting that in zero other cases has the NTSB found that TACC or the design of a TACC played a role in the crash?

I'd also point out: the supplier of their original crash tech broke up with them and ordered them to stop using it because they felt what was happening was unsafe (see Mobileye's letter to Tesla).

Tesla later tried to spin it as "they are mad we are breaking up with them", but Mobileye then produced data showing that Tesla had tried to renew the contract and Mobileye refused.

In fact, plenty of other car companies are building self-driving tech, and currently relying on Mobileye, and Mobileye has not refused to supply them - only Tesla.

Like, I'm not sure how this set of stuff doesn't tilt a person into "hey, maybe this is something to be worried about; they should produce data showing why it isn't".


I'm extremely skeptical that there have been zero TACC-involved crashes with other cars. Has the NTSB or other organization actually put out a statement to that effect somewhere, or are you just basing this on an inability to locate any when searching?


Again, please provide any data that says otherwise. At all. Any!

So far the data in this conversation has been incredibly one sided, with me providing data, and you providing skepticism.

These systems exist in a lot of cars, yet nothing before Tesla suggests any issues anywhere (I'm ignoring things like news reports, which are fairly biased). Therefore, the extraordinary claim is not that Tesla is worse; it's that Tesla isn't worse.


The "data" you've provided so far consists of:

1) A link to IIHS crash avoidance test results. The Model S got a perfect score in those results, so those results don't at all support any claim that the Model S is worse. (I can certainly accept that the test is not definitive and the Model S could be worse than others while still getting a perfect score. My point is merely that this is not relevant data.)

2. A mention of 18 crashes with Autopilot enabled. Without a baseline to compare it to or at least some sort of rate based on the total fleet activity, this is more of an anecdote than data. Note that the average American driver crashes about once every 18 years (https://www.forbes.com/sites/moneybuilder/2011/07/27/how-man...) and there are around 175,000 Teslas on American roads right now (https://www.greencarreports.com/news/1115454_tesla-expects-2...). If Tesla drivers crash at the average rate, we'd expect to see about 175000/18/365 = 26.6 Tesla crashes every day, and that's just in the US. 18 Autopilot crashes ever is... not very high. (This back-of-the-envelope calculation is sketched below.)

3. A completely unsourced and frankly implausible claim that the crash rate for other manufacturers' driver assistance features is zero.
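
For what it's worth, the baseline arithmetic in point 2 works out roughly like this (the fleet size and the once-every-18-years figure are the approximate numbers from the links above, not precise data):

    # Expected daily crash count for the US Tesla fleet if Tesla drivers
    # crashed at the average US rate (rough figures from the links above).
    us_tesla_fleet = 175_000              # approximate Teslas on US roads
    crashes_per_driver_per_year = 1 / 18  # average driver: ~one crash per 18 years

    expected_per_day = us_tesla_fleet * crashes_per_driver_per_year / 365
    print(f"~{expected_per_day:.1f} expected crashes per day")   # ~26.6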


So, still zero data from you to back up any of your theories, at all, but still more skepticism and random crapping on literally the only data provided so far.

Okay, i'm out.


An average of 26.6 Tesla crashes per day isn’t data? It’s more relevant or more reliable than anything you’ve given.


A quick search gives me this http://www.thetruthaboutcars.com/2016/08/automatic-emergency.... In particular "At 45 mph, the crash-avoidance systems slowed the test vehicle by 74 percent, on average, and prevented collisions 40 percent of the time.", i.e. it fails 60 percent of the time.

Do you really believe other cars' automatic emergency braking works 100 percent of the time with perfect reliability? They don't show up because it's non-news.


No, I don't believe that. I just would like this person to provide literally any data at all for their pet theories instead of just crapping on everyone else's imperfect data (which is pretty typically the way of the world).

So yeah, what I want is a citation, or anything other than smug hand-waving and dismissiveness, to actually make it a useful discussion.


Has Tesla blamed the driver yet? They might get away with that here. The road was not divided and was unsuitable for their lane-keeping system. The pavement markings are unusual.[1]

Of course, as usual, their obstacle detection failed to detect a stationary obstacle that didn't look like the rear end of a car in the same lane. We need a minimum standard for a vehicle which takes automated control of steering and braking. It must reliably stop for obstacles.

[1] https://goo.gl/maps/BjqTZoD5Yws


Here's the exact spot on that road that matches the picture in the article https://goo.gl/maps/pj3X4gAuG672


I remember reading people saying that insurance premiums would plummet for Teslas because of how much safer they would be on the road.

Today I read Tesla is getting into the insurance business because premiums are getting out of hand. It makes some sense since internally they’ll have more data they can use to deny claims.

It’s interesting how reality plays out.

https://electrek.co/2018/05/29/tesla-insuremytesla-insurance...


I’m skeptical that it’s because premiums are getting out of hand. The report that Teslas are the most expensive to insure only looked at the top of the line version of each model. For the Model S they looked at the P100D, which costs something like $140,000, and is one of the fastest accelerating street cars ever made. I’m not surprised that’s expensive to insure, but that doesn’t mean a more mundane one will be.


They seem to have impressive repair costs. If I was an insurer I'd quote high because of that, not because they were more likely to crash. (eg https://cleantechnica.com/2018/05/20/heres-what-7000-of-dama...)


To be fair, a major reason they are expensive to insure is because parts and labor are in low supply and high demand, thus high repair costs.


Also, aluminum is expensive to repair.


I live in Laguna, cycled by this accident today, own a Tesla, and can attest to consistent autopilot issues in this section of the highway. The road widens with a turnoff to the right, and in my experience that is exactly the direction autopilot wants to go every time, rather than follow the center line, even when it is tracking a lead vehicle that does so. There are no lines to the right, so I’m not quite sure why it does this. But I can understand how an inattentive driver might end up in this situation, because until this point in the road autopilot works great.


I really wish all the car manufacturers would stop with this bullshit "you still have to be driving it" self driving nonsense.

Claiming that the driver is still responsible for driving the car, while using a system designed to encourage the "driver" not to pay attention, is a dumb idea, and all I can see is this kind of thing leading to regulations that delay actual self-driving cars.

I'm not a self-driving car fanboy or anything. I don't think it's just around the corner, but it's clearly going to happen /eventually/ and I can't imagine it being more dangerous than regular drivers. But this kind of "self-driving but not" feature feels like the sort of thing that calls out to government agencies for regulation, and sufficiently "dumb" mistakes from these systems seem like the sort of thing that triggers reactive over-regulation.


I don't follow it super closely, but there is exactly one car manufacturer I can think of that is guilty of pushing the self-driving myth. The established car makers seem to market it as an assist, not as an autopilot. Tesla is the outlier; everyone else is much more conservative.


I think there is more than one. This Audi ad from 2016 shows it driving around with no hands on the wheel. They don't get as much press as Tesla, and I don't know how good their system is, but it sure does make a bold implication (although their choice of the "Piloted Driving" name is probably a much safer choice than Tesla's).

https://youtu.be/I2j2-DqcPfM


Well do you think a T. rex is going to be able to drive? If you visit the website [1] the large header text says:

> Audi development engineer Thomas Müller and his team sent “Lu Ban” and “Kong Ming”, two piloted Audi A7 Sportback prototypes, on a very special kind of test drive.

This is a very far cry from what Tesla and Musk are doing. For starters you can't even buy the car yet. If you look for information on "Audi Piloted Driving" [2] you can see they're just as cautious as you'd imagine a German company to be:

> In 2017 Audi will introduce what’s expected to be the world’s first to-market Level 3 automated driving system with “Traffic Jam Pilot” in the next generation Audi A8. The system will give drivers the option to travel hands-free up to 35 mph, when certain conditions are met prior to enabling this feature — for instance, the vehicle will ensure it is on a limited-access, divided highway.

Note that it will only work on highways, but also only at non-highway speeds. Stop and go traffic.

> In 2020-2021, Audi will introduce a Level 4 “Highway Pilot” feature—technology similar to what has been demonstrated in our concept vehicle “Jack”—that offers hands-free driving at posted limited access highway speeds in which the vehicle can execute lane changes and pass cars independently.

Seems pretty cautious to me.

[1] https://www.audi.com/en/innovation/piloteddriving/piloted_dr...

[2] http://media.audiusa.com/models/piloted-driving


Is it though? Piloted means human controlled, often by a highly trained professional. I agree that auto pilot is a confusing term, but in an aircraft auto pilot systems are actually more primitive than Tesla's offering. To me, it just sounds like a cheesy way to mimic the name people are familiar with for self-driving features without infringing trademark. Because taken literally, it suggests a more capable system than auto pilot does.


> "but in an aircraft auto pilot systems are actually more primitive than Tesla's offering."

I contend that while that's true, it's not something your typical consumer is aware of. There is a common misconception that modern airline pilots simply press a button and then kick back as the plane takes off, flies, and lands itself.


Well, no. Mercedes had to pull Drive Pilot (notice the name...) ads that, unlike Tesla's, actually claimed the car was self-driving... for a system significantly underperforming Tesla's.

Tesla is outlier mostly in the disproportionate amount of flak it takes.

http://www.autonews.com/article/20160801/RETAIL03/308019956/...


Auto pilot != Self driving. People love to conflate these two so much that the people saying it rely on everybody else conflating it to justify their ignorance.

This is just like Fox "some people say" - Fox people say! Or Javascript Bitcoin miners - Monero is not Bitcoin! It's snowing so there is no global warming - It is "climate change" but the entire globe is warming! This hurricane is just weather, not climate change - it is the 10th 1 in 500 year hurricane we have had this month!

I could go on. Sure, talking about future self driving features (and selling them) is poor form from Tesla if they want to avoid this, so they aren't helping - but they have always been clear that it is not self driving. It is the listeners/readers who are incorrectly connecting the dots.


https://www.tesla.com/autopilot/ -> big text saying "Full Self-Driving Hardware on All Cars" + video showcasing said capability.

You have to go way down to the "Enhanced Autopilot" section to read the smaller text that mentions that it's subject to regulatory approval, and that the driver is responsible.

If you don't think that's misleading advertising, I don't know what is.


In the purchase section it clearly breaks the capabilities out, including which ones are active/enabled. In addition, the car manual, and the UI when you enable it, also spell it out: keep your hands on the wheel and pay attention so you can take over immediately.


People don't read manuals. People don't read instructions in the UI. That is well-established reality. People glance at a web page or ad and see the words "full self-driving" and that's what sets their expectations.


Apparently they don't read pop-ups prior to driving, the ends of sentences ("full self driving capable hardware"), definitions of words, or hacker news comments, or listen to audible warnings to keep their hands on the steering wheel. At what point does a person become responsible for their own actions?


Rephrased: at what point should Tesla remove this ambiguity from their marketing materials?


Depends on the ambiguity. I would agree that they should remove the video unless they put a giant disclaimer on it right now.


By smaller text do you mean the text that is at the same sized font as every paragraph on the page except for the headings? Right in the middle of the page and not in a disclaimer wall of text (which also exists at the bottom with bold.)

I will admit that showing a video of fully hands-off driving, not on a highway, underneath a statement about the hardware, showing off the software being used in a way that is not recommended in the manual, is bad.

As I stated before in my first comment, they shouldn't be selling "self driving" that isn't available. I see no issue with promoting the hardware though and selling future proofing.


Well then maybe they shouldn't call it auto pilot? I mean, how dumb of a name could they have chosen if they didn't want people to think the car can drive itself?

They should also retrain their sales staff because on my test drive the saleswoman specifically told me the car could drive itself and that I should take my hands off the wheel to test it out.

They should also retrain their CEO who keeps talking about "self driving" in reference to existing cars:

https://www.tesla.com/blog/all-tesla-cars-being-produced-now...

> "All Tesla Cars Being Produced Now Have Full Self-Driving Hardware"


Have full self driving hardware != full self driving software != full self driving capabilities.

Auto pilot - https://en.wikipedia.org/wiki/Autopilot

"An autopilot is a system used to control the trajectory of an aircraft without constant 'hands-on' control by a human operator being required. Autopilots do not replace human operators, but instead they assist them in controlling the aircraft. This allows them to focus on broader aspects of operations such as monitoring the trajectory, weather and systems.[1]"

You are doing the exact conflating I am referring to.


If you want to be maximally pedantic, advanced autopilots are indeed capable of flying the plane by themselves. Modern commercial and military aircraft can take off, fly a route, and put the plane back on the ground, all by themselves. And autonomously flying a route is not unusual at all even in better-equipped small aircraft.

http://www.airliners.net/forum/viewtopic.php?t=1026435

Autoland has been around since the 1940s and in commercial use since the 1960s.

https://en.wikipedia.org/wiki/Autoland.

It's pretty obvious what Tesla is getting at when they are advertising "every car sold with self-driving hardware".

The asterisk is of course that the hardware is self-driving capable, but the software isn't. But it's not hard to see where people are getting that idea from - it's Tesla's advertising and sales material.


Tesla sells a car that everyone from the sales people to the CEO says can drive itself, has a feature called Autopilot where the car drives itself and you're saying no one should be confused that the car can't drive itself...

Regardless, the actual people who have gotten into an accident because of this should be proof enough that Tesla is misleading people.


Those accidents and these comments and downvotes are a greater proof that people like to mislead themselves.

You must have missed the definition of autopilot in my comment.


I thought the car crashing into the parked police car, crashing into a parked fire engine, decapitating a driver under a tractor trailer, and driving straight into a road divider at full speed were better proof that people are misled by the term "autopilot."

Tesla likes to play both sides: it is "autopilot" when it works, but when it doesn't, the driver has literally under six seconds (according to Tesla) to avert a fatal accident.


>the driver has literally under six seconds (according to Tesla) to avert a fatal accident.

This is what gets me the most. Right after a tragedy Tesla is always quick to put out a victim-blaming PR piece claiming that it's all the driver's fault and that they had, e.g., "6 seconds" where they could see the divider, yet they never publish anything like how long they had between when the car left the lane and the collision. Sure, it was 6 seconds where they could see the barrier, but how long was it between when autopilot went into casual murder mode and when it hit the barrier?

We've all seen videos like this one https://www.reddit.com/r/teslamotors/comments/8a0jfh/autopil... where autopilot swerves towards an obstacle and the driver only has a split second to react or be the next fatality. I think it's about time that Tesla acknowledges that their sensor platform is fundamentally dangerous. They never should have made such huge promises without lidar of any form. They aren't just going to be able to magically fix this with a software update.


No, it is always autopilot, you must pay attention and it never is or has been fully self driving.


Honestly the whole system is in the "uncanny valley." It is just effective enough to leave people with a false sense of security but just buggy enough to continue to kill people.

Any system that randomly needs a massive correction in under six seconds between potentially several minutes of inaction is too dangerous to be used on public roads.

I didn't sign Tesla's little disclaimer, I shouldn't have to share roads with people beta testing dangerous software.


You mean you shouldn't have to share roads with bad drivers. I do think that this probably makes bad drivers worse and good drivers marginally assisted.


It's great that you're willing to take a prescriptivist rather than descriptivist linguistic stance, but it's not that useful when it comes to consumer behavior.


I hardly think that is the distinction. The word is literally accompanied by a definition after its first use.

"Your Tesla will match speed to traffic conditions, keep within a lane, automatically change lanes without requiring driver input, transition from one freeway to another, exit the freeway when your destination is near, self-park when near a parking spot and be summoned to and from your garage.

Tesla’s Enhanced Autopilot software has begun rolling out and features will continue to be introduced as validation is completed, subject to regulatory approval. Every driver is responsible for remaining alert and active when using Autopilot, and must be prepared to take action at any time."


Don't pretend that Tesla doesn't know exactly what people think when they hear "autopilot". It can be literally true and still deceptive marketing, like a jar of yogurt labeled "light" because it's lighter in color than the regular kind.


[flagged]


Like how you're ignoring the repeated assertions that Tesla salespeople and Musk are misrepresenting or outright lying about the "autopilot's" capabilities?

If you want to debate the point, you can't just ignore all the parts that you inconveniently have no answer to.


I ignored what Tesla sales people say because it is anecdotal and contrary to what I experienced when I bought mine.

Far from ignoring the parts "I have no answer to", I acknowledge the parts that are misleading, like the video. I can only presume that you ignored reading my comments and enjoy misleading yourself.

I'll place it here for you again to read.

"I will admit showing a video of full hands off video, not on a highway, underneath a statement about the hardware, showing off software being used in a way that is not recommended in the manual is bad."

Good on you for editing "smug pedant" out of your comment.


Those things are nothing alike. Some are facts that can be checked, others are definitions. If people all agree that the sun won't come up tomorrow, it's still coming up. If people all agree to start using a word in a new way, then it's time to change the dictionary.


So we should just start calling Monero Bitcoin because many people are consistently wrong?


No, we shouldn't.

But a company that advertises software against JavaScript-Bitcoin-miners knows exactly that people will think their software helps against JavaScript-Monero-miners.

If it's a reasonable company, they'll try to deal with this public misunderstanding and point out that their software can't help with Monero-miners.

If it's an asshole company, they'll cash in on people mistakenly buying their software. And even worse, those people will then think they're safe when they're not.

Such an asshole company deserves all of the negative PR, so that they're hopefully at some point forced to change their business practices.


There are no javascript Bitcoin miners, people just think they exist. I mean technically it is possible, but it would be very pointless.

People just lump all crypto together because of ignorance. As is the case here. Autopilot is not fully self-driving.


It's the difference between Tesla's capital-A "Autopilot" product and the English word lower-case-a "autopilot." The language doesn't define Autopilot, but it does define autopilot, like it doesn't define Monero. If I had a product that I called Meat and it wasn't meat, I don't get to say "well the word 'meat' originally just meant solid food," when my customers complain I misled them.


> People love to conflate

Those people being Tesla's marketing team, presumably.

> they have always been clear that it is not self driving

That is very far away from the truth.


Based on all of your citations for evidence, which should be numerous, since it is by definition information delivered to the public.


You're like Lewis Carroll's Humpty Dumpty

> what I say is always what I mean, and if you misunderstand, it's your fault.

Linguistic pragmatics 101: communication is a two way street.


That crosses into personal attack, which please don't.

https://news.ycombinator.com/newsguidelines.html


So Wikipedia's consensus that I linked to is wrong?


You've unfortunately contributed the most to this flamewar. We don't want those here. Please stop.

https://news.ycombinator.com/newsguidelines.html


Thank you. I didn't take any offense to Humpty Dumpty; I understand his point. I'm more surprised that somebody calling me a "smug pedant" led to me being banned for making what seems a simple point: Tesla never said their car was fully self driving, and went to fairly great lengths to explain and notify that it isn't (sans the video, which I agree implies a lot too much).


You weren't banned! We'd have said so if we did that.

Being called a rude name is bad (though I think the commenter edited it out?), but like every user here, you still need to follow the rules regardless of what someone else does.

The core issue is the flamewar thing—by which I mean high-quantity, low-quality controversies—which is the essence of what we're asking people to avoid in comment threads.


Fair enough, I do see this as high quantity, low quality - although I do wish people would understand the point. I didn't realise that equalled a flame war. I just wanted to stand up for myself and an obvious truth. I didn't really think it was a flamewar until there was name slinging and the arguments deviated away from a reasoned rebuttal. The snark level was definitely escalating, so apologies if I contributed to that - my karma was duly punished :)


That thank you was actually directed at dang. Anyway, why am I not surprised that even that comment was downvoted? I am truly amazed by this conversation and fail to understand how those commenting here pass the bar for civil conversation.


It is unfortunate that they were name called “smug pedant”.


Yes, but not relevant to the main point, which is that users need not to proliferate high-quantity-low-quality controversies (a.k.a. flamewars) on this site.

(If we saw a commenter calling another a smug pedant, of course we'd tell them not to. I didn't see it though.)


From Wired's "People Keep Confusing Their Teslas for Self-Driving Cars" (https://www.wired.com/story/tesla-autopilot-crash-dui/):

A spokesperson pointed out that the owner's manual reads, “Autosteer is not designed to, and will not, steer Model S around objects partially or completely in the driving lane”.

I haven't seen the encouragement to not pay attention. In some ad on TV, maybe it was for a new Cadillac, the driver opens a soda with both hands off the wheel, and that seemed to be as far as they were willing to go.


Yes, because people are known for diligently reading fine print. /s

Cadillac has an eye-tracking system that deactivates the self-driving features if you aren't watching the road, and their system is limited to specific highways they have mapped.

Tesla has a ridiculously inadequate attentiveness control, and they do little to nothing to remind people who regularly fail to keep their hands on the wheel. They don't nag people because they want them to believe the car is more capable than it actually is.

And, I say that as a proud owner of a Model S with Autopilot v1.


If their life is on the line they'd be stupid not to. Which, of course, a lot of people are. But the fact that it's common doesn't take away the blame.


Almost nobody reads the entire owner's manual for their car, and even those who do read them skim over warnings like that.


The warnings are shown on the behind-wheel display every time you engage Autopilot. Unmissable.


> Cadillac has an eye-tracking system that deactivates the self-driving features if you aren't watching the road, and their system is limited to specific highways they have mapped.

I don't even get the point of that. Why even take your hands off the wheel (or, if knee-steering, your foot off the pedal) if you can't even take your eyes off the road? I haven't driven one of these vehicles so maybe I'm missing something. It seems intuitive that automation would free us to do other tasks, but I'm not seeing that in these early "self-driving" implementations. It's more like these car companies are actually selling these cars as experiments, at the expense of people's wallets and possibly their lives.


Yes, because they definitely do not call the technology "Auto-pilot".


It's called "Autosteer". That's misleading to non techies who don't have a firm grasp on the limitations of the tech.

They should be legally required to call it something like "lane assist".


You're right, 'lane assist' is probably more appropriate. However, it's not 'autobrake' or 'auto-no-crash', the feature is actually called 'autopilot'. (why does that article say 'autosteer')

An autopilot maintains heading, and that's all. Many sailors are not techies --- an autopilot on a boat will run you right into an obstacle.


> An autopilot maintains heading, and that's all.

Aircraft autopilots can follow a flight plan, maintain an altitude or rate of climb/descent, do an instrument approach, and (using ADS-B) perform an automated avoidance maneuver when another aircraft is too close.


Some of them can. Others are no more than a gyroscope linked to the rudder.


Sure, but that's like pointing out that Toyota Corollas exist.


In the context of a conversation where people say “car” implies something capable of 200MPH, that would be worth pointing out.


I need to stop being poor and fly a Citation. The closest I've seen a Garmin 430 do is fly a flight path using waypoints.


It says “autosteer” because that’s the name of one of the components. Tesla’s Autopilot is autosteer, traffic-aware cruise control, and some safety features like automatic emergency braking.


Don't you see? When it's time to buy the car, it's self-driving. When it's time to take responsibility for avoiding obstacles, you're the one driving.


>I'm not a self-driving car fanboy or anything. I don't think it's just around the corner, but it's clearly going to happen /eventually/ ...

Self driving vehicles have been used at Rio Tinto's Pilbara mine for a couple of years. Sure, it's on private land and the complexity of the task is far smaller than for public roads, but it's happening now.

http://www.abc.net.au/news/2015-10-18/rio-tinto-opens-worlds...


I wrote the initial interface specification for the co-ordination system in Rio's open-pit mines. The mere fact that we were able to account for the state of every moving thing in the pit makes this a completely different proposition to self-driving vehicles in an uncontrolled real-world environment. At a guess it's at least ten to a hundred times easier.


Eventually this could be put into place in the broader traffic ecosystem - if all the cars on the road were required to have a transponder sort of arrangement on board, you could have a central point of control, and perhaps tie in people's phones etc to locate pedestrians. I do want to note that I am aware that there is an unfathomably large amount of complexity behind that however.


There's a movement towards this; look up "Vehicle-to-Everything" or "V2X". The idea is to enable things like one vehicle transmitting sensory information or intent to others around it, helping to prevent rear-end collisions, etc. My personal take is that it's well intentioned, but there will always be things that aren't part of the corpus of communicating devices that will then limit the utility. I suspect the communication of sensor data is the more useful part than some of the demos I've seen so far of, for example, a car sending an alert to a person's phone so they don't step into a driveway when it's reversing.
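
As a rough illustration of what such a broadcast might carry, here is a toy sketch; the message fields, units, and the V2XMessage name are invented for illustration and don't follow DSRC, C-V2X, or any other real standard:

    # Hypothetical sketch of a V2X-style broadcast message. Field names and
    # units are invented for illustration; this is not any real standard.
    from dataclasses import dataclass, asdict
    import json, time

    @dataclass
    class V2XMessage:
        sender_id: str      # anonymized vehicle identifier
        lat: float          # position, decimal degrees
        lon: float
        speed_mps: float    # current speed in meters/second
        heading_deg: float  # compass heading
        intent: str         # e.g. "reversing", "braking", "lane_change_left"
        timestamp: float    # seconds since epoch

        def to_wire(self) -> bytes:
            """Serialize for broadcast over whatever radio link is in use."""
            return json.dumps(asdict(self)).encode()

    # A reversing car broadcasts its intent so nearby receivers (other cars,
    # maybe phones) can decide whether to warn their users.
    msg = V2XMessage("veh-1234", 33.5427, -117.7854, 1.2, 270.0,
                     "reversing", time.time())
    print(msg.to_wire())

The hard part, as noted, is everything on the road that isn't broadcasting anything.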


Because my dog will totally have his mobile phone on him when I'm chasing him down. Or my little kid chasing after his ball...


And when software on a car assumes pedestrians always have a mobile device on their person, charged and ready for a split-second alert? IMO this would be an absurd design.


It'd be a tool in the toolkit, in my hypothetical. I mean, by the time autonomous cars were common enough on the road, we might all be tagged with an rfid chip as a normal identification or something... I don't disagree that it's an absurd idea, but it's just a devils advocate position I'm discussing here.


One can also see elevators as self-driving electric vertical motion vehicles. Sure, they are not operating on public roads, and the extra constraints reduce complexity, but they've been around for a while.

Personally, I don't see mining operations as anything remotely close to autonomous vehicles on public roads. You have a controlled environment in which paid workers, trained to interact with the equipment and be aware of its limitations and dangers, perform those interactions in a strictly constrained manner, with the autonomous equipment clearly visible as such, and while protected by all kinds of insurance and work-specific regulations.

There's none of that even in the testing stage of current "autonomous" driving vehicles.


Very large work sites are far from completely controlled environments. Visitors, wildlife, weather, equipment malfunctions, human error, and a host of edge cases from trespassers to protesters.


See neighbor comment from someone who actually knows of what they speak: https://news.ycombinator.com/item?id=17184043


Visitors, trespassers and protesters are all a very remote possibility in the Rio Tinto mines mentioned, and certainly wouldn't be a surprise if they did show up.


No real traffic and predetermined path cleared of unexpected obstacles.

All vehicles can safely communicate with each other and are controlled from central location.

It is automated driving but not even close to "self-driving".


> Self driving vehicles have been used at Rio Tinto's Pilbara mine for a couple of years.

Accidents still happen up there.

https://www.youtube.com/watch?v=5WZCweaakOg

(*The video may not explicitly be of an autonomous truck, I'm not entirely sure, but I'm just using it as an illustration.)


Self driving strikes me as a technology with a hard boolean quality threshold. Either it is as good as a human or it just doesn't work.

This is a lot like rocketry. A rocket that meets 95% of its design requirements explodes, crashes, or fails to reach the proper orbit. It's damn near 100% or failure. Too bad Elon can't see this.

I think Tesla's obsession with autopilot is misplaced. Give me a solid reliable affordable EV with good range. Autopilot would be nice but only if it works. I can wait for that. But a solid EV is something I'm actually more excited about believe it or not.

I'm sort of afraid Tesla will throw the whole game in pursuit of self driving tech.


Elon's argument is that it meets your hard boolean quality threshold and is as good as a human, and he's aiming for 10x lower accidents than humans. That may not be true but that's what he's going for.


I think he can see it, but the first seriously solid entrant into the space is going to have a huge advantage and so that drives their interest in Autopilot.

Eventually, you're not marketing to consumers but to fleet owners.

Outside of that, he's in the grey area where he can build product excitement and fall back onto the "Keep your hands on the wheel" type of fineprint. I agree with those saying that it's far from ideal.


It's not even necessarily a matter of inattention. Imagine that the autopilot was driving along just fine, and when it was just a few feet away from the parked cruiser suddenly veered towards it for whatever reason. It might be physically impossible for any human, no matter how well they are paying attention, to intervene and avoid a collision.

I'm not saying that's what happened, but it's certainly possible. The mere fact that it's possible to me casts doubt on this entire design philosophy.


From the article:

> "Tesla has always been clear that Autopilot doesn't make the car impervious to all accidents, and before a driver can use Autopilot, they must accept a dialogue box which states that 'Autopilot is designed for use on highways that have a center divider and clear lane markings,'" a Tesla spokesperson said in an emailed statement.

From the picture at the top of the story, it is clear there is no divider on the stretch of road where the accident occurred.

Maybe it should detect when it is being misused, and start blaring loud noises, do its best to make the driver carsick, or simply immobilize the car.
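
The detection side of that wouldn't even need to be sophisticated. A toy sketch of the escalation logic, where road_class, has_center_divider, and lane_confidence are assumed signals rather than anything Tesla actually exposes:

    # Hypothetical escalation policy for off-design use of a driver-assist
    # feature. All inputs are assumed signals, not real Tesla APIs.
    def misuse_response(road_class: str, has_center_divider: bool,
                        lane_confidence: float) -> str:
        designed_for = (road_class == "highway"
                        and has_center_divider
                        and lane_confidence > 0.8)
        if designed_for:
            return "allow"
        if lane_confidence > 0.5:
            return "audible warning, require hands on wheel"
        return "warn loudly, then slow down and disengage safely"

    # A two-lane coastal road with no divider and mediocre lane markings:
    print(misuse_response("surface_street", False, 0.6))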


Musk demos auto-pilot to the press in response to a recent crash in an area that is: A) not a highway, and B) has no lane dividers:

https://www.youtube.com/watch?v=AO33rOofFpg&t=2m18s

Also takes his hands off the wheel while using it in a manner that if someone else did and died, Tesla would slam them in the press for doing so, using dodgy-from-a-privacy-perspective and potentially-inaccurate (frequent reports of hands on wheel false negatives) telemetry.


it's utterly insane

uh, isn't a self-driving car you have to drive an oxymoron?? does anybody understand words??


I agree, they should publicly distance themselves from the self-driving moniker until they at least reach level 4 coverage. At level 3, this should at best be described as 'driver assistance', or maybe a more descriptive but less palatable marketing term could be 'training wheels'---you still have to drive, but if you take your eyes off the road you might not die immediately like in a 'real' car.


Agree, except self driving car regs have been published.


Cadillac Super Cruise doesn't claim you still have to be driving it. It's hands free. It watches your eyeballs to make sure you're paying attention, otherwise it drives the car. But it only does this on specifically supported roads. It also doesn't claim to be fully autonomous.


Blame Elon Musk. I'm serious.

He's been promising "self-driving" for years, except the damn technology isn't ready/safe enough for what people are clearly using it for. The manufacturer puts a warning label on it, but the usage model/UI is clearly flawed.

His engineers know this, which is why Tesla's self-driving group has had such a high turnover rate over the last few years. Musk keeps overriding them.


Tesla's "Auto-pilot" is one of the most blatant market-speak lies I have come across lately, and they have been getting away it.


"Move fast and break things" is probably the worst philosophy to sweep over SV. It encourages a worldview where externalities that potentially break laws and maim or kill other people should be disregarded in the pursuit of "progress" and profits. But it's nice to see the HN community finally start to come to this realization


That.... was just Facebook's idea. I don't know any other company apart from those founded by former Facebook employees that took the monicker seriously.

A more mainstream paradigm was 'disrupt!', as in disrupt old industries by applying the rules in ways that they couldn't, aided by technology that they are disinclined to use. For instance, Uber used 'disrupt!' as the centerpiece of their business, acting as a contracting service for limo companies before finally expanding the definition of 'limo' to include any Toyota made after 2015.


Facebook may have invented the slogan, but others still took the concept and ran with it. There's various incubators in SV that repeat that same mantra to startups because they heard it from one of the big boys (Facebook) or read about it in some VC Medium post. I'm not pointing fingers, but just saying that even in such innovative places people will still blindly follow those paradigms like it's conditioning.


There are a lot of companies who try to be like Facebook and copy everything about them. Look at how open work spaces are becoming popular.


It started at Facebook and made sense for Facebook since nobody’s life is ruined if they can’t use Facebook for 5 minutes. It’s not appropriate for Tesla or even Google.


"Move fast and break things" was Facebook's idea, not Tesla's.


It's Facebook's motto, but the mindset is far more widespread.


I've only seen others apply that to Tesla, I've never seen them apply it to themselves.


"Move fast and break things" was never meant for actual real world physical things (or laws). It was a mantra that encouraged breaking sacred cows and assumptions but has been bastardized and vilified by the anti-big tech crowd. Obviously you can be a little more lax on your production quality if you're building inane web apps rather than rocket ships or medical devices.


[flagged]


No, it hasn't (see upthread, where you repeated this claim).


False. Read the facts again and stop spreading wrong conclusions or facts. By the way, the study you are thinking of is not open, so it was not verified; I think there is legal action underway to open the data.


I feel like I-5 between SF and LA is probably one of the safer places to use it. Any place else, maybe not so much.


[flagged]


No, that's not at all clear, and is in fact unlikely; rather: some of the capabilities in the basket of features Tesla calls "autopilot" are associated with lower crash rates, and it's not at all clear which; there are steep reductions in accidents, possibly exceeding Tesla's numbers, from cars with automatic emergency braking. Just because AEBs are useful safety features does not mean that auto-steering is necessarily on net a safe feature.


Thank you for saying this. I'm just astounded people don't question things logically


No, the NHTSA found no such thing. They did, however, determine that Tesla was not participating in their investigations in good faith.


This is false, stop this Tesla propaganda with false facts.


Maybe so, but please don't post unsubstantive comments to Hacker News.


The person posted the same false fact/conclusion - that there is data to prove Autopilot is 40% safer - all over HN, so it looked to me like a misinformation campaign.

You are right though: it may look like a PR campaign, but I can't be sure; it could be just a misinformed fanboy.


But not any worse than the term cruise control really.


Since circa 1968, "cruise control" has been used to refer to automatic throttle adjustment to maintain speed. [1] Is there some previous meaning that makes you think it misleading? Because I've never thought of it as meaning anything else.

[1] https://en.wikipedia.org/wiki/Cruise_control#History


From your link:

The first car with Teetor's system was the 1958 Imperial (called "Auto-pilot") using a speed dial on the dashboard.

More details here:

http://www.imperialclub.com/Yr/1958/58AutoPilot/index.htm

http://www.imperialclub.com/Articles/58AutoPilot/index.htm

How ironic. I wonder how many drivers using that system crashed, thinking it would do everything for them...


From looking at the photos in the latter article, I'm guessing zero people crashed expecting it to do everything for them. The UI is a knob on which you set the speed you want.

The state of the art in computers at the time was the RAMAC 305, a vacuum tube computer which featured IBM's first disc drive. The computer itself was meant to be installed in a room 30'x50'. The disk drive itself was 4' across, weighed about a ton, and held 5 million characters (6 bits each). All this could be yours for only $300k/year (in current dollars).

Given that, and given that the transistor radio was introduced only a few years before, I doubt anybody expected the sort of magic we see as commonplace today. And if they did, they would let go of the wheel and find out in about 2 seconds that it was not staying on course.


Confusion over the function of cruise control was the subject of a long series of urban legends. I remember hearing at least one of these when I was growing up in the 90s. Ironically, the lesson of many of these stories is that the name for a fully self-driving system is "autopilot" not "cruise control".

> [1995] This guy saves up his money and finally gets the van he always wanted. Fridge and tv in the back, all the works. He starts driving out on a country road that leads to his home. He sets the van on cruise control and gets out of the drivers seat and goes into the back to get a beer. The van of course goes off the road, and when the paramedics ask him what happened, he said he thought he had auto-pilot.

> [1993] An old china man was driving along in his motor home. He turned on his ‘cruise control’. Apparently misunderstanding the function of ‘cruise control’, he then went into the back of the motor home. The motor home drove off the road and crashed. Apparently he did not realize that ‘cruise control’ is not ‘autopilot.’

https://www.snopes.com/fact-check/cruise-uncontrol/


And don't forget this famous example:

> [1996] https://www.youtube.com/watch?v=29ToNp1MY3c&t=33s


Likely did contribute to some accidents. Even simple cruise control can catch you out if you're not really paying attention -- e.g. when coming up on slower traffic.

But nobody in 1958 would have the slightest expectation that the car would actually drive itself. Tesla has been using the terms "self driving" and "autopilot" together since they started marketing the car.


This is a good example of a case where engineering from first principles, especially around user experience for driving, can lead to making the same mistakes again (e.g., autopilot naming, and potentially issues related to consistent braking performance found in the Consumer Reports review.)


You're getting downvoted, but I agree.

A commercial pilot doesn't flip on autopilot and start browsing the internet on a phone or take a nap. And that's at 30k feet where there's little traffic to worry about.

It seems like a misunderstanding of what the technology does, by consumers and the media. I'm no expert on Tesla's marketing history, but I'm of the opinion that these crashes are due to drivers misunderstanding what the system does and what its capabilities are.


I don't totally disagree with you, but I will point out that planes land every day in white-out conditions with no visibility thanks to CAT III landing systems. The pilot can't see anything and thus can't really help even if they are awake. They're mostly just there to talk to air traffic control for these kinds of landings. One pilot told me the Airbus planes no longer allow manual control.


According to Wikipedia, you still need to have a minimum runway visibility unless you are landing CAT-IIIc, which is not used commercially (yet).

CAT-III landings are automatic, but the pilot needs some visibility to decide if the plane is going to touch down in the landing zone. If they are totally blind they will divert.

https://en.wikipedia.org/wiki/Instrument_landing_system#ILS_...


From the "Special CAT II and CAT III operations" you linked:

Some commercial aircraft are equipped with automatic landing systems that allow the aircraft to land without transitioning from instruments to visual conditions for a normal landing. Such autoland operations require specialized equipment, procedures and training, and involve the aircraft, airport, and the crew. Autoland is the only way some major airports such as Paris–Charles de Gaulle Airport remain operational every day of the year. Some modern aircraft are equipped with Enhanced flight vision systems based on infrared sensors, that provide a day-like visual environment and allow operations in conditions and at airports that would otherwise not be suitable for a landing. Commercial aircraft also frequently use such equipment for takeoffs when takeoff minima are not met.


I get your argument, but pilots do sleep all the time. Usually they take turns, but there have been multiple incidents that we know about where both pilots have fallen asleep at the same time.. we only ever hear about them if the pilot overshoots the airport, or in one case, the pilots self-reported it to encourage others to talk about the issue.

It's extremely likely it happens more often than we think and that most pilots simply just aren't going to voluntarily give up that information.


It doesn’t matter what the term ACTUALLY means, what matters is what the general public THINKS the term means.

And that seems to be “I don’t have to care while this is on“.

And that’s a big problem.


And a big part of what causes the general public to think that is Tesla's egregiously misleading marketing.

See Tesla's page on "autopilot" here: https://www.tesla.com/autopilot

The first (and only) thing you see is "Full Self Driving Hardware on All Cars", and a blurb about how all Teslas have the ability to be self-driven. It does not mention here, nor anywhere else on the page, that the current implementation of Autopilot in Tesla cars is not fully self-driving. The video just underneath the headline shows a car using self-driving capabilities, not Autopilot.

The entire rest of the page is like this as well. It talks at length about how Tesla's have the capability to drive themselves, and hardly ever mentions that self-driving and "autopilot" are not the same thing. There is only one single sentence, buried in the middle of the page, that mentions that drivers must remain alert and ready to take control.

And again, this is on Tesla's home page for their autopilot function. This is the first page that comes up when you Google "Tesla Autopilot". It's really shameful for Tesla to have marketing materials like this.


> It seems like a misunderstanding of what the technology does, by consumers and the media. I'm no expert on Tesla's marketing history, but I'm of the opinion that these crashes are due to drivers misunderstanding that the system does and what its capabilities are.

Perhaps the name "autopilot" has something to do with that misunderstanding.


Eh. This is reminding me of a law school friend bragging about how great lawyers are, giving the example of suing a firearms manufacturer for a 'faulty' safety. They'd done something insane, like pointing a loaded gun at a friend and pulling the trigger, thinking "haha the safety's on!" But lo and behold, the safety failed and someone was dead.

She was adamant that a safety was infallible, and any damage from failure was the manufacturer's responsibility. As someone who grew up with guns, I realized that only an idiot would put someone's life in the hands of a 'safety' mechanism.

If people have dumb, uninformed ideas about what words mean in context, they should probably educate themselves. Based on videos and articles I've seen, people are doing extremely stupid things that are clearly warned against by Tesla, like driving down a winding country road, or assuming it's going to somehow know about road work.


Then it's incumbent on car manufacturers who sell their cars with 'auto-pilot' as a marketing term to educate the public on what that term-of-art means. This might cost some money to do. A line in the manual doesn't count.


How much training does the average pilot get before they are in a situation where they can activate autopilot?


If someone misunderstands you, they may have a problem.

If everyone misunderstands you, you may have a problem.


If almost everyone understands you but a tiny vocal minority pretends they didn't, though...


> A commercial pilot doesn't flip on autopilot and start browsing the internet on a phone or take a nap. And that's at 30k feet where there's little traffic to worry about.

That's rich, you may want to actually do some research into what commercial pilots do when flying before making such claims.

e.g. http://www.bbc.com/news/uk-24296544


> A commercial pilot doesn't flip on autopilot and start browsing the internet on a phone or take a nap

Commercial pilots are trained professionals; Tesla drivers are not.


> where there's little traffic to worry about

I don't think you are aware of how much traffic there really is on the "air roads"...


Well, that was condescending for no reason.

I drove around five miles to my friend's house and back today. I'd estimate that I was in near proximity with potential for collision with around 1,000 cars on that trip.

During the portion of a flight where traditional autopilot is used, how many planes are on a potential collision course? They're traveling at specified altitudes, along specified paths through 3D space. So tell me, how many planes is a jet airliner likely to collide with at cruising altitude when operating on autopilot?


Autopilot implies a Level 5 autonomous vehicle, while Tesla has glorified assistance capabilities, at Level 1.

Idiots are using it as if Tesla is Level 5.


No, idiots are naming it as if it's level 5.


"Tesla on cruise control crashes into parked Laguna Beach police cruiser"

^ I wouldn't have bothered to read the article. BUT it is the exact same thing. [Just a system to help you drive]


> The manufacturer puts a warning label, but the usage model/UI is clearly flawed.

Not just the model/UI but the actual sensor suite. Remember, the Tesla that resulted in a decapitation had over-saturated cameras that couldn't see a tractor trailer. Of course, that's where the UI comes into play, because if the driver hadn't been watching Harry Potter, he would have seen the tractor trailer as well.


> because if the driver hadn't been watching Harry Potter

This was proven to not be true: https://jalopnik.com/the-ntsb-to-partially-blame-teslas-auto...


That's the thing: how do you make a UI that unambiguously conveys that the car hasn't seen an object in front of it, so that the driver can react to it?

Maybe with AR the car could highlight the objects detected so that the driver can brake if the car doesn't see an object it knows about, but a head mounted display seems pretty clunky.


The problem is that the "paying attention" part of driving is actually one of the most mentally taxing for the driver and it seems the pay attention 100% of the time is basically the same as saying 5% of the time an event might occurs that requires your immediate action. The problem being that if you have to jump into action to avoid something catastrophic 'sometimes' you basically have to be paying full attention all the time in order to collect enough contextual information about the incident. It's the same as watching someone toss a baseball to you and catching it vs your friend yelling "Headts up!" as a baseball is already flying through the air at your head.


Japanese train drivers are famous for pointing at things to always keep their attention. Maybe a similar system can be used for cars: for example, the car could narrate objects it sees and ask the driver to point at them with a hand, then check with a depth sensor and gesture recognition that the driver pointed correctly, and slow down and stop if they pointed wrong.
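
A toy sketch of what that confirmation loop could look like; the object list, the pointing angle from gesture recognition, and the tolerance are all made-up placeholders, not a real perception API:

    import math

    ANGLE_TOLERANCE_DEG = 15  # hypothetical threshold for a "correct" point

    def bearing_to(obj, driver_pos=(0.0, 0.0)):
        """Bearing from the driver's seat to a detected object, in degrees."""
        dx, dy = obj["x"] - driver_pos[0], obj["y"] - driver_pos[1]
        return math.degrees(math.atan2(dy, dx))

    def attention_check(detected_objects, pointing_direction_deg):
        """Ask the driver to point at a named object; slow down on a miss.

        detected_objects: list of {"label", "x", "y"} from the car's perception
        pointing_direction_deg: where gesture recognition says the driver pointed
        """
        if not detected_objects:
            return "nothing to confirm"
        target = detected_objects[0]          # car narrates: "point at the truck"
        error = abs(bearing_to(target) - pointing_direction_deg) % 360
        error = min(error, 360 - error)
        if error <= ANGLE_TOLERANCE_DEG:
            return "driver attentive, continue"
        return "driver missed the target, slow down and stop"

    # Toy usage: a truck ahead and slightly left, driver points almost straight ahead.
    print(attention_check([{"label": "truck", "x": 30.0, "y": 5.0}], 8.0))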


But that has the problem of "computer recognizes 'car', asks fleshy meat-bag to point at 'car', sees meat-bag point at 'car'".

It doesn't learn anything about the environment from that, so to me it doesn't change anything.


So wait, rather than just pay attention to the road when driving my old beater, I could constantly be pointing at things and struggling with poor voice recognition? Somehow this sounds like more work than just driving the car myself.


Well, how does an airplane do it? A loud robot voice that goes “PULL... UP...”


If a loud robot voice goes "PULL UP", airplane pilots still have much more than a couple seconds to do something about it.


I don't think people realize that "Tesla Autopilot" will never, ever, ever stop for an object in front of it. Tesla has said repeatedly it isn't designed to do so. If you run a Tesla toward a brick wall with Autopilot on, it will crash into it 100 out of 100 times. It will hit ANYTHING in the lane ahead of you. It isn't designed to stop in such cases.

So there's no need for a UI. If there's something ahead of you, you must stop or you will hit it full speed without braking.


They definitely can and do stop. There are plenty of videos of accidents avoided because of stopping, and this is a rather common feature on most mid to high-end cars now, commonly called a collision avoidance system and grouped under the umbrella of pre-crash safety systems.

The avoidance feature is designed to stop the car in the event of an unavoidable obstacle directly ahead. It doesn't guarantee a stop but it will still apply the brakes faster and more fully than a human can react and thus shed more speed from the impact. Pre-crash safety will also tighten the seatbelts, move seat positions, close windows, and inflate auxiliary airbags in preparation, and these are all well-tested and proven features.


What? That's not true. It can and does brake for objects in front of you that it senses. Its ability to sense objects is sadly very incomplete, but it does (quite often, but not often enough) successfully sense objects and slow or stop for them.

See page 93 of this owner's manual, for example: https://www.tesla.com/sites/default/files/model_s_owners_man...

(What it won't do is steer around obstructions in a lane, either full or partial.)


What is the difference between “objects in front of you that it senses” and “obstructions in a lane”?


Well, I guess the second doesn't presuppose that it senses them. But I think you concentrated on the wrong thing:

Autopilot is not designed to steer around obstructions.

Autopilot is designed to brake for obstructions.

It sometimes fails to brake for obstructions. But it sometimes succeeds in braking for obstructions. It never steers around obstructions, even if it senses them perfectly.


Emergency braking is designed to lessen the impact, not prevent a crash. See the manual. This is mostly for avoiding crashes into a car that suddenly stops in front of you, i.e. in lower-speed city traffic and jams. For this, detection within a few meters is all you need - ultrasound sensors might do. If the velocity delta is 120 km/h, you’re not going to meaningfully slow down when already this close.


I think the idea is that the UI needs to force you to always pay attention.


The NTSB report said that the driver was not watching any movie.


> Of course, that's where the UI comes into play, because if the driver hadn't been watching Harry Potter, he would have seen the tractor trailer as well.

Reports at the time said he was watching it on a portable DVD player, not the Tesla screen. Not sure about the NTSB report someone just cited though. If that report really said he wasn't watching anything, I wonder how that rumor started?


I test drove a Volvo S90 recently. The Pilot Assist feature almost feels like a gimmick due to the attention required. If you don't give steering input in five seconds, Pilot Assist disables itself. It basically requires enough of your attention that there's little difference between using it and not using it.
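
That behavior is basically a dead-man watchdog on steering input. A minimal sketch of the idea, where the five-second timeout is what I observed but the torque threshold and signal names are assumptions rather than Volvo's actual implementation:

    import time

    HANDS_OFF_TIMEOUT_S = 5.0   # assumed: disengage after 5 s without steering input
    TORQUE_THRESHOLD_NM = 0.5   # assumed: minimum torque that counts as "hands on"

    class PilotAssistWatchdog:
        def __init__(self):
            self.last_input_time = time.monotonic()
            self.engaged = True

        def on_steering_sample(self, torque_nm: float) -> None:
            """Called every control cycle with the measured steering torque."""
            now = time.monotonic()
            if abs(torque_nm) >= TORQUE_THRESHOLD_NM:
                self.last_input_time = now   # driver is clearly steering
            elif now - self.last_input_time > HANDS_OFF_TIMEOUT_S:
                self.engaged = False         # hand control back to the driver
                # a real system would also warn audibly/visually before this point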

It seems pretty clear to me that Volvo only has Pilot Assist in the first place as a method to gather data via telemetry for the next generation of self-driving vehicles, because it otherwise doesn't seem to add anything useful to the driving experience. Consequently, I haven't heard of any notable incidents where Pilot Assist caused an accident.

I haven't test driven Enhanced Autopilot but I see people on YouTube taking their hands off the wheel completely for much longer than the Volvo let me do. Tesla seems like they're pushing something half-baked out and representing it as something it isn't.


I’ve found lane keeping assistants to be fantastic, but you have to think of them as ASSISTING.

It’s not going to drive for you. But it does an amazing job of basically canceling out crosswinds when you find yourself driving down the highway. It makes a huge difference, it makes it much less tiring/frustrating.

Going slow on a windless day? Straight street? Not that useful.

For helping with very gentle curves or very annoying crosswinds it’s quite nice.


And for basic station keeping in relatively predictable stop and go traffic. It relieves you of some unnecessary micromanagement but it's not autonomous.


> Consequently, I haven't heard of any notable incidents where Pilot Assist caused an accident.

It may be, but is not necessarily “consequently”. That Volvo is not cool to write about plays a role. You don’t read about boring daily crashes either.


It’s pure arrogance. In the Model 3, when you are in autopilot, they have video games and memes to play/watch on the giant screen. How does this pass regulation and get allowed to be sold? Building products to actively distract the operator of a 2-ton vehicle moving 60 mph?


This is actually true. It baffled me too. For proof, check this review video by Top Gear. They demo the Easter eggs that are only available in Autopilot mode: https://youtu.be/1GrNv3ow9H8


Calling it video games and memes is extremely disingenuous. It replaces icons.


What are you talking about? There are no games or memes.



Suddenly we’re trusting Elon to deliver on his tweets?


Here are the easter eggs in action, on a production vehicle: https://youtu.be/1GrNv3ow9H8?t=640

Weird, the tweet was dated 2018-05-22 but the Youtube video shows easter eggs already deployed by 2018-05-23. I know all Tesla vehicles have automatic over-the-air updates but that's still surprisingly quick. Must be Musk playing around with his time machine again.


Yes, there are easter eggs. I was replying to a comment that said “video games.”


The screen displays a Mars rover. You turn the steering wheel to control where that Mars rover goes. I'd consider that a video game.

I guess you don't consider that a video game, which is a perfectly valid opinion. But I just wish you had made your first comment "I don't consider Tesla's interactive easter eggs to be video games" instead. That way you'd clearly communicate your intent, and this whole needless bikeshed could have been avoided in the first place.


"You turn the steering wheel to control where that Mars rover goes."

No you don't. The Mars map is exactly like the Earth map, with a Mars texture instead of the map, and a rover instead of an arrow. Technically you control where the rover goes by using the steering wheel, because the steering wheel controls where your car goes, and the rover's movements come from your car's GPS. But it's not the sort of video game you describe. There's no way it could possibly work as you describe. The steering wheel is mechanically connected to the front wheels and still steers the car even when Autopilot is engaged, so it could not possibly be used as a video game control.

The problem isn't my failure to disclose my opinion of the easter eggs, it's your misunderstanding of what they actually are.


All the same I'd rather the developers at Tesla were working on improving the autopilot functionality instead of developing easter eggs.


That's fair, but a radically different statement from "when you are in autopilot they have video games and memes to play/watch on the giant screen."


Nothing about that thread suggests those are available while in autopilot, and they're not.


They are, you can see the driver demoing it here: https://youtu.be/1GrNv3ow9H8


Hmm, yup, that's way more eye candy than I thought was available. Agree that it should be very difficult to get to that stuff during auto-pilot.


It's time to hold all of these tech CEOs accountable for their wild promises and myth-making at the expense of the public good.


Nope, I am still blaming government (and I am in no way a Musk fanboy)

Self-driving tech has been there for decades. Convoy driving of trucks (one human-driven truck in front, several trucks following its lead) on highways was developed in the 90s.

On well-maintained infrastructures, automated driving is a problem that was solved before deep learning. The fact that we don't have an automatic lane-driving mode for highways as a standard feature in new cars is a political failure.

In 1997 a huge and successful demonstrator for an automated highway system was built. And legislators did not follow up by allowing onto the road the systems they had spent more than 10 billion to develop. [1]

In my opinion, a technology should be considered ready/safe enough if it is x times safer than the average driver. Zero accidents is a crazy requirement. Ten times safer than a human is much more reasonable, but no politician will accept that.
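
To make the "x times safer" criterion concrete, a back-of-the-envelope check could look like this; the baseline of roughly one fatality per 85 million US vehicle-miles is approximate, and the observed figures are placeholders, not real fleet data:

    # Back-of-the-envelope: does an automated system clear an "x times safer" bar?
    # Baseline is approximate (~1 fatality per 85 million US vehicle-miles);
    # the observed numbers below are placeholders, not real fleet data.
    BASELINE_MILES_PER_FATALITY = 85e6
    SAFETY_FACTOR = 10                     # "10 times safer than the average driver"

    def clears_bar(observed_miles: float, observed_fatalities: int) -> bool:
        required = SAFETY_FACTOR * BASELINE_MILES_PER_FATALITY   # 850M miles/fatality
        if observed_fatalities == 0:
            # With zero events you can only bound the rate; need enough exposure.
            return observed_miles >= required
        return observed_miles / observed_fatalities >= required

    print(clears_bar(observed_miles=320e6, observed_fatalities=1))    # False
    print(clears_bar(observed_miles=2_000e6, observed_fatalities=2))  # True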

And obviously it is easier to develop a safe system on some well known and well maintained road segments (e.g. highways) than on 100% of the roads and dirt roads on the North American continent.

So yes, the only way for an economic actor to get in there is to bend the existing rules, as rules for automated vehicles were never made. You can't sell a car saying "no need to look at the road, I'll drive for you," so you put up a disclaimer while making sure people understand that you actually made a self-driving car.

You can't sell a self-driving car and argue that your accident rate is acceptable, because the laws for such a notion are not there. A single accident in your self-driving car (which may be 10 times safer than a manual car) and you will be considered responsible.

[1] https://www.smithsonianmag.com/history/the-national-automate...


> Blame Elon Musk. I'm serious.

If you want to fix the problem, just charge him with attempted murder of a cop.


Entities like Elon Musk are a symptom of the worsening of the disease of stupidity that has gotten hold of the global human population....

It is not really surprising when our whole system of existence has the stupidity and weakness of its inhabitants at its foundation. Point is, this will only get worse. Because any "progress" along the current lines asks the common man to be more stupid...


> except the damn technology isn't ready/safe enough for what people are clearly using it for

Outrageously false. These incidents are so rare that they have whole news stories written about every single one. How many crashes were there in Laguna Beach yesterday?

Tesla accident rates in aggregate are quite good. Under autopilot the data is noisy, but the rate is still below that of an average car (not as low as luxury vehicles in the same price range, though).

Claiming that it "isn't ready" is innumerate nonsense. It could clearly be better, sure. So could you. And no one is demanding to revoke your license, because you aren't a scary AI.


The problem is that the argument being presented: "self driving is safer" is defeated by very obvious failures (driving into pillars, police cars, and people) that are /exactly/ the kinds of things that a computer should be able to handle. Human caused accidents in these situations are almost invariably due to inattentiveness that should not apply to any automated system.

So we get to the problem: if it can't handle the specific obvious cases where it should be absolutely superior (I would argue an automated system should never have been able to cause either the police cruiser crash or the 280 crash under any circumstances; those were both in perfect conditions - clear weather, etc.), how can we trust it to handle any of the complex situations that a self driving car necessarily must handle?

The Tesla system solves this by simply saying "the driver is still driving, we're merely assisting", but that assistance leads to inattentiveness - that's human nature, you have to design around it - which would be fine if the systems were able to handle that, but they clearly are not.

I would argue that given the configuration of the Tesla system they are less safe, not necessarily due to net accidents, but rather because of the failure modes of the accidents. Bumper to bumper accidents on a freeway happen all the time, even driving perfectly you can end up in them because not everyone else is driving perfectly. The problem with things like the Tesla system is that it's generally fine, except in those cases where it drives at speed into objects because it gets confused.

If your vision or mental state leaves you unable to safely account for large obstructions you aren't allowed to drive. Because it isn't safe.


How many Teslas are there? How many crash without autopilot, and how many crash with autopilot? How does this compare with other low-ownership-rate cars in the same class?

Seems to me that it's premature to be making bold claims like this. Of cars costing over $100k, what's the accident rate?


There were numbers thrown around in the last thread on this; I gave you what I remember of them. The answer is "Teslas are safe enough". Human drivers with the same accident rate wouldn't raise an eyebrow.

And the "bold claims" bit is exactly backward. You are the one claiming that the technology "isn't ready" based on sensationalist local news story and not actual numbers.


By definition and design, Autopilot is used only in the safest, most predictable driving situations. So it really makes no sense when you start comparing it with statistics on human drivers in general. My first reaction was that it would be better to compare with interstate driving, but even that is still an unfair comparison, because humans are going to have a disproportionate number of fatalities in precisely the situations in which most drivers with Autopilot turn it off.


«the damn technology isn't ready/safe enough»

What data are you basing this statement on? Or is this just your vague gut feeling based on reading about a few accidents in the news? I thought statistical data showed otherwise, that the technology was safe enough (ie. better than humans).

Edit: removed "debunked stats".


Reading that blog entry makes me more distrustful of Tesla than I was, because I don't think it's appropriate to compare Autopilot miles to generic human driver miles. Autopilot is not suitable for all types of driving. The most readily accessible statistics that would be more comparable would be fatalities on interstates.

US interstate deaths are about 1 per 183 million miles, so while Autopilot may be safer, it's not as clear depending on the uncertainty of the 320 million mile figure. Also, German drivers on the autobahn have roughly half the fatality rate, which is better than the claim for Autopilot. All in all, I think an honest assessment would be that it's probably as good as an average driver in good conditions who isn't drunk. But rhetoric about saving 900K lives raises my hackles - I do not want to trust the source of this sort of PR with my life.


These numbers from Tesla have been debunked several times (see every older discussion on this on HN) because of the flawed methodology (mostly comparing apples to oranges).


No offense but this is exactly the attitude that inhibits progress and led the auto industry to languish in mediocrity for years. If you wait until the technology would prevent all accidents, you'd be waiting forever.


Your attitude is inhibiting progress. Everybody knows the technology is not ready and everyone is fine with it and willing to face the consequences of refining it. Everyone except Tesla that is, which insists their self-driving tech is so advanced they even call it autopilot encouraging people to treat it as such and refuse to admit any fault in this mess.


> Everybody knows the technology is not ready

And by what standards are you gauging it's not ready? My point was that if you wait until it becomes "safe enough", you'll be waiting forever, because Tesla would be forever at the mercy of whoever's rules govern what "safe" means. Roads are hazardous places and unfortunately accidents are going to happen.

What I do know is that the Autopilot system will never be distracted, will never fall asleep, will never drive intoxicated or recklessly, will never disobey traffic laws like humans do. Tesla also has a financial incentive to improve their technology to prevent this from happening. Unlike humans who, for the most part, have far less incentive to drive more carefully.


I think you're setting up a dichotomy here that doesn't exist. It's not a choice between waiting forever or disregarding safety. Look at how Waymo is approaching the problem, for example.


It needs to be as good as an average human. Do we have enough stats to compare yet?


As good as a human at what? It's 100% better at not falling asleep behind the wheel or driving while under the influence which statistically are the most common cause of traffic accidents.


They are falling way behind in blasting errant pedestrians. Stories from the last 24hrs.

https://www.google.com/search?q=pedestrian+killed&prmd=nvi&s...


And you would be alive.


tell this to the families of the people who have died


We don't yet have evidence that Tesla's using Autopilot are any less safe than human drivers. We do have plenty of examples of traditional auto makers negligently ignoring safety regulations, manufacturing faulty vehicles and also causing fatalities. But somehow we don't hear the same pleas for sympathy for those souls.


We don't yet have any evidence that Teslas using Autopilot are any safer than human drivers. We do have plenty of evidence suggesting that humans are unable to pay proper attention when asked to do tasks like driving a level 3 automation of cars. There is plenty of reason to believe that this going to cause an increase in accidents, particularly of the fatal variety (say, running over a pedestrian at lethal speed).


The NHTSA found a 40% crash rate reduction from Tesla's autopilot. So how about you tell all the people whose lives have been saved that you wish they hadn't been, because you're an emotional reactionary.


I believe many people have found issues with the data used and how they were presented. It's also worth comparing the common case of accidents - bumper to bumper, etc. on a freeway, which I'd expect all self-driving systems to instantly reduce just by increasing the distance to the next car - to the number of serious crashes - e.g. driving into parked vehicles, concrete pillars - those have entirely different failure modes.

That said, the reality is that the types of self driving systems currently deployed - Tesla or whomever - are all fundamentally flawed because they do two things:

* Require attention from the human driver at all times
* Cause driver inattention

The first is fairly clear: literally every one of these products is prefaced with "the driver is still in control of the vehicle and must remain attentive". The latter is clearly demonstrated by all of the different mechanisms that the manufacturers are deploying to deal with the fact that if you tell a human to pay attention to a specific task, but they aren't actually doing that task, they will not pay attention. Every system tried just increases the time it takes a human driver to adapt to unconsciously managing the various "pay attention" alarms.

The solution is fully self driving cars. We don't have that yet. I fully expect them to be safer once we get there.


I don't disagree that levels 2/3 are a danger zone in self-driving technology, but I feel the need to point out that there are plenty of people who are disabled or suffer chronic injuries resulting from being rear-ended. Stop and go traffic can be plenty dangerous as well.


Just read their autopilot page and tell me it’s not intentionally misleading to the average person:

https://www.tesla.com/autopilot


To be fair, the system really isn’t designed to encourage the driver to not pay attention. Quite the opposite.


> I really wish all the car manufacturers would stop with this bullshit "you still have to be driving it" self driving nonsense.

To align with your statement, I can state: "I really wish all people would stop this bullshit 'autopilot means it can drive itself and I don't have to do anything' nonsense."

I feel this is quite a critical analysis of an ambiguous product title; [1]'Tesla Autopilot (& Enhanced Autopilot)'.

I read the title; "Tesla in Autopilot mode crashes into parked Laguna Beach police cruiser"... and wondered how this article is even news..

Where, besides future plans for their vehicle product (which, as suggested by Wikipedia, may be an independent product itself and not an extension of 'Tesla Autopilot / Enhanced Autopilot'), does Tesla's product in question indicate it's self-driving? Autonomous car? I do admit, the 'summon' feature of their vehicles makes this slightly confusing...

Additionally; what is the criteria to be 'self driving'? I assume; you are talking about 'auto pilot'..?

If so; this over-simplifies the entire scenario.

The very definition, and scope, of the title 'autopilot'/ 'automatic pilot' is far from specific. In fact; most of them are applicable only to [2]aerospace.

> Claiming that the driver is still responsible for driving the car, while using a system designed to encourage the "driver" to not pay attention is a dumb idea, and all I can see is this kind of thing leading to regulations the delay actual self driving cars.

This part of your post, I actually agree with. Having said that; I believe this is more associated with an applied opinion of what a 'self driving' car is, and what we are seeing with these vehicles - and not with any lack of technology advancements.

But again; I urge caution in applying your, or anyone else', opinion of what a 'self driving' car is.

FINALLY: I really wish we could start taking some responsibility of what we agree to. If the driver, even once, must agree to a clear scope of requirements to use the product - then blatantly disregards this; how, or why, is this even news?

We all accept terms and conditions of use, and privacy policies - with almost every product we use - and don't expect to be shown them every time we use that same product. I'm one of the few that read these documents - and more often than not; refuse to use the product. I made the choice not to agree.

There are other comments on this article that clearly indicate they believe a 'once off' popup is questionable at best - but I really don't understand this. Without a doubt; having to push a button to enable this Tesla/Enhanced 'Autopilot' feature, then scroll through a dialog to the end (Yes, okay, exaggerated) - and then click 'ok' would be a huge hazard.

I don't believe the user should be prompted with the same notice every time they use a product.

However; I do not necessarily believe prompted once is the right answer.

What if someone else uses the vehicle? Or you sell it?

I do not pretend to have the answers...

--

Disclaimer; I know Wikipedia is not gospel.

[1] https://en.wikipedia.org/wiki/Tesla_Autopilot
[1.1] https://www.tesla.com/blog/dual-motor-model-s-and-autopilot
[1.2] https://www.tesla.com/autopilot
[2] https://en.wikipedia.org/wiki/Autopilot


We have seen time and time again that if corporations can wash their hands of something, they do so. For the past few decades, and perhaps for much longer, the operating principle of corporations has been "if it's not illegal, it's moral."

As an example, rationally speaking credit bureaus and financial institutions are 100% responsible if they give someone a loan or open an account for them under your name. But they were allowed by the legal system to put the onus on the victims of the fraud to prove they are victims not perpetrators of it, and they called it "identity theft" instead of "bank fraud," and now we have victims who never did anything wrong or could have done anything differently battling CRAs and FIs for years to get out of a situation they should have never been put into in the first place [0][1].

If the laws allow companies to sell Level 5 autonomous vehicle while keeping the passenger (because it's not really a driver anymore) responsible for all legal matters, they would do so, and they would lobby against any law which puts the burden correctly where it belongs. And we would be forced to put our financial, mental, and legal well-being on the line just to get from point A to point B because it will be impossible to avoid autonomous tech then like it is impossible to avoid credit bureaus now.

[0] Like this case: http://www.cbc.ca/news/canada/manitoba/credit-report-error-f...

[1] Obligatory Mitchell and Webb: https://www.youtube.com/watch?v=-c57WKxeELY


> If the laws allow companies to sell Level 5 autonomous vehicle while keeping the passenger (because it's not really a driver anymore) responsible

Well, the law is allowing Tesla to sell a Level 1 vehicle as Level 5. They have to start applying false advertisement laws first before they can think about other laws.


So then we should ban cruise control too? No, we should educate people how to use new technologies properly.


If Tesla called Autopilot "cruise control with lane-keep assist", I think people would be a lot less likely to abuse it. By calling it Autopilot (and even letting people pay for "full self driving capability" in advance) is enticing people to use it dangerously.


First of all, they do call it that. There is no “turning on AutoPilot”. You turn on “traffic aware cruise control” and “Autosteer” separately. You can turn on TACC and not autosteer, or turn them both on, or drive yourself and switch between these modes very easily. Collectively the self driving features come in an “Enhanced Autopilot” package which you can purchase to have those individual features enabled.

Secondly, I have a Model 3. The idea that the name is misleading people is the dumbest thing I've ever heard. Even if people were confused, their confusion would be cleared up very quickly within a few days of use. When you pick up the car they talk to you extensively about what it can and can't do, and there are many warnings in the car that explain it. People who make this argument probably haven't tried using AutoPilot.

Put this in context: you can't just release a perfect system on day 1. In order to get the software to that level, earlier versions need to be seeded into the public. I am 100% convinced the software is a huge safety and convenience win as it is.


Cruise control means just controlling cruising speed, that's all: a constant 55 mph or whatever speed you desire.

Auto pilot literally means vehicle operation without human participation. The name autopilot should be banned.


“Come check out our really awesome manual auto-pilot! It’s fully manual automatically!”


Assuming that every major AP collision makes the news, it is pretty amusing that the latest non-fatal ones have all coincidentally involved emergency vehicles. I imagine it must be related to how such vehicles have the right to park just about anywhere, including on roadways where drivers do not expect parked vehicles.


The photo looks like it was parked in a marked parking spot (those white crossed lines indicating each slot along the curb).


Half on the sidewalk, half on the road.


After being crashed into. Why would they have driven up onto the curb when there's so much empty space and all the other cars in the photo are parked fully in the street? Not to mention, being further in should make it easier to avoid, not harder.


One way to avoid these types of crashes IMO is anomaly detection. It's quite simple to do anomaly detection in pixels using modern deep pixel prediction nets like PredNet. In my experiments you get a few seconds lead time on something like a car cutting you off (the car starts to head out of the lane before actually crossing it for example). This allows alerting the driver, and with a full windshield HUD you could even highlight the anomalous pixels on the windshield. The nice thing about this is that it can be trained in an unsupervised manner on all the available data. Some important details are to find anomalies in object bounding boxes, using something like Tensorflow's object detection pretrained net. Otherwise buildings with lots of striations would light up the anomaly detector. Also, you should detect anomalies in a human colorspace like CIELAB so that white cars (#fff) are not artificially weighted as more anomalous.

Finally, you could use this as input to a planner like Model Predictive Control where a higher cost is incurred for approaching anomalous objects.
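A minimal sketch of the pixel-level part (assuming a next-frame prediction net like PredNet is already trained; the threshold, helper names, and box format are just placeholders, not any production API):

    import numpy as np
    from skimage import color  # scikit-image, for RGB -> CIELAB conversion

    def anomaly_map(predicted_rgb, actual_rgb, threshold=10.0):
        """Per-pixel anomaly score as next-frame prediction error in CIELAB.
        predicted_rgb / actual_rgb: HxWx3 float arrays in [0, 1]; predicted_rgb
        comes from whatever prediction net is used (PredNet in my experiments).
        Working in Lab keeps pure-white pixels (#fff) from being weighted as
        extra anomalous the way they are in raw RGB."""
        pred_lab = color.rgb2lab(predicted_rgb)
        actual_lab = color.rgb2lab(actual_rgb)
        error = np.linalg.norm(actual_lab - pred_lab, axis=-1)  # rough delta-E
        return error, error > threshold  # score map + binary anomaly mask

    def box_anomaly_scores(error_map, boxes):
        """Mean anomaly score inside each detected object's bounding box, so
        striated buildings and other busy backgrounds don't light everything up.
        boxes: (y0, x0, y1, x1) tuples from an off-the-shelf object detector."""
        return [float(error_map[y0:y1, x0:x1].mean()) for (y0, x0, y1, x1) in boxes]

The box-level scores are what you would feed to the alerting/HUD layer or to the planner's cost function.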


Great you solved AD with this one simple trick!


Elon Musk has constantly underestimated the difficulty of autonomous driving.

This video (https://youtu.be/wsixsRI-Sz4?t=1h18m28s) shows Elon Musk, two years ago, saying the following:

"I basically consider autonomous driving to be a solved problem".

"A Model S and Model X can drive with greater safety than a person, already. Right now."

"We are less than two years away from complete autonomy".


Totaled? Maybe insurance totaled, but that police cruiser looks relatively intact in the picture. The driver's side door is open and it doesn't look like it was crumpled in any way in that zone. The rear driver's side passenger door looks like it took more damage, and the entire frame of the cruiser still looks like it's in relatively okay condition. The Tesla is in the same shape.

I'm not trying to downplay the impacts of the driver or Autopilot here, but even if a police officer was in the cruiser, it doesn't look like it would have been as catastrophic as the article implies.


From TFA:

> "Tesla has always been clear that Autopilot doesn't make the car impervious to all accidents, and before a driver can use Autopilot, they must accept a dialogue box which states that 'Autopilot is designed for use on highways that have a center divider and clear lane markings,'" a Tesla spokesperson said in an emailed statement.

Does this dialog box appear every time the car is started or is it a one time thing, just like terms of services?


One time thing. The display does tell you to pay attention and keep your hands on the wheel on every activation though.


Lovely... But if it works great 99.9% of the time it may just lull you into a false sense of security. Not saying the driver isn't at fault but this is especially true for Tesla where a firmware update may change autopilot behaviour in ways the driver doesn't expect.


The car should be able to detect whether those 'center dividers and clear lane markings' are visible and, if not, warn the user or shut down.


The information should already be in their mapping systems. It should not offer to activate on unsupported roads. It will offer to activate downtown, with stop signs every couple hundred feet, in the small city I live in. It will offer to activate driving through a residential neighborhood...


The next evolution in clickwrap licensing, this time it's fatal.


If you need to stay alert and have your hands on the wheel, why is the damn thing called "Autopilot"!? This should not be legal.


Los Angeles Times blocked Europe because of GDPR. Here's a link we can read from there https://www.reuters.com/article/us-tesla-autopilot/tesla-hit...


Good reading: https://www.vanityfair.com/news/business/2014/10/air-france-...

Basically this is a well-known issue in aviation. As automation gets better and better, humans rely on it more and more and get less and less able to handle even minute failures in it. Additionally humans are absolutely terrible at being able to be dropped into a complex situation and have to make immediate rational decisions about it - thus the failure mode of "in case of error, tell human to handle it" is a bad idea if time between "tell human about it" and "crash into things" is under ten seconds or so. This has been the subject of many NASA studies and NTSB reports and the above article does a good job presenting this info in a form a layman can understand.

There are currently no known easy solutions, sadly.


I enjoy how "EU" means "outside of us". I'm in South Africa and it's blocked for the eu


Maybe they're using browser timezones


That wouldn't be enough, there is EU territory outside Europe.

(French Guyana, Martinique, Réunion etc.)


2 fire trucks, 1 police cruiser.

Tesla really is sticking it to the man!


Usually other types of vehicles get a handsome ticket when they stop somewhere they should not.


Also Chinese highway sweeper.


They should name the feature smart cruise control. Case closed. It does not imply self-driving capability but rather a more advanced cruise control system, which is basically what it is.

Nobody is going to set a cruise control to a certain speed and look away from the road!!!


The numerous drivers I see texting every day would beg to differ.


Tesla could still salvage this somewhat by loudly rebranding Autopilot to "Cruise Control 2.0" and issuing a serious apology.

But somehow they (or just Elon?) seem unreasonably averse to doing that. The man is losing hero-status by the minute.


“You are about to hit a solid object. Stop.”

Why can’t we solve the most basic use case? We don’t have the sensors and it’s all done with cameras?


It’s a solved issue, but Tesla doesn’t use LIDAR so it will never be solved for them.


But they use Radar and ultrasonic sensors, which should be enough (and even better, since LIDAR doesn't work in fog).


They don't seem to build any 3D scene. Their radar doesn't seem to sweep on any axis, so it is good only for detection in a narrow sector along the road surface. This in particular was stated as the reason behind the case of the driver being decapitated by driving under a semi truck crossing the road. Their ultrasonic sensors, I'd suppose, have similarly fixed directions and probably work over much shorter distances.


Binocular vision can do it just fine under most circumstances. Subaru Outback with Eye Sight is better than AP for this by far.


Why not? Isn't this the solution? I do remember hearing a while back that the cost for a lidar rig was around 5k, but given the fact it would save lives this seems like an obvious move?


Musk has stated publicly multiple times that he believes autonomous driving can be solved without LIDAR and directed the team at Tesla to not use it. https://www.theverge.com/2018/2/7/16988628/elon-musk-lidar-s...


We? We have already solved it. They (Tesla) hasn't.


Too many false positives; sensors would only be usable for cars that moved a few km/h at most. Even if you're driving normally, various other objects appear in your path all the time.


And yet the forward collision warning on a lot of inexpensive cars like Hondas would have warned the driver and then hit the brakes in this situation.

We clearly don't have an epidemic of cars with forward collision warning systems slamming on their brakes for no reason, so those systems must be able to deal with the false positives somehow.


So don’t ship it then? Honestly, it seems like a fairly fundamental feature, doesn’t it?


Sounds like a bad excuse to me


Articles like this should really include incident rates per mile driven for the self-driving vehicle and compare them against human rates. Without that, this feels a lot like an intentional effort to mislead. What matters are the crash rates. E.g., how many humans crashed into an unmoving vehicle yesterday? An immense number, but of course they also drove far more miles. So the question is: exactly how many miles?

The first milestone for vehicular automation is not perfection, but to simply be better, and in the worst case - comparable, to humans. Given the media's tendency to try to make big news out of every single self driving vehicle crash, it seems like we're already approaching (if not passing) this first milestone, as these articles are relatively rare while these vehicles have driven hundreds of millions of miles in autopilot. I can't find the latest number, but back in October 2016 Tesla already had 222 million miles under autopilot. It's safe to assume it's now some substantial multiple of that.

Some time it would be interesting to see media releases around the time we transitioned from horses/carriages to automobiles, to see how the state of the media has changed. Or perhaps it has always been this way. "I will add, that the man who never looks into a newspaper is better informed than he who reads them; inasmuch as he who knows nothing is nearer to truth than he whose mind is filled with falsehoods & errors. - Thomas Jefferson, 1807"


Tesla Autopilot doesn't see stationary objects. It sees moving objects with its radar, and tries to avoid hitting them. It sees lane markings with its camera, and it tries to stay between them. Sometimes lane markings are confusing, worn, or inconsistent. If that happens, your Tesla will smash you at 70mph into a concrete barrier/police car. It will not slow down if it is uncertain about the lane markings, it will not beep if there is a stationary object ahead.


Let's make the roads out of rubber and the wheels out of pavement. I feel like this is the backwards mentality with autonomous vehicles. I wrote a while ago about how they simply cannot solve the transportation problem, at least not in a way that's cheaper than just building mass transit:

https://penguindreams.org/blog/self-driving-cars-will-not-so...

I realize this particular case is different with us talking about assistance/safety features, but I still feel we're going the wrong direction here. These features reduce your awareness. They're like a teenage driver you constantly have to monitor, but worse.

They teach people to be comfortable and less alert with systems that have potentially fatal bugs that we can't yet identify.

We can build mass transit now, at a fraction of the cost of developing the tech for automated vehicles. There are so many corner cases, with highways, city driving conditions that simply cannot pan out for autonomous vehicles without a lot of testing; and a lot of people getting injured and dying.

I wish people would shelve this pipe dream for now. Maybe once we have really solid rail transport again in the US, like we used to decades ago, and rich and poor people can have a means to get to work even in smaller cities, then we can work on frivolous junk that will benefit the few who can afford it. Right now we just run the risk of killing more people who can't.


We've had mass transit for a century. I don't see much harm in trying something new that may save a million lives per year.


But we haven't, not in America. Sure the rest of the world does, but we've lost most of our mass transit over the past century.

Knowing you can get on a train at 2am and make it home does play a significant role in reducing drunk driving, and can benefit people of all income levels.


Another autopilot crash. List so far:

1. Hebei, China: fatal crash. https://jalopnik.com/two-years-on-a-father-is-still-fighting...
2. Florida: fatal crash; Joshua Brown crashes into truck. https://jalopnik.com/tesla-driver-in-fatal-florida-crash-got...
3. Mountain View: fatal Model X crash. https://www.mercurynews.com/2018/03/30/tesla-autopilot-was-o...
4. Reference to an autopilot crash in Hayward. http://abc7news.com/automotive/i-team-exclusive-tesla-crash-...
5. You You Xue crashes in Greece. http://abc7news.com/automotive/millbrae-driver-says-tesla-mo...
6. Utah autopilot crash; Tesla driver injured, fire truck driver injured. https://www.sfgate.com/business/technology/article/Unclear-i...
7. Texas autopilot crash: "Car crashed, then continued to accelerate." https://www.dallasnews.com/business/autos/2016/08/25/dallas-...
8. SF VC autopilot crash; she took over control just before the crash, and Tesla blamed her for being in control when the crash happened. In her words: "It's your fault if you take control, it's your fault when you do not take control." https://www.digitaltrends.com/cars/tesla-autopilot-related-c...
9. Laguna Canyon Road crash into a parked vehicle; driver injured. https://twitter.com/LBPD_PIO_45/status/1001541486146547717
10. Fender-bender crash video on YouTube. https://www.youtube.com/watch?v=qQkx-4pFjus&feature=youtu.be
11. Wisconsin Model X crash. https://www.carcomplaints.com/news/2018/insurance-company-su...
12. Culver City crash into fire truck. http://abc7.com/traffic/2-federal-agencies-investigate-tesla...
13. Autopilot crash in Germany. https://www.reuters.com/article/tesla-germany-crash/tesla-cr...
14. Model X crashes into a semi in California; not many details. https://electrek.co/2017/03/27/tesla-model-x-autopilot-crash...
15. Laguna Canyon Road crash into a semi at the same spot as (9), April 10, 2017. http://ktla.com/2018/05/29/tesla-on-autopilot-crashes-into-l...

There are probably more crashes that have not been reported prominently and not disclosed by Tesla either.


> 8 SF VC autopilot crash, she took over control just before the autopilot crash, Tesla blamed her for being in control when the crash happened. In her words 'its your fault if you take control, its your fault when you do not take control'.

That's right. It's your fault either way, driver, just like it is in airplanes with autopilot.


Perhaps they should require the same level of training for car auto-pilot as for plane auto-pilot


I don’t think airplane autopilots require any specific training. If you’re allowed to fly the plane, you’re allowed to use the autopilot.


If your aircraft is equipped with automation you may be required to demonstrate proficient use of the automation during your check ride. Any piece of equipment installed in the aircraft is fair game for the evaluator during a practical test.


Yes, but as long as you're flying something that doesn't require a type rating, you can take your checkride in a plane without autopilot, pass, then hop into a plane that has one and start using it.


This list is misguided.

How many crashes do we have from people using regular cruise control features? Is the incident rate per mile when using Autopilot from Tesla higher or lower than regular cruise control?

Those are the right questions in my opinion.


Why are you asking me? Ask Tesla! And they won't answer. Don't be a fan boy.


My point was that your list doesn't answer the most important questions and trying to proxy any information from a selected list like that is flawed thinking.


Tesla does not answer the most important questions about their own product.


That's a fair position to hold, I concede that.


Few questions that would be helpful to know the answer to

- accident rate of Tesla drivers that don’t use autopilot

- accident rate of other cars in the same market/category as Tesla’s (are wealthy people better drivers?)

- accident rate of cars with Tesla Autopilot engaged

- accident rate of other cars with similar ‘assistive’ technologies.


Great work.


I can't figure out the geometry of this accident. It looks like the Tesla hit the cruiser near the left rear wheel. And the cruiser was pushed sideways over the curb. But its right rear tire looks to be inflated.

So why would the Tesla have been heading toward the roadside? Was the cruiser perhaps parked in the middle of the roadway?


How does Tesla fail so badly at basic collision avoidance?? Surely the LIDAR is able to detect an imminent collision, but at least in the recent fire truck and lane-split barrier incidents, the cars didn't even slow down?! That seems really fundamental.

Edit: so they don't use lidar... Is this a machine vision failure, then?


My understanding is that they detect moving obstacles much more easily than stationary.

This is because almost everything in the environment is stationary to the car, so it is hard to distinguish between stationary things in your way and stationary things just off to the side.

So if the car in front of you slows down, it's obvious that you'll hit it if you don't slow down.

A car parked partially in your lane might just look like regular noise to the sensors. That is, using the current technology, if they (correctly) classified the parked car as an obstacle then they would (incorrectly) classify so many non-dangerous situations as obstacles so as to make the system infeasible; the car would emergency brake constantly as it drives down the road.

Some would say that until those obstacles can be classified correctly such technology should not be used at all, but I think it's probably doing more good than harm at the moment and hopefully a good solution can be found quickly. Even if they do fix this problem soon, I'm sure we'll hear plenty of stories where something goes wrong anyway, but hopefully they won't seem so obviously avoidable (who hits a parked car!)
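A toy illustration of that filtering tradeoff (the data layout, speeds, and threshold here are all invented for the example, not how any real stack works):

    from dataclasses import dataclass

    @dataclass
    class RadarReturn:
        range_m: float        # distance to the reflection, metres
        closing_speed: float  # m/s relative to our car (positive = approaching)

    def likely_obstacles(returns, ego_speed, tol=1.0):
        """Naive filter: keep only returns whose closing speed differs from our
        own speed, i.e. things that are themselves moving.
        A parked car closes on us at exactly ego_speed -- the same as guard
        rails, signs and bridges -- so this filter throws it away, which is the
        failure mode described above. Real systems layer vision, maps and
        tracking on top, but the underlying ambiguity is the same."""
        return [r for r in returns if abs(r.closing_speed - ego_speed) > tol]

    # Toy scene at 30 m/s (~67 mph): a slowing lead car and a parked cruiser.
    scene = [
        RadarReturn(range_m=60.0, closing_speed=5.0),   # lead car slowing -> kept
        RadarReturn(range_m=80.0, closing_speed=30.0),  # parked cruiser -> discarded
        RadarReturn(range_m=85.0, closing_speed=30.0),  # overhead sign -> discarded
    ]
    print(likely_obstacles(scene, ego_speed=30.0))

Lowering the tolerance just trades missed parked cars for phantom braking on every sign and bridge, which is why the problem is hard with this kind of sensor alone.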


How dumb of an idea is it to attach the camera/lidar/what have you to a 2 axis rail and move it around at a high speed to introduce some parallax movement to the scene?


I assume you could achieve the same thing with less complexity and more cheaply using a fixed system.

Either having more than one sensor or using mirrors or something.

In fact, I'm sure that the camera systems in place are already doing this to some degree. I think it's more that it's not yet good enough at picking out these stationary objects that are in the path of travel.


People have stated in older threads that the lane assist will not brake for stationary objects - if it did, it would brake all the time due to all the false positives. The lane following is pretty impeccable right until the collision for the Chinese accident. https://www.youtube.com/watch?v=fc0yYJ8-Dyo


I really wish everyone had a chance to use the autopilot feature and judge themselves. I use it every single day and trust the machine.

Issue is, with every complex system there are a few ins and outs that users need to know - like lane merge buffer lines, sharp lane merges by another vehicle in front, etc. Within your first few rides (fewer than 10), users can pretty much guess when it works and when it might have issues.

On labeling the feature, it is clearly marked BETA, and the wiggle prompt is shown in pretty much all the confusing cases - so the alert is never missed for me.

My only gripe is with all these incidents, Tesla is forced to update the feature to make it less user friendly or take out completely. I genuinely feel safe and come home every single day with less stress. Elon's tantrums are not helping the situation either.


Blaming the users doesn’t seem like a winning strategy. If the system needs users “guessing” about its behavior, maybe it should be gated behind a test with periodic checkpoints as the software behavior changes.


I have been using AP2 every day for more than a year now as it's very convenient, but it is inherently dangerous as it's addictive and provides a false sense of security, since it works great ~90-95% of the time. I would not be opposed to an auto-steer ban to force Tesla to get their priorities straight about stationary object detection and auto-braking, and to stop scamming people about their self-driving if it's really years away. You can still use adaptive cruise, which eases all the commute hassles yet forces you to be actively engaged.


can i ask why you “trust” the system with your life so much?

there’s not a single piece of software i trust in my daily life. i mean, something as simple as an iPad, phone, computer, tv, etc. hardly goes a day without a glitch, but yet people are ready to trust “self-driving” auto features as if they’ve never experienced a software failure before. and normally, it isn’t a software failure in the strictest sense. it’s a human error embedded in software. so i ask myself, do i trust the however many overworked, sleep deprived, wide-eyed engineers’ abstract and concrete thought now embedded in cars’ active driving systems? the answer is no way.


I liked a comment from the hosts of The Grand Tour (paraphrasing): "When the owners of the car companies that promote these self-driving cars will allow the cars to drive them along the Yungas Road in Bolivia without a steering wheel, then it can be classified as safe."


This is another problem with the “imperfect specialist”. Fortunately and unfortunately, we don’t need to grow our own vegetables anymore. Most of us have adopted the concept of buying vegetables from “specialists” who grow them more efficiently than we can. So we use these specialists and detach ourselves from the growing process altogether. Then we receive a batch of romaine lettuce contaminated with E. coli and die. This doesn’t happen often, but we’ve delegated this task to someone else and inherited the imperfections of their process. I’d argue vegetable growing is pretty damn safe these days. I’m OK with that risk. But I don’t think Tesla is anywhere near worth the delegation of my driving. I’ll keep my brain turned on for now.


Step 1: Stop calling it autopilot.


In the car's defense... I think the police cruiser looked a lot like a highway median.


The fix is to implement self driving bumper cars.


I wish all those advanced assist functions in today's upper middle-class cars would trickle down to cheaper cars faster. That would give us much more safety now.

It seems Tesla's autopilot cannot reliably do much more than conventional lane assist, etc.

Tesla could disable autonomous lane changing and autonomous changing of streets, requiring driver's input for all changes of direction (that are not smoothing out, but staying in lane), and it would be really good.

But Tesla needs the impression of game-changing, not just best-in-class for financial markets reasons.

I believe authorities should order recalls and force Tesla to disable all traces of autonomy.


The thing is, a tesla crashing while running on autopilot is more newsworthy than a tesla crashing while being driven. We should not let anecdotal evidence drive our fears.


I don't see how this comment could possibly be a reply to mine.


Tesla and the driver are at fault here. Tesla for not prohibiting operation on the types of roadways where it is not designed to be used and the driver for using it where it’s not designed to be used.


Why do they not partner with Google? I think they have a much better talent pool in AI to handle this. I hope they are using all the crash data to make it more robust.


Because Google/Waymo is using lidar which adds ~$150K to the cost of the car.


The cost of the Lidar hardware was down to $7,500 well over a year ago and may have come down even more in the intervening time: http://www.businessinsider.com/googles-waymo-reduces-lidar-c...


I thought it was more the fact Elon didn't like the aesthetics of the roof equipment, like he's trying to design an automobile the same way Apple designs a laptop


Pay through the nose to be a billionaire's guinea pig so he can go down in history as a pioneer of robot cars while you go up in flames... HELL NO!!!


All I see is this:

"Unfortunately, our website is currently unavailable in most European countries. We are engaged on the issue and committed to looking at options that support our full range of digital offerings to the EU market. We continue to identify technical compliance solutions that will provide all readers with our award-winning journalism."

Maybe we should develop a habit here on HN of not linking to geo-fenced sites?


"Unfortunately, our website is currently unavailable in most European countries. We are engaged on the issue and committed to looking at options that support our full range of digital offerings to the EU market. We continue to identify technical compliance solutions that will provide all readers with our award-winning journalism."

I can't read the article. Weird. is this because of GDPR?


Yep, it's because of GDPR.


> The Palo Alto-based automaker, led by Elon Musk, has said it repeatedly warns drivers to stay alert, keep their hands on the wheel and maintain control of their vehicle at all times while using the Autopilot system.

That Tesla is taking so long to learn what "human factors" are is painful, especially since HF has been part of other industries for decades.



I never knew about that subreddit, thanks!


My sinister side is tingling...

How convenient for non-autopilot auto manufacturers... to have a competitor out there boasting semi-autopilot capabilities...

I wonder how many of these "autopilot crashes" could be machinations... if not already, then in the future perhaps?


Or, the incredibly more likely and predicted option is that you shouldn't promote a self driving car that requires constant attention from the driver. Comments on every HN thread have literally warned about this for years.


To successfully develop self-driving cars, without spending NASA moon mission like time and money, we need to put a value on a human life. Maybe even different values for passenger, driver, pedestrian, other motorists, policemen, etc. Then the risk, time, and money to spend will be easily defined. If some task will take a software engineer (paid $150k/year) half year to develop but only increase safety enough to save 1 pedestrian per decade, and a pedestrian is valued at $50k, then maybe the dev can work on something else. Numbers are made up just to illustrate the point. But this is how Tesla, or any other self-driving car-maker can quickly bring self-driving tech to the public in a cost-effective manner, and within reasonable time-frames.
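Using the comment's own made-up figures, the back-of-the-envelope version of that tradeoff looks roughly like this (all numbers hypothetical, purely to illustrate the accounting, not a real valuation):

    # Back-of-the-envelope tradeoff with the made-up numbers above
    # (illustrative only; see the reply below for a realistic valuation).
    engineer_salary = 150_000        # $/year
    dev_time_years = 0.5
    lives_saved_per_decade = 1
    value_per_life = 50_000          # the comment's deliberately made-up figure

    cost = engineer_salary * dev_time_years                             # $75,000 one-time
    benefit_per_year = (lives_saved_per_decade / 10) * value_per_life   # $5,000/year
    payback_years = cost / benefit_per_year                             # 15 years

    print(f"cost ${cost:,.0f}, benefit ${benefit_per_year:,.0f}/yr, "
          f"payback {payback_years:.0f} years")
    # With the ~$6M-per-life figure cited in the reply below, the same
    # feature "pays back" in under two months instead.

The conclusion flips entirely depending on the value assigned per life, which is the point of the reply that follows.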


Your guess at numbers, even though made up, has a concerningly low regard for human life.

Since they're required to, the US Department of Transportation puts it at just over $6 million.

Priceless is of course the right answer, but engineering-wise you're in the $5-10 million range.


That disclaimer wears thinner each time someone does something stupid like this. Tesla can keep repeating it until a third party gets killed or maimed, or it can act responsibly.


"Elon Musk, has said it repeatedly warns drivers to stay alert"

It's AutoPilot, why the hell are all those people expecting it to Pilot the car Auto matically? ....


Just like humans are self driving and self crashing the car, "auto-pilot" is doing the same. Machines are too good at learning from humans. /s


> Cota said. > Cota said. > Cota said.

Three times in a row, has this article been written by a robot? It doesn't feel like something a human would write.


They need to rein this in fast before the NHTSA forces their hand. Supposedly they're already on that agency's bad side, and the last thing Tesla needs is a big recall or stop-sale. They will upset some of their fans, but in the end it may be safer to restrict its usage via OTA update.


Yes, Tesla is leveraged such that it is far more vulnerable to a crisis of confidence than any of the traditional manufacturers.


When will the right mirror be replaced with a camera and dedicated screen? Until the 1980s, cars didn't even have a mirror on the right side. Getting rid of the mirror will make cars more aerodynamic. It will also make cars safer: no more knocking over cyclists with the right-side mirror. And it will make cars cheaper to maintain, as mirrors are expensive to maintain.


In the last 24 hours, 100 people died in car crashes - but 2 crashes a year apart and it's "why does this keep happening". I realize the numbers are not comparable - however, 100 people dying a day is seen as an act of God, but in a 100% driverless society, one person dying a week would probably get these cars banned.


I could be wrong but that looks like an autopilot 1 car which is an older version of the tech that they're not really developing anymore. The newer autopilot 2 and 2.5 is what they're going forward with for now. That also doesn't look like a highway and autopilot isn't designed to handle that yet.


Tesla made a huge mistake naming this feature "Autopilot".


ditto


More overpromising and deadly consequences. He has overpromised on delivery dates and autopilot abilities, and he has underestimated capital requirements and the timeline for trips to Mars.

Perhaps Tesla should consider a new CEO, with Elon moving to a more figurehead position?


Elon Musk: „Fake News!“


Unavailable in most European countries. #GDPRproductivity


Sounds like Tesla has the right idea?


Acab :D


How is the autopilot feature still allowed to be called autopilot? It mistakenly leads people to believe that the car can drive itself. The name literally means a system used to control the trajectory of an aircraft without constant 'hands-on' control by a human operator being required.

After multiple deaths now the head of marketing at Tesla should be criminally prosecuted (or whoever is the big shot who signed off on this name).


Its capabilities are about on par with an airplane autopilot.

I have a lot of problems with Tesla, including the way they market this, but the name is accurate.


> Its capabilities are about on par with an airplane autopilot.

Someone should study you, because I don't know how someone with a brain can be so stupid... not considering the possibility that you are paid for it, because everyone knows that there are no shills on Hacker News...


Pilots require a much higher level of licensing however.


Well, then they should also provide the same rigorous training that pilots receive, or make Teslas available only to those who are equally trained.


That, I can agree with. One might also argue that a mere autopilot does not imply good enough capabilities to work in a cluttered, dense environment such as roads.


Autopilot is like the steering wheel, brake, stickshift, etc. You don't get to blame an accident on it. Airplanes have autopilot as well. We live in a world where movies like jackass are made, based on real people and their attitudes. It's not reasonable to suggest that safety labels or terminology can fix everything. Some people are plain irresponsible and foolish.


Everybody is so hard on Tesla. And I do agree that the name "AutoPilot" gives the wrong impression of what it can do.

But think about it like Elon Musk would: if AutoPilot is statistically safer than a car without AutoPilot, then it's good enough to be sold on production cars. Tesla cars are statistically safer than normal cars.

Of course, picking one crash and saying AutoPilot sucks is statistically not the right approach. I do agree that Tesla should improve on this. But crashing into a parked car probably means that the driver was playing with his phone, or with the car's touch screen, too much. Tesla should have systems in place that detect the driver not paying attention. (Something better than touching the steering wheel.)



