
What is happening was clear to many from the start: Tesla incarnates the behavior of its founder, exaggerating the technology that is actually available right now and selling it as a product without the prerequisites for it to be safe. A product that kinda drives your car but sometimes fails, and requires you to pay attention, is simply a crash waiting to happen. And don't get trapped by the "data driven" analysis, like "yeah but it's a lot safer than humans", because there are at least three problems with this statement:

1. Ethical. It is one thing if you do a stupid thing and die. It is another if a technology fails in trivial circumstances that are, in theory, avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.

2. Wrong comparisons: you should compare the autopilot against a rested and focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can decide your own level of safety. A diligent driver who crashes because of immature software is a terrible outcome.

3. Lack of data: AFAIK there is not even enough publicly available data to tell us the crash rate of Teslas with Autopilot enabled vs. Teslas without Autopilot enabled, per kilometer under the same road conditions. That is, you need to compare only on the same roads where the Autopilot is able to drive.

Autopilot will rule the world eventually, and we will be free to use the time differently (even if this was already possible 100 years ago by investing in public transportation instead of spending the money for each of us to have a car... this is sad. Fortunately in central Europe this happened to a degree; check for instance northern Italy, France, ...). But until it is ready, shipping it as a feature in such an immature way just to gain an advantage over competitors is terrible business practice.



> 1. Ethical. It is one thing if you do a stupid thing and die. It is another if a technology fails in trivial circumstances that are, in theory, avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.

This needs elaboration or citation. You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe" and should be shunned in favor of less safe stuff.

I don't buy it. This isn't individual action here; there's no rejection of utilitarian philosophy that's going to help you.

In fact, medical devices seem actually counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.

Bottom line, that point just doesn't work.


I think the point he's making is that it's worse if a person dies because of something they have no control over (self-driving car malfunction) than if a person dies because of their own stupid choices (driving drunk, driving too fast for conditions, running red lights, etc).

This, of course, ignores the fact that the stupid choices drivers make tend to affect other people on the road who did nothing wrong, so the introduction of a self-driving car which makes fewer stupid decisions would reduce deaths from both categories of people here.


> I think the point he's making is that it's worse if a person dies because of something they have no control over

Perhaps. I reject this argument though. A death is a death.

And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.


>Perhaps. I reject this argument though. A death is a death.

A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.

>And if you prevent lifesaving technology then you are responsible for a whole lot of deaths, regardless of how "deserved" those deaths are.

How many deaths has Tesla prevented, which would have happened otherwise?


> How many deaths has Tesla prevented, which would have happened otherwise?

Certainly a larger number than the number of deaths caused by Autopilot failure (which makes major news in every individual case). Have a look at YouTube for videos of Teslas automatically avoiding crashes.


I don't see a number.


> A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.

The result is the same, no? Isn't the idea to save lives? Why does one life have more value than another's simply because someone's death was caused by their own negligence?

What about the other people on the road? Do their lives matter less because of the gross negligence of the person not paying attention while they're driving?

This issue is a lot more complicated than you're making it seem.


It is unethical to ship non-essential/luxury features containing known defects which could lead to someone's death.


> because of something they have no control over

Except in this case, the driver has 100% control.


The driver's not the only one on the road.


No, that conclusion completely ignores the fact that the kind of people who buy Teslas are generally not the kind of people who drive unsafely anyway and die in terrible wrecks.

If you give every shitty driver a self driving Tesla maybe you would do something to make roads safer, but if you’re just giving it to higher net worth individuals who place greater value on their own life, you haven’t even made a dent in traffic safety.

In fact, in some cases all you're doing is making drivers less safe, because the autopilot encourages them not to pay attention to the road, no matter how much you think they are watching carefully. The men killed in Teslas could all have avoided their deaths if only they had been paying attention. If I see a Tesla on the road, I stay the hell away lest it do something irrational from an error and kill us both.


Is there evidence for your better driver claim? https://wheels.blogs.nytimes.com/2013/08/12/the-rich-drive-d...

I do see some sources that claim rich drivers get better insurance rates, but it is unclear to me if that is due to driving skill or a number of other factors that increase rates, like the likelihood of the car being stolen or miles driven.


Seems like your first two paragraphs are amenable to analysis. Surely there is data out there that splits traffic accident statistics on income, or some proxy for income. Is a Tesla on autopilot more accident-prone than a BMW or Lexus? The numbers as they stand certainly seem to imply "no", but I'd be willing to read a paper.

The third paragraph though is just you flinging opinion around. You assert without evidence that a Tesla is likely to "do something irrational from an error and kill us both" (has any such crash even happened? It seems like Teslas are great at avoiding other vehicles, and where they fall down tends to be in recognizing static or slow things like medians, bikers and turning semis, and not turning to avoid them). I mean, sure. You be you and "stay the hell away" from Teslas. But that doesn't really have much bearing on public policy.


Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters... those are the demographic you have to compare the Tesla drivers to. Do people who buy Teslas engage in those kinds of dangerous activities?


> Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters.

Wait, so you are willing to share the road with all those nutjobs, yet you're "staying the hell away from" Teslas you see which you claim are NOT being driven by these people? I think you need to reevaluate your argument. Badly.

That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...


>That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...

What are you basing this on? Specifically, how is it 'clear', and what data has shown this to be 'significant'?


Elon Musk probably


I stay away from all these people. I have no illusions that a self driving Tesla is significantly safer.


I don't think a self-driving car is safe today but the fundamentals of machine learning seem sound so arguably they will become safer with every mile they drive (and every accident they cause).

My only concern is that there should be somewhat responsible people working on it (this means, for example, no Uber, Facebook, LinkedIn, or Ford self-driving car on public roads).

But thinking about it a bit more: what if competitors shared data? Would that get us "there" (level 4+) at all? Would it be a distraction?


There was a recent news item about a Tesla driver who fell asleep at the wheel and who was allegedly over the legal limit:

https://duckduckgo.com/?q=tesla+driver+drunk+asleep&t=ffsb&a...


Yes, that sounds like a pretty good profile of people buying expensive high-performance cars.


yes


Well, this is pretty classist...


The reason why people buy expensive cars is because it is one of the ways in which they can quickly and quietly signal wealth or status. Cars are classicist, like it or not.

After all, how much better can a $500K supercar be compared to a $50K car? Definitely not ten times better, the speed limits are the same, seating capacity is likely smaller, there may be a marginal improvement in acceleration and a corresponding reduction in range (and increased fuel consumption).

Even having a car / not having a car is a status thing for many people (and it goes both ways, some see not having a car as being 'better' than those that have cars and vice versa, usually dependent on whether or not car ownership was decided from a position of prosperity or poverty).


>Cars are classicist, like it or not.

as a trained classicist, I take exception to the idea a car could do what I do


I meant it as an adjective, not as a noun (I'd have added 'a' in front). Is it incorrect?


You'd want "classist" if you mean prejudice against a class


Noted! Thank you.


Some of us just really like nice cars without signaling wealth. I recently bought a Hyundai Genesis even though I liked and could afford the better Mercedes Benz because I preferred to have a non-luxury brand. It's a New England thing.


I went from a two door Hyundai Getz to a Tesla because I wanted an electric car. Not to signal anything. I also couldn't wait for the Model 3 to come to Australia due to a growing family and a 4 door requirement.


Oh I don't disagree. Rather, I was responding to the assertion that wealthy people are better drivers.


>You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe"

There are plenty of people making that case. See for example this piece in The Atlantic the other day (https://www.theatlantic.com/technology/archive/2018/03/got-9...) talking about the concept of moral luck in the context of self driving cars. It puts the point eloquently.


Ethics works by consensus. If the majority of the public become convinced that they prefer to die as a result of their choices than as a result of the choices of a machine they have no control over, even if the machine kills less often, then the machine is not an ethical choice anymore. Basically, forcing people to die in one specific way stinks to high heaven.

As to why we use certain technologies, that is not so clear cut either. For instance, if I have a heart attack and someone uses a defibrillator on me, at that time, that is not necessarily my choice. I'm incapacitated and can't communicate, and if I die at the end there's no way to know what I would have chosen.

Not to mention- most people are not anywhere nearly informed enough to decide what technology should be used to save or protect their lives (for instance, that's why we have vaccine deniers etc).


> Basically, forcing people to die in one specific way stinks to high heaven

Then these people should stop trying to force me to accept a less safe method of transportation, by preventing me from using new technology!

Yeah, those people shouldn't be forced to use a self-driving car. Which they ALREADY are not.

Nobody is being forced to use the technology. Just don't use it if you don't like it.

It is literally the opposite. These other people are forcing ME into a more dangerous situation.


Ok, you take the red tape away from silencers and short barreled rifles and I'll take it away from your self driving cars.

I'm all for less red tape.


Might help to give the context that suppressors can save target shooters' hearing.


The technology is not yet safe, as should be evident by now. It's being promoted as safer than humans, but it's not anywhere near that, yet, mainly because to drive safely you need at least human-level intelligence. Even though humans also drive unsafely, they have the ability to drive very, very safely indeed under diverse conditions that no self-driving car can tackle with anything approaching the same degree of situational awareness and therefore, again, safety.

For the record, that's my objection to the technology: that it's not yet where the industry says it is and it risks causing a lot more harm than good.

Another point. You say nobody is being forced to use the technology. Not exactly; once people start riding in self-driving cars, then everyone is at risk: anyone can be run over by a self-driving car, anyone can be in an accident caused by someone else's self-driving car, etc.

So it's a bit like a smoker saying nobody is forced to smoke. Yeah, but if you smoke next to me, I'm forced to breathe in your smoke.


>In fact, medical devices seem actually counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.

Yes, we would also tolerate Teslas if they were critical life support technology. How many lives has it saved BTW?

>and in fact these things have non-zero (often fatal) false positive rate

What is non zero? Ten to the power -20?


The key here is that in the larger system, we can do more to ensure humans perform better. If we encouraged and enforced behaviors that improve driving statistics, perhaps with other technologies, we would yield a more difficult metric to beat than our current driving stats. I agree we should spend our time doing that, rather than accepting an inferior level of performance.


It is definitely much safer. However, it's unethical to gloss over/cover up/play down the fact that people have died because it failed to drive the car. Both can be true.


Everyone talking about statistical evidence should take a look at this NHTSA report [1]. For example, "Figure 11. Crash Rates in MY 2014-16 Tesla Model S and 2016 Model X vehicles Before and After Autosteer Installation", where they are 1.3 and 0.8 per million miles respectively.
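
For concreteness, here is the back-of-the-envelope arithmetic those two figures imply (a rough sketch using only the numbers quoted above; the caveats below still apply):

    # Crash rates from Figure 11 of the NHTSA report, per million miles
    before_autosteer = 1.3
    after_autosteer = 0.8

    relative_reduction = (before_autosteer - after_autosteer) / before_autosteer
    print(f"{relative_reduction:.0%}")  # ~38% fewer crashes per mile after installation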

Unfortunately, this report seems to have shot itself in the foot by apparently using 'Autopilot' and 'Autosteer' interchangeably, so it leaves open the possibility that the Autopilot software improves or adds features to the performance of fairly basic mitigation methods, such as in emergency braking, while having unacceptable flaws in the area of steering and obstacle detection at speed. In addition, no information is given on the distribution of accident severity. It is unfortunate that this report leaves these questions open.

Even if these claims are sustained, there are two specific matters in which I think the NHTSA has been too passive: As this is beta software, it should not be on the public roads in what amounts to an unsupervised test that potentially puts other road users at risk. Secondly, as this system requires constant supervision by an alert driver, hands-on-the-wheel is not an acceptable test that this constraint is being satisfied.

[1] https://static.nhtsa.gov/odi/inv/2016/INCLA-PE16007-7876.PDF


I mentioned this in a reply to another comment, but the NHTSA findings have apparently raised suspicions in other researchers. Currently, the DOT/NHTSA are facing a FOIA lawsuit for not releasing data (they assert that the data reveals Tesla trade secrets) that can be used to independently verify the study's conclusion:

http://www.safetyresearch.net/blog/articles/quality-control-...

https://jalopnik.com/feds-cant-say-why-they-claim-teslas-aut...


You can add automated safety features without requiring a completely insane and dangerous mode of operation (like pretending the car can drive itself except that it cannot, pretending that you did not pretend in the first place, and reminding the driver that they should have their hands on the wheel and be alert... and what is even the point then?).

They could be more useful with, for now, fewer features. They probably won't do it, because they want some sacrificial beta testers to collect more data for their marginally less crappy next version. But given that the car does not even have the necessary hardware to become a real self-driving car (and that some analysts even think Tesla is going to close soon), the people taking the risk of being sacrificed will probably not even reap the benefits of the risks they have taken, paying enormous amounts of money to effectively work for that company (among other things).


That could be because the car is usually right when it thinks it's in danger; this crash happened because it incorrectly thought it was safe. Having an assistive device that emergency-brakes or steers around obstacles is great as long as there are very few false positives, and as long as it's assistive and not autonomous. Once it's autonomous, you need extremely, extremely low false negative rates as well.


4. There is another completely unknown variable: how many times the autopilot would have crashed if the human hadn't taken over. So Tesla's statistics are actually stating how safe the autopilot and human are when combined, not how safe the autopilot is by itself.


I thought the autopilot was always supposed to be combined with the human.

It’s not a self-driving car. It’s really an advanced cruise control.


From https://www.tesla.com/autopilot: "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver"

Why do we keep referring to something that we understand should require human supervision as "auto"? Stop the marketing buzzfeed and let's be real.


I'm not sure whether it's intentionally been worded that way, but that sentence makes only a statement about hardware, not software. So technically, it's correct that the hardware is more capable than that of a human ("sees" and "hears" more), but it's the software that's not up to par.


That's like "Made with 100% local organic chicken" is only pointing out that the organic chicken is local, unlike the non-organic chicken whose provenance is not guaranteed.

It would be great to live in the marketing people's world where everyone is able to parse weasel words like that. It would solve the fake news problem overnight.


People still use waterproof and water resistant interchangeably. People don't get the difference. Same with HW/SW, they won't know the difference. They read this web page and they think they are buying a self driving car.


bizarre statement

is there any point in saying that a hypothetical brain-dead, comatose bodybuilder is stronger than a starving man?


Yes, if the bodybuilder can be software-upgraded remotely out of the coma.


And if the coma was remote software upgrade induced? Stretching the analogy a bit, but a lot of people hype Tesla’s remote updates without considering how many remote updates tend to brick our devices.


Then there's still a point in making the claim.


That sentence is just saying that the _hardware_ (not the software) is sufficient for "full self-driving capability". The current _software_ doesn't support that.

The point being that in the future the car _could_ get "full self-driving capability" via a software update. In contrast, a car that doesn't have the necessary hardware will never be fully self-driving even if we do develop the necessary software to accomplish that in the future.


And yet it is quasi-criminal (from an ethical POV) that they have worded it that way, for two reasons:

a. When you buy a car, why should you even care about the hw/sw distinction? More importantly, do you have the distinction in mind at all times? And are advertisements usually worded that way, stating that maybe the car could become self-driving one day (but without even stating the "maybe" explicitly, just using tricks)?

b. It is extremely dubious that the car even has the necessary hardware to become a fully autonomous car. We will see, but I don't believe it much, and more importantly, competitors and people familiar with the field also don't believe it much...

People clearly are misunderstanding what Tesla Autopilot is, but this is not, ultimately, their fault. It is Tesla's fault. The people operating the car can NOT be considered perfect, flawless robots. Yet Tesla's model treats them as if they were, and rejects all responsibility, not even acknowledging the terrible mistake of treating them like that. We should act the same way we do when a pilot error happens in an airplane: change the system so that the human makes fewer mistakes (especially if the human is required for safe operation, which is the case here). But Tesla is doing the complete opposite, by misleading buyers and drivers in the first place.

Tesla should be forced by the regulators to stop their shit: stop the misleading and dangerous advertising; stop their uncontrolled Autosteer experiment.


A.) Pretty sure that statement was made to assuage fears that people would be purchasing an expensive asset that rapidly depreciates in value, only to witness it becoming obsolete in a matter of years because its hardware doesn't meet the requirements necessary for full self-driving functionality. Companies like Tesla tout over-the-air patching as a bonus to their product offering. Such a thing is useless if the hardware can't support the new software.

I think I actually sort of disagree with your reasoning in precisely the opposite direction. Specifically, you state the following: "The people operating the car can NOT be considered perfect, flawless robots."

I agree with that statement 100%. People are not perfect robots with perfect driving skills. Far from it. Automotive accidents are a major cause of death in the United States.

What I disagree with is your takeaway. Your takeaway is that Tesla knows that people aren't perfect drivers, so it is irresponsible to sell people a device with a feature (autopilot) that people will use incorrectly. Well, if that isn't the definition of a car, I don't know what is. Cars in and of themselves are dangerous, and it takes perhaps 5 minutes of city driving to see someone doing something irresponsible with their vehicle. This is why driving and the automotive industry are so heavily (and rightly) regulated.

The knowledge that people are not safe drivers, to me, is a strong argument in favor of autopilot and similar features. I suspect, as many people do, that autopilot doesn't compare favorably to a professional driver who is actively engaged in the activity of driving. But this isn't how people drive. To me, the best argument in favor of autopilot is - and I realize this sounds sort of bad - that as imperfect as it may be, its use need only result in fewer accidents, injuries, and deaths than the human drivers who are otherwise driving unassisted.


Wow! I'm glad you pointed that out. It was subtle enough I didn't catch it. But perhaps we should consider this type of wording a fallacy, because with that level of weasel-wording, almost anything is possible! The catch is that it presupposes a non-existent piece of information, the software. And we don't know if that software will ever - or can ever - exist.

Misleading examples of the genre:

My cell phone has the right hardware to cure cancer! I just don't have the right app.

The dumbest student in my class has a good enough brain to verify the Higgs boson. He just doesn't know how.

This mill and pile of steel can make the safest bridge in the world. It just hasn't been designed yet.

Your shopping cart full of food could be used to make a healthy, delicious meal. All you need is a recipe that no one knows.

Baby, I can satisfy your needs up and down as well as any other person. I just have to... well... learn how to satisfy your needs!


It all depends on how likely you think it is that self-driving car tech will become good enough for consumer use within the next several years.

If we were well on the way to completing a cure for cancer that uses a certain type of cell phone hardware, maybe that first statement wouldn't sound so ridiculous.


Yes, but of course the only thing that matters is whether or not the car can do it. That it requires hardware and software is important to techies but a non-issue to regular drivers. They buy cars, not 'hardware and software'.

And if by some chance it turns out that more hardware was required after all, they'll try to shoehorn the functionality into the available package, if only to save some $ but also to avoid looking bad from a PR point of view. That this compromises safety is a given: you can't know today exactly what it will take to produce this feature until you've done so, and there is a fair chance that it will in fact require more sensors, a faster CPU, more memory, or a special-purpose co-processor.


I agree that having that statement at the top of the /autopilot page may insinuate that that's what Autopilot is, but the statement is describing the hardware on the cars rather than the software. I think it's intended to be read as "if you buy a new Tesla, you'll be able to add the Full Self-Driving Capability with a future software update; no hardware updates required." It could be made more clear, though.


People will differ about whether the statement is worded clearly enough, but it is a bizarre thing to put at the very top of the page. It is completely aspirational, and there is no factual basis for it either. No company has yet achieved full self-driving capability, so how can Tesla claim their current vehicles have all the hardware necessary? Even if it's true that future software updates will eventually get good enough, what if the computational hardware isn't adequate for running that software (e.g. older iPhones becoming increasingly untenable to use with each iOS update)?


The autopilot page needs to start with a description of what autopilot is, and then, farther down the page, the point about not having to buy new hardware for "full" self-driving could be made. This probably still needs a disclaimer that this is the belief of the company, not a proven concept.

But that's not going to happen, because Tesla wants to deceive some people into believing that autopilot is "full self driving" so they will buy the car.


That's what Tesla says, but that's not how people are using it - and the more comfortable people grow with the autopilot, the less vigilant they'll become. I have this picture in my head where people are trying to recoup their commute time as though they're using public transport. I suspect we'll get there some day, but today is not that day.

While the Tesla spokespeople are good at saying it's driver assist, their marketing people haven't heard - https://www.tesla.com/autopilot/. That page states "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." but as I noted above, we don't know how much of that safety should be attributed to the human. Tesla apparently knows when the driver's hands are on the steering wheel and I presume they can also tell when the car brakes, so they may have the data to separate those statistics. At a minimum, their engineers should be looking at every case where the autopilot is engaged but the human intervenes. They should probably also slam on the brakes (okay ... more gently) if the driver is alerted to take over but doesn't.

As an aside, just the name "Autopilot" implies autonomy.


"All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."

This is perhaps a case of purposely confusing marketing. All vehicles have the hardware for full self-driving capability but not yet the software. The full self-driving is to be enabled later on through an over-the-air software update.


They can't even honestly claim they've got the hardware, when "full self-driving capability at a safety level substantially greater than that of a human driver" is at this stage something in the realm of hypothesis, and the software that gets closest to it relies on far more hardware than a Tesla's to achieve this.

It's not merely purposely confusing. It's at best, not an outright lie only because they hope it's true.


They don't even have the hardware equivalent of a human driver.

People's eyes can detect individual photons, at much higher resolution and dynamic range than a typical camera.


It's amazing how many "basics" Tesla is missing with the statement that their cars have all the tech to be fully self-driving. I have an Infiniti QX80 with the surround camera system, lane stay, and collision warning system. Unlike Tesla, they've not gone as far as to implement an autopilot feature and, from what I can tell, for good reason. In the less than a year I've owned it, the side collision warning sensor/camera combination has falsely identified phantom threats due to road dust and debris and entirely missed real threats. The sensors simply don't have a self-cleaning mechanism like our eyes, which is one fairly obvious problem that's led to a few issues with my QX80. In looking through Tesla's marketing white papers and what-not, I see no mention of how they keep their sensor system clean. It seems like that should be a pretty basic concern.


I'm not even sure it's fair to describe it as purposefully confusing. If I'm in the market for a Tesla I want to make sure the expensive car I'm buying will be getting all the updates I'm excited about. Otherwise I might hold off buying for a few years.


Even HN readers are misinterpreting it, and there is nothing on Tesla's side of these marketing pages to prevent confusion.

The question of purposefulness is mostly irrelevant. It is their responsibility to avoid ambiguity in this domain, because ambiguity here can be dangerous. They are not doing it => they are putting people in danger. A posteriori, if somebody manages to sue Tesla after a death or a bad injury, maybe the purposefulness will be examined (though it will be hard to establish); but a priori it does not matter much for the consequences of their misleading claims (even if, in the minds of the people who wrote it that way in the first place, it was only harmless advertising).

To finish: that they are carefully choosing their words to be technically true over and over, yet understood differently by most people, is at the least distasteful. That they are doing it over and over, through every existing channel, makes it more and more probable that it is on purpose. Of course there is no proof, but past a certain point we can be sufficiently convinced without a formal proof (hell, even math proofs are rarely checked formally...).


The statistics also include all the cases where drivers are not paying attention as they should, and it's still safer than the average car (at least according to Tesla).


> it's still safer than the average car (at least according to Tesla).

This is Tesla's big lie.

In all their marketing, Tesla is comparing crash rates of their passenger cars to ALL vehicles, including trucks and motorcycles, which have higher fatality rates. Motorcycles are about 10x-50x more dangerous than cars.

Not only that, they aren't controlling for variances in driver demographics - younger and older people have higher accident rates than middle-aged Tesla drivers - as well as environmental factors - rural roads have higher fatalities than highways. Never mind the obvious "accidents in cars with Autopilot" vs "accidents in cars with Autopilot on".

If you do a proper comparison, Tesla's Autopilot is probably 100x more fatal than other cars. It's a god-dammed death trap.

And remember, there are several extremely normal cars with ZERO fatalities: https://www.nbcnews.com/business/autos/record-9-models-have-...

This is not a problem that will be solved without fundamental infrastructure changes in the roads themselves. Anyone that believes in self-driving cars should never be employed, since they don't know what they're talking about.


I agree that the comparison with all motor vehicle deaths is misleading, and that we ought to be looking at accident rates for Tesla cars with Autopilot on versus off. That Tesla hasn't answered the latter despite having the data to do so is concerning.

However, I don't see the evidence for the claim that "Tesla's Autopilot is probably 100x more fatal than other cars". The flip side of the complaint that Tesla hasn't released information to know how safe Autopilot really is, is that we don't know how unsafe it really is either. If this is merely to say "I think Autopilot is likely very unsafe" then just say so, rather than giving a faux numerical value.

As for the claim that self-driving cars can't work without "fundamental infrastructure changes" and everybody working on self-driving cars should be fired, I think you're talking way beyond your domain of expertise.


You're complaining about one side using wildly misleading and baseless stats, but then you turn around and throw out a completely baseless and fairly absurd claim with no attempt to even back or source it, and then claim that because some cars have 0 fatalities, that means something.

The truth is somewhere in between Tesla's marketing and your wildly absurd 100x more fatal claim, but its much closer to Tesla than you.

Tesla's statistics (i.e. real numbers, but context means everything) do involve a whole whack of unrelated comparisons (buses, 18-wheelers, motorcycles) that all serve to skew the stats in various ways, so we can ignore their claim to be slightly safer than regular cars.

However, comparing more like to like, IIHS numbers for passenger-car driver deaths on highways put Tesla at 3.2x more likely than all other cars to be involved in a fatal crash (1 death per 428,000,000 miles driven vs Tesla's 1 death per 133,000,000 miles driven).
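
As a quick sanity check of that 3.2x figure (a sketch that only re-derives the ratio from the two rates quoted above):

    # Highway driver-death rates cited above, expressed as miles driven per death
    miles_per_death_all_cars = 428_000_000
    miles_per_death_tesla = 133_000_000

    # Deaths per mile is the reciprocal, so the risk ratio flips the fraction
    relative_risk = miles_per_death_all_cars / miles_per_death_tesla
    print(f"{relative_risk:.1f}x")  # ~3.2x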

Of course this too is an unfair comparison: a 133hp econobox/Prius is considered equal to a sports car in that comparison, performance-wise. If one were really interested in accuracy, a comparison of high-power AWD cars in similar price ranges, driven on the same roads by the same demographics, would be needed.

So even by standards clearly biased against Tesla, they are nowhere near 100x more fatal than other cars. Tesla's own numbers claim Autopilot reduces accidents, and supposedly NHTSA numbers back them up.

It's important and critical not to believe marketing hype and lazy statistics. But if you want people to take you seriously, countering hype and bad stats with equal or worse hype and worse counter-stats is not the way to do it.


Technically, Tesla cars have an infinitely higher death rate than the cars with zero fatalities.

Since Tesla is comparing their crash rates against motorcycles, the 100x number isn't so absurd.


I bet Teslas are still safer without Autosteer, because the average car includes a lot of old cars and more young drivers.


What is the catchment range for new (inexperienced) drivers? 18-25? There are a lot of people who have been driving for a long time who are not good at driving at all and lack all kinds of self-awareness. For example, following the car in front too closely.


Young people are also less risk averse.


I would also think that the average Tesla owner is less likely to experience constant stress, long commute hours, tiredness and possible mental health issues that can contribute to car accidents.


Drivers not paying attention in non-autopilot vehicles is an increasing problem with the prevalence of smart phones. In places where it's illegal to text and drive I don't think you're going to get out of a ticket by telling the police officer "it's okay because Tesla was driving".

I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash. But you're talking about the autopilot, and it's statistically incorrect to say it's safer than the average car. It's merely safer than a driver alone - this should be no surprise, as you'll find that cars with backup cameras and alarms don't have as many accidents while in reverse as cars without them.


> I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash.

How does a Tesla get such good range? There's still a lot of energy in those batteries, and a damaged battery starts a fire far more easily than leaking fuel --- the former can self-ignite just from dissipating its own energy into an internal short, while the latter needs an ignition source. In addition, the batteries are under the entire vehicle and thus more likely to be damaged; a fuel tank has a smaller area and is concentrated in one place.
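
To put "a lot of energy" in numbers, a rough sketch (the pack size and gasoline energy density are my assumptions, not figures from this thread):

    # Rough stored-energy comparison (assumed figures)
    battery_kwh = 100                  # assumed large Tesla pack
    battery_mj = battery_kwh * 3.6     # 1 kWh = 3.6 MJ
    gasoline_mj_per_litre = 34         # approximate energy density of gasoline

    equivalent_litres = battery_mj / gasoline_mj_per_litre
    print(f"{battery_mj:.0f} MJ, roughly {equivalent_litres:.0f} L of gasoline")

So the total energy is well below that of a full fuel tank; the difference, as noted above, is that the battery can release it without any external ignition source.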

It is extremely rare for fuel tanks to explode in a crash.


I see how that is intuitively true, but it isn't really true in practice. Post crash fires with ICE are unusual, but not extremely rare. Similarly, post-repair car fires (leaky fuel lines) are not as unusual as most people think.

So far, experiential evidence with Tesla seems to be showing a lower than average risk of fires, though the breadth and nature of the battery leads to challenges in managing the fire itself.

All cases that I'm aware of proceeded at a slow enough pace to allow evacuation of the vehicle.


It is the opposite: Li-ion/LiPo batteries are inherently dangerous and can cause a chemical fire for various reasons (overcharging, undercharging, puncture, high temperature, etc.). These things are monitored/controlled in any modern application in normal usage, but in a crash you have to remember that you are literally a few inches away from a massive store of chemical energy. The fire burns very hot, the smoke is toxic, and assuming somebody gets to you in time, it can only be extinguished reliably using special dry-powder fire extinguishers (Class D)...


Fire needs oxygen, fuel and an ignition source. A battery provides the latter two in very close proximity to each other.

How often does a damaged and leaking fuel tank start a fire?

How often does a lithium battery that has been structurally compromised start a fire?

Fire safety is a major negative for lithium batteries. That much electrical energy in that form factor can only be so safe.


That's true, but it goes against any good sense: cars don't allow you to play a video while driving, yet they allow you to have the false sense of security that somebody else is driving when instead you have to pay constant attention?


This morning I saw a minivan drifting all over the road. As we passed him, my passenger noticed that he had CNN playing on a phone attached by suction cup to the middle of the windshield.


That doesn't sound like something that has been extensively studied -- it's strange to me how Tesla keeps citing [0] miles driven by "vehicles equipped with Autopilot hardware", as if it couldn't estimate the subset of miles in which Autopilot was activated -- and it seems like something very hard to measure anyway. How can Tesla or the driver know whether an accident was bound to happen if the accident was prevented?

However, I would think that testing Autopilot alone seems impractical. It's been asserted that AP has no ability to react to, or even detect, stationary objects in front of it. Can't we assume that in all those cases, non-intervention by the driver would result in a crash?

[0] https://www.tesla.com/blog/update-last-week%E2%80%99s-accide...


A lot of human-driven car accident victims have done nothing wrong at all.

Almost every driver thinks they're better than average.

Even when it's a stupid person dying from their stupidity, it's still a tragedy.

I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.

Agree with your point #3. If Tesla autopilot only works in some conditions, its numbers are only comparable to human drivers in these same conditions.


>I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.

What this ignores is that self-driving cars will by and large massively reduce the number of 'stupid' drivers dying (the ones who are texting and driving, drinking and driving, or just simply bad drivers) but may cause the death of more 'good' drivers/innocent pedestrians.

So the total number could go down, but the people who are dying instead didn't necessarily do anything deserving of an accident or death.

I say this as someone who believes self-driving cars will eventually take over and that we need to pass laws allowing a certain percentage of deaths (so that one case of the software being at fault doesn't cause a company to go under). But undeserved deaths are something people will likely have to deal with somewhere down the line with self-driving cars. At the very least, since they're run by software, they should never make the same mistake twice, while with humans you see the same deadly mistakes being made every day.


OK, but if saving 1000 lives a year required as a side effect that you personally be among the fatalities, would that be OK with you? I hope not. Think of this as a technical corner case; the question is the soundness of the analysis (for example, the distribution of deaths and what that means for safety), not letting facile logic get in the way of that work.


Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.

I think that’s a win, even if I now have an even statistical chance to be in the 1001 and no chance to be in the 2001.

Requiring that I be in 1001 is not ok, no more than requiring I donate all my organs tomorrow. Allowing that I might be in the 1001 is ok, just a registering for organ donation is.
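
Spelling out the arithmetic in this hypothetical (toy numbers from this thread, not real data; the 30,000 baseline is the US figure cited upthread):

    # Toy numbers from the thread (hypothetical, not real data)
    baseline_deaths = 30_000          # all-human-driving world, per year
    saved_by_autodriving = 2_001      # would have died, now live
    killed_by_autodriving = 1_001     # die only because of auto-driving

    deaths_with_autodriving = baseline_deaths - saved_by_autodriving + killed_by_autodriving
    net_lives_saved = baseline_deaths - deaths_with_autodriving
    print(deaths_with_autodriving, net_lives_saved)  # 29000 1000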


>> Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.

You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?

So you're basically exchanging the lives of the 1001 for the lives of the 1000? That looks a lot less like a no-brainer than your comment makes it sound.

Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer not to have to die. How is it that their opinion doesn't matter?


It saves 2001 (not 1000 as you said, or perhaps said differently, I'm exchanging the lives of 1001 to preserve the lives of the 2001).

It kills 1001.

Net lives saved = 1000.

> How is it that their opinion doesn't matter?

The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?

It's a trolley problem[1]. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts, because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have died of obesity-related causes because cars shifted walkers and bikers into driving, yet you probably think it's OK that we have cars.

[1] - https://en.wikipedia.org/wiki/Trolley_problem


>> It kills 1001.

>> Net lives saved = 1000.

Right. And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.

>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?

Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.

Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike in the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost them their lives. Some were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all victims of the self-driving cars' mistakes (they were never the driver).

>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.

An airbag or a seatbelt can't drive out on the road and run someone over. The class of accident that airbags cause is the same kind of accident you get when you fall off a ladder, etc. But the kind of accident that auto-cars cause is one where some intelligent agent takes action and the action causes someone else harm. An airbag is not an intelligent agent, and neither is a seatbelt - but an AI car is.


> And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.

No. Net lives lost = -1000. Gross lives lost = 1001.

We killed 1001 people to let 2001 live.


I guess we're not really communicating very well.

Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).

Do you think such a vaccine would be considered successful?

I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.


That's not really a very good argument. If you change parameters in a complex system then the odds are that you are going to find pathologies in new places.

People claim seatbelts have caused lots of deaths, and I'm sure at least a percentage of these claims are fair ([0]). I still think it's safer to drive a car with a seatbelt rather than without.

[0] http://www.againstseatbeltcompulsion.org/victims.htm


The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside. Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random. Run over, rear-ended, T-crashed at an intersection, and could not reasonably have done anything to prevent it.


>> Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random.

But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decisions that led to their deaths.

Unless the decision to go for a walk or a cycle, or to ride in a car, makes you responsible for dying in a car accident?


So you'd rather 1000 other people die than yourself? What kind of logical exercise is this?


There is a simple solution for this: if you think you are an above-average driver, don't use autopilot.


Until self-driving cars start crashing into you


As long as your car doesn't look like a broken road barrier, you should be good.


The US National Safety Council estimated that there were 40,100 motor vehicle deaths in 2017: https://www.usatoday.com/story/money/cars/2018/02/15/nationa...


Designing for safety means that you take into account human behavior at every level and engineer the product to avoid those mistakes.

We already know that there is an area between driver assistance and automatic driving where people just can't keep their attention level where it needs to be. Driving is an activity that maintains attention; people can't watch the road with a hand on the wheel when nothing is happening, and can't keep their level of attention up while the car is driving itself.

The way I see it, the biggest safety sin from Tesla is shipping an experimental beta feature that sits exactly at a known weak point for humans. Adding a potentially dangerous experimental feature, warning about it, and then washing your hands of it is not good safety engineering.

The news story points out how other car manufacturers have cameras that watch the driver for signs of inattention. A driving assistant should watch both the driver and the road. You can't have a driving assistant that can be used as an autopilot.


I agree with most of your points; however, I'm not convinced by your problem number 2:

>you should compare the autopilot against a rested and focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can decide your own level of safety. A diligent driver who crashes because of immature software is a terrible outcome.

I think you overestimate the rationality of human beings. I commute to work by motorcycle every day, and I've noticed that I tend to ride more dangerously (faster, "dirtier" passes, etc.) when I'm tired, which rationally is stupid. I know that, but I still instinctively do it, probably because I'm in a rush to get home or because I feel like the adrenaline keeps me awake.

This is an advantage of autonomous vehicles, they can be reasonable when I'm not. I expect that few drivers (and especially everyday commuters like me) constantly drive "slowly and with a lot of care". A good enough autonomous vehicle would.


How can anyone take seriously a company that sells features such as "Bio-Weapon Defense Mode"? It almost sounds like snake oil, not far from something like the ADE 651 [1].

[1] https://en.wikipedia.org/wiki/ADE_651


"Not only did the vehicle system completely scrub the cabin air, but in the ensuing minutes, it began to vacuum the air outside the car as well, reducing PM2.5 levels by 40%. In other words, Bioweapon Defense Mode is not a marketing statement, it is real. You can literally survive a military grade bio attack by sitting in your car."

https://www.tesla.com/blog/putting-tesla-hepa-filter-and-bio...


Here is my reply from the discussion 2 years ago.

https://news.ycombinator.com/item?id=11617945

I think they are not testing with small enough particles. In the article, they test with PM2.5 particles, which would be around 2.5 micrometers. However, if you look at the table on page 11 of

http://multimedia.3m.com/mws/media/409903O/respiratory-prote...

Potential bio weapons such as smallpox, anthrax, influenza and the hemorrhagic viruses are far smaller than 2.5 microns.

Also, there are probably issues with the sensitivity of the detection equipment. If you look at the table on page 8 of

https://www.ll.mit.edu/publications/journal/pdf/vol12_no1/12...

And at the table at

http://www.siumed.edu/medicine/id/current_issues/bioTable2.p...

You will see that some of the biological agents can cause infection with as few as 10 particles. I doubt that the Tesla equipment could detect a concentration of 10 particles of these sizes.

This article is basically the biological equivalent of the "I can't break my own crypto" article.

>Bioweapon Defense Mode is not a marketing statement, it is real.

is false. Extraordinary claims require extraordinary evidence, and the evidence of Bioweapon Defense Mode working is entirely lacking.


This claim is still wrong.

HEPA filters capture particulates. PM2.5 means particles up to 2.5 micrometers in diameter. A good 0.2-0.3 micrometer HEPA filter is fine enough to catch bacteria like anthrax. Smallpox and influenza viruses are smaller; you need a carbon adsorber to be safe.


> a military grade bio attack

This is like thinking you will survive a storm in the middle of the ocean because you have a life jacket on you.


"extreme levels of pollution" is not the same as a bio-chemical weapon.


Can you expand on this and provide evidence to support your claim? I would imagine Tesla would have thoroughly vetted this statement; however, I'm curious to hear how bio-chemical weapons differ from extreme pollution (they do also mention viruses).


Had to look it up [1]

>"We’re trying to be a leader in apocalyptic defense scenarios," Musk continued.

Is this guy serious?

[1] https://www.theverge.com/2015/9/30/9421719/tesla-model-x-bio...


Scenarios, indeed. Hollywood-grade threat models - riveting yet improbable. As opposed to such mundane threats as "not driving into massive stationary objects".


While the marketing fluff might have the tone of a self-driving car, I doubt the legal material in a Tesla Model S as it relates to 'autopilot' has such language - it's advanced steering assist and cruise control, not an entirely autonomous car - but it's marketed as an autonomous car.

Functionality that can _most_ of the time drive itself without human intervention, and occasionally drives itself into a divider on the highway, seems like a callous disregard for human life and for how such functionality will be used.

Sure, everything is avoidable, if there's some expectation that it needs to be avoided.

The whole point of autopilot is to avoid focusing on the road all of the time. So if you set up circumstances under which humans perceive the functionality (autopilot) as behaving as expected most of the time, then it's highly likely they'll treat it as such and succumb to a false sense of security.

My point: when a feature is life-threatening, your marketing fluff shouldn't deviate significantly from your legal language.


> you should compare the autopilot against a rested and focused driver who drives slowly and with a lot of care

You raise an interesting point: accidents are not evenly distributed throughout the driver's day, or even across the population. Your likelihood of having a crash with injuries is highly correlated with whether or not you've had one before. A substantial number involve alcohol, consumed by drivers previously cited for DUIs.

We keep using average crash statistics for humans as a baseline, but that may be misleading. Some drivers may actually be increasing their risk by moving to first gen self driving technology, even while others reduce their risk.

On the other hand, we do face a real Hindenburg threat here. Zeppelins flew thousands of miles safely, and even in that disaster, many survived. But all people could think of when boarding an airship after that was Herbert Morrison's voice and the flames.

I have already heard people who don't work in technology mumbling about how they think computers are far more dangerous than humans (not because of your nuanced distinction, but simply ignoring or unaware of any statistics).

I worry we're only a few high profile accidents away from total bans on driverless cars, at least in some states. Especially if every one of these is going to make great press, while almost no pure human accidents will. The availability heuristic alone will confuse people.


> 1. Ethical. It is one thing if you do a stupid thing and die. It is another if a technology fails in trivial circumstances that are, in theory, avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.

I'm not sure I follow you here. Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?

Those two statements don't connect for me.

I suppose that's partly because I am an automation engineer, and I deal a lot with inspecting operator-assembled and machine-assembled parts. If the machine builds parts that also pass the requirements, it's good.

It's nice if it's faster or more consistent, and sure we can build machines that produce parts with unnecessarily tight tolerances, but not meeting those possibilities doesn't feel like an ethical failing to me.


> Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?

Yes. Not infallible, but I believe that to put our lives in the hands of machines, the technology must be at least on par with the best drivers. Being better than average but more fallible than a good driver is IMHO still not a good enough standard for it to be ethically OK to sell self-driving cars, even if you get 5% fewer deaths per year compared to everybody driving while, like, writing text messages on their phones. If instead the machine is even a bit better than a high-standard driver who drives with care, at that point it makes sense, because you no longer care about the distribution of deaths across behaviors.


If a survey were conducted in almost any part of the world, I'd guess that most people (including me) would prefer to be hit by a human rather than by an autonomous car. I'm not sure why this is, but having someone to blame, and the fact that being the victim of an emotionless, lifeless machine is subjectively way more horrifying than being the victim of a person, are some reasons I can think of.


I'm in central Europe, switched from a train to a car. Reduced commute time, increased comfort and reliability. Costs several times more though.


What about the fact that you cannot do anything useful other than podcasts/music while driving? Btw, your point sounds to me like public transport just needs to get better, not that moving towards this model is a bad idea in general.


> What about the fact that you cannot do anything useful other than podcasts/music while driving?

To be fair, I rarely was doing anything useful on my 40 min train commute either :) Mostly reading Hacked News on my phone. Now I'm at least looking into the distance, taking some strain off my eyes.

I totally agree that public transport has to get better, it's just that there always has to be a mixture of transportation options.


> Reduced commute time, increased comfort and reliability.

Could the reason for this be that many other people use public transport instead, and thus the roads are much less busy?


I'm sure that's a big contributor. I myself relied solely on public transport for the years we lived close to the subway. Now that we've moved outside of the city, driving makes more sense.


What makes the "self driving" functionality that all the other brands are marketing different, though? Is it only that Tesla was first? Volvo literally calls their technology Autopilot too.


Saying “When you drive you are in control” is fine, but you’re not always the only person impacted when you crash.

As we have no control over how others drive, statistics are more relevant.

As a pedestrian, I care about cars being on average less likely to kill me. If it means I'm safer, I would rather the driver wasn't in control of their own safety.

The ethical solution is the one with the least overall harm.

Of course being safer overall while taking control from the driver is unlikely to drive sales.


> Wrong comparisons: you should compare the autopilot against a rested and focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control and can decide your own level of safety. A diligent driver who crashes because of immature software is a terrible outcome.

Do these car companies test their software by just leaving it on all the time in a passive mode and measuring the deviation from the human driver?

I'd think that at least in the MV case, you'd see a higher incidence of lane-following deviation at this spot, and it would warrant an investigation.

Something like this isn't easy, but for a company as ambitious as Tesla it doesn't sound unreasonable. Such a crash with such an easy repro should have been known about a long, long time ago.


4. These statistics must be conditioned. Not all drivers are the same: some are careful, some are not. So in fact, the careful drivers will make their lives more dangerous by using autopilot.


> A product that kinda drives your car but sometimes fails, and requires you to pay attention

So, cruise control? If people got confused and thought that cruise control was more than it really was in the 80s, or whenever it came out, what would we have done?


That is a long description of a problem that is much deeper than just marketing, IMO. The biggest issue that I see is that the AutoPilot system does not have particularly strong alertness monitoring.


Also, it's my belief that the "miles driven" statistic they always point at (wrt safety) should be reset with each software release.



