What is happening has been clear to many since the start: Tesla embodies the behavior of its founder, exaggerating the technology that is actually available right now and selling it as a product without the preconditions for it to be safe. A product that kinda drives your car but sometimes fails, and requires you to pay attention, is simply a crash waiting to happen. And don't get trapped by the "data driven" analysis, like "yeah, but it's a lot safer than humans", because there are at least three problems with this statement:
1. Ethical. It is one thing to do something stupid and die. It is another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
2. Wrong comparisons: you should measure the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control, and you can decide your own level of safety. Such a diligent driver crashing because of immature software is terrible.
3. Lack of data: AFAIK there is not even enough publicly available data to tell us the crash rate of Teslas with Autopilot enabled vs. Teslas without Autopilot enabled, per kilometer under the same road conditions. That is, you need to compare only on the same roads where the Autopilot is able to drive; see the sketch right after this list.
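A minimal sketch in Python of the kind of conditioned comparison I mean, with every number invented (Tesla has not published data at this granularity):

    # Hypothetical crash data, broken down by road type and by whether
    # Autopilot was engaged. All numbers are invented for illustration.
    km_driven = {                        # millions of km
        ("highway", "autopilot_on"):  500,
        ("highway", "autopilot_off"): 800,
    }
    crashes = {
        ("highway", "autopilot_on"):  400,
        ("highway", "autopilot_off"): 700,
    }

    # Compare rates only for road conditions where Autopilot can actually drive.
    for key, km in km_driven.items():
        print(key, round(crashes[key] / km, 2), "crashes per million km")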
Autopilot will rule the world eventually, and we will be free to use that time differently (even if this was already possible 100 years ago by investing in public transportation instead of each of us spending the money to own a car... this is sad. Fortunately in central Europe this happened to a degree; check for instance northern Italy, France, ...). But until it is ready, shipping it as a feature in such an immature state just to gain an advantage over competitors is a terrible business practice.
> 1. Ethical. It is one thing to do something stupid and die. It is another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
This needs elaboration or citation. You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe" and should be shunned in favor of less safe stuff.
I don't buy it. This isn't individual action here; there's no rejection of utilitarian philosophy that's going to help you.
In fact, medical devices seem actually counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.
I think the point he's making is that it's worse if a person dies because of something they have no control over (self-driving car malfunction) than if a person dies because of their own stupid choices (driving drunk, driving too fast for conditions, running red lights, etc).
This, of course, ignores the fact that stupid choices drivers make tend to affect other people on the road who did nothing wrong, so the introduction of a self-driving car which makes fewer stupid decisions would reduce deaths from both categories of people here.
> How many deaths has Tesla prevented, which would have happened otherwise?
Certainly a larger number than the number of deaths caused by Autopilot failure (which makes major news in every individual case). Have a look at YouTube for videos of Teslas automatically avoiding crashes.
> A death caused by shitty engineering (e.g. Tesla in this case) is not the same as a death caused by gross negligence on one's own part.
The result is the same, no? Isn't the idea to save lives? Why does one life have more value than another's simply because someone's death was caused by their own negligence?
What about the other people on the road? Do their lives matter less because of the gross negligence of the person not paying attention while they're driving?
This issue is a lot more complicated than you're making it seem.
No, that conclusion completely ignores the fact that the kind of people who buy Teslas are generally not the kind of people who drive unsafely anyway and die in terrible wrecks.
If you give every shitty driver a self driving Tesla maybe you would do something to make roads safer, but if you’re just giving it to higher net worth individuals who place greater value on their own life, you haven’t even made a dent in traffic safety.
In fact, in some cases all you're doing is making drivers less safe, because the autopilot encourages them not to pay attention to the road, no matter how carefully you think they are watching. The men killed in Teslas could all have avoided their deaths if only they had been paying attention. If I see a Tesla on the road I stay the hell away, lest it do something irrational because of an error and kill us both.
I do see some sources that claim rich drivers get better insurance rates, but it is unclear to me if that is due to driving skill or a number of other factors that affect rates, like likelihood of being stolen or miles driven.
Seems like your first two paragraphs are amenable to analysis. Surely there is data out there that splits traffic accident statistics on income, or some proxy for income. Is a Tesla on autopilot more accident-prone than a BMW or Lexus? The numbers as they stand certainly seem to imply "no", but I'd be willing to read a paper.
The third paragraph though is just you flinging opinion around. You assert without evidence that a Tesla is likely to "do something irrational because of an error and kill us both" (has any such crash even happened? Teslas seem great at avoiding other vehicles; where they fall down tends to be in recognizing static/slow things like medians, bikers and turning semis, and failing to turn to avoid them). I mean, sure. You be you and "stay the hell away" from Teslas. But that doesn't really have much bearing on public policy.
Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters... those are the demographic you have to compare the Tesla drivers to. Do people who buy Teslas engage in those kinds of dangerous activities?
> Drunk drivers, recklessly fast drivers, redlight runners, stop sign blowers, high speed tailgaters.
Wait, so you are willing to share the road with all those nutjobs, yet you're "staying the hell away from" Teslas you see which you claim are NOT being driven by these people? I think you need to reevaluate your argument. Badly.
That even leaves aside the clear point that a Tesla on autopilot is significantly less likely to make any of those mistakes...
I don't think a self-driving car is safe today but the fundamentals of machine learning seem sound so arguably they will become safer with every mile they drive (and every accident they cause).
My only concern is that there should be somewhat responsible people working on it (this means, for example, no Uber, Facebook, LinkedIn, or Ford self-driving car on public roads).
But thinking about it a bit more, what if competitors shared data? Would that get us "there" (level 4+) at all? Would it be a distraction?
The reason people buy expensive cars is that it is one of the ways in which they can quickly and quietly signal wealth or status. Cars are classist, like it or not.
After all, how much better can a $500K supercar be compared to a $50K car? Definitely not ten times better, the speed limits are the same, seating capacity is likely smaller, there may be a marginal improvement in acceleration and a corresponding reduction in range (and increased fuel consumption).
Even having a car / not having a car is a status thing for many people (and it goes both ways, some see not having a car as being 'better' than those that have cars and vice versa, usually dependent on whether or not car ownership was decided from a position of prosperity or poverty).
Some of us just really like nice cars without signaling wealth. I recently bought a Hyundai Genesis even though I liked and could afford the better Mercedes Benz because I preferred to have a non-luxury brand. It's a New England thing.
I went from a two door Hyundai Getz to a Tesla because I wanted an electric car. Not to signal anything. I also couldn't wait for the Model 3 to come to Australia due to a growing family and a 4 door requirement.
>You're one of the only folks I've seen come straight out and make this point: that somehow a technology or practice that is objectively safer can still be "unethically safe"
There are plenty of people making that case. See for example this piece in The Atlantic the other day (https://www.theatlantic.com/technology/archive/2018/03/got-9...) talking about the concept of moral luck in the context of self driving cars. It puts the point eloquently.
Ethics works by consensus. If the majority of the public become convinced that they would rather die as a result of their own choices than as a result of the choices of a machine they have no control over, even if the machine kills less often, then the machine is not an ethical choice anymore. Basically, forcing people to die in one specific way stinks to high heaven.
As to why we use certain technologies, that is not so clear cut either. For instance, if I have a heart attack and someone uses a defibrillator on me- at that time, that is not necessarily my choice. I'm incapacitated and can't communicate, and if I die at the end there's no way to know what I would have chosen.
Not to mention- most people are not anywhere near informed enough to decide what technology should be used to save or protect their lives (for instance, that's why we have vaccine deniers etc).
The technology is not yet safe, as should be evident by now. It's being promoted as safer than humans, but it's not anywhere near that, yet, mainly because to drive safely you need at least human-level intelligence. Even though humans also drive unsafely, they have the ability to drive very, very safely indeed under diverse conditions that no self-driving car can tackle with anything approaching the same degree of situational awareness and therefore, again, safety.
For the record, that's my objection to the technology: that it's not yet where the industry says it is and it risks causing a lot more harm than good.
Another point. You say nobody is being forced to use the technology. Not exactly; once people start riding self-driving cars then everyone is at risk- anyone can be run over by a self-driving car, anyone can be in an accident caused by someone else's self-driving car, etc.
So it's a bit like a smoker saying nobody is forced to smoke- yeah, but if you smoke next to me I'm forced to breathe in your smoke.
>In fact, medical devices seem actually counter to your argument. No one says an automatic defibrillator's decisions as to when to fire are going to be as good as a human doctor, and in fact these things have non-zero (often fatal) false positive rates. But they save lives, so we use them anyway.
Yes, we would also tolerate Teslas if they were critical life support technology. How many lives have they saved, BTW?
>and in fact these things have non-zero (often fatal) false positive rate
The key here is that in the larger system, we can do more to ensure humans perform better. If we encouraged and enforced behaviors that improve driving statistics, perhaps with other technologies, we would yield a more difficult metric to beat than our current driving stats. I agree we should spend our time doing that, rather than accepting an inferior level of performance.
It is definitely much safer. However it's unethical to gloss over/cover up/play down the fact that people have died because it failed to drive the car. Both can be true.
Everyone talking about statistical evidence should take a look at this NHTSA report [1]. For example, "Figure 11. Crash Rates in MY 2014-16 Tesla Model S and 2016 Model X vehicles Before and After Autosteer Installation", where the rates are 1.3 and 0.8 crashes per million miles, respectively.
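Just to spell out what those two figures amount to (nothing here beyond the 1.3 and 0.8 cited above):

    # Crash rates per million miles, Figure 11 of the NHTSA report.
    before_autosteer = 1.3
    after_autosteer = 0.8
    reduction = (before_autosteer - after_autosteer) / before_autosteer
    print(f"relative reduction: {reduction:.0%}")   # 38%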
Unfortunately, this report seems to have shot itself in the foot by apparently using 'Autopilot' and 'Autosteer' interchangeably, so it leaves open the possibility that the Autopilot software improves or adds features to the performance of fairly basic mitigation methods, such as in emergency braking, while having unacceptable flaws in the area of steering and obstacle detection at speed. In addition, no information is given on the distribution of accident severity. It is unfortunate that this report leaves these questions open.
Even if these claims are sustained, there are two specific matters in which I think the NHTSA has been too passive: As this is beta software, it should not be on the public roads in what amounts to an unsupervised test that potentially puts other road users at risk. Secondly, as this system requires constant supervision by an alert driver, hands-on-the-wheel is not an acceptable test that this constraint is being satisfied.
I mentioned this in a reply to another comment, but the NHTSA findings have apparently raised suspicions in other researchers. Currently, the DOT/NHTSA are facing a FOIA lawsuit for not releasing data (they assert that the data reveals Tesla trade secrets) that can be used to independently verify the study's conclusion:
You can add automated safety features without requiring a completely insane and dangerous mode of operation (like pretending the car can drive itself, except that it can not, and pretending that you did not pretend in the first place, while reminding the driver that they should have their hands on the wheel and be alert... and what is even the point then?)
They could be more useful with -- for now -- fewer features. They probably won't do it because they want some sacrificial beta testers to collect some more data for their marginally less crappy next version. But given the car does not even have the necessary hardware to become a real self-driving car (and that some analysts even think Tesla is gonna close soon), the people taking the risk of being sacrificed will probably not even reap the benefits of the risks they have taken, paying enormous amounts of money to effectively work for that company (among other things).
That could be because the car is usually right when it thinks it's in danger; this crash happened because it incorrectly thought it was safe. Having an assistive device that emergency-brakes or steers around obstacles is great as long as there are very few false positives, and as long as it's assistive and not autonomous. Once it's autonomous, you need to have extremely, extremely low false negative rates as well.
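A toy model of that asymmetry, with every number invented, just to illustrate why the tolerances differ once there is no attentive human behind the detector:

    # Toy model, invented numbers.
    obstacle_events_per_year = 200      # hazards a car encounters per year
    false_negative_rate = 0.01          # system misses 1% of them
    p_driver_also_misses = 0.02         # attentive driver as a backstop

    assistive = obstacle_events_per_year * false_negative_rate * p_driver_also_misses
    autonomous = obstacle_events_per_year * false_negative_rate

    print(f"unhandled hazards/year, assistive:  {assistive:.2f}")   # 0.04
    print(f"unhandled hazards/year, autonomous: {autonomous:.2f}")  # 2.00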
4. There is another completely unknown variable - how many times would the autopilot have crashed if the human hadn't taken over. So Tesla's statistics are actually stating how safe the autopilot and human are when combined, not how safe the autopilot is by itself.
From https://www.tesla.com/autopilot: "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver"
Why do we keep referring to something that we understand should require human supervision as "auto"? Stop the marketing buzzfeed and let's be real.
I'm not sure whether it's intentionally been worded that way, but that sentence makes only a statement about hardware, not software. So technically, it's correct that the hardware is more capable than that of a human ("sees" and "hears" more), but it's the software that's not up to par.
That's like "Made with 100% local organic chicken" is only pointing out that the organic chicken is local, unlike the non-organic chicken whose provenance is not guaranteed.
It would be great to live in the world of marketing people, where everyone is so able to parse weasel words. That would solve the fake news problem overnight.
People still use waterproof and water resistant interchangeably. People don't get the difference. Same with HW/SW, they won't know the difference. They read this web page and they think they are buying a self driving car.
And if the coma was remote software upgrade induced? Stretching the analogy a bit, but a lot of people hype Tesla’s remote updates without considering how many remote updates tend to brick our devices.
That sentence is just saying that the _hardware_ (not the software) is sufficient for "full self-driving capability". The current _software_ doesn't support that.
The point being that in the future the car _could_ get "full self-driving capability" via a software update. In contrast, a car that doesn't have the necessary hardware will never be fully self-driving even if we do develop the necessary software to accomplish that in the future.
And yet it is quasi-criminal (from an ethical point of view) that they have worded it that way, for two reasons:
a. When you buy a car, why should you even care about that hardware/software distinction? More importantly, do you have the distinction in mind at all times? And are advertisements usually worded that way, stating that maybe the car could become self-driving one day (without even stating the "maybe" explicitly, just using tricks)?
b. It is extremely dubious that the car even has the necessary hardware to become a fully autonomous car. We will see, but I don't believe it much, and more importantly, competitors and people familiar with the field don't believe it much either...
People are clearly misunderstanding what Tesla Autopilot is, but that is not, ultimately, their fault. It is Tesla's fault. The people operating the car can NOT be treated as perfect, flawless robots. Yet Tesla's model treats them like that, and rejects all responsibility, without even acknowledging that treating them like that was a terrible mistake. We need to react the same way we do when a pilot mistake happens in an airplane: change the system so that the human makes fewer mistakes (especially if the human is required for safe operation, which is the case here). But Tesla is doing the complete opposite, by misleading buyers and drivers in the first place.
Tesla should be forced by the regulators to stop their shit: stop the misleading and dangerous advertising; stop their uncontrolled Autosteer experiment.
A.) Pretty sure that statement was made to assuage fears that people would be purchasing an expensive asset that rapidly depreciates in value, only to witness it becoming obsolete in a matter of years because its hardware doesn't meet the requirements necessary for full self-driving functionality. Companies like Tesla tout over-the-air patching as a bonus to their product offering. Such a thing is useless if the hardware can't support the new software.
I think I actually sort of disagree with your reasoning in precisely the opposite direction. Specifically, you state the following: "The people operating the car can NOT be treated as perfect, flawless robots."
I agree with that statement 100%. People are not perfect robots with perfect driving skills. Far from it. Automotive accidents are a major cause of death in the United States.
What I disagree with is your takeaway. Your takeaway is that Tesla knows that people aren't perfect drivers, so it is irresponsible to sell people a device with a feature (autopilot) that people will use incorrectly. Well, if that isn't the definition of a car, I don't know what is. Cars in and of themselves are dangerous, and it takes perhaps 5 minutes of city driving to see someone doing something irresponsible with their vehicle. This is why driving and the automotive industry are so heavily (and rightly) regulated.
The knowledge that people are not safe drivers is, to me, a strong argument in favor of autopilot and similar features. I suspect, as many people do, that autopilot doesn't compare favorably to a professional driver who is actively engaged in the activity of driving. But this isn't how people drive. To me, the best argument in favor of autopilot is - and I realize this sounds sort of bad - that as imperfect as it may be, its use need only result in fewer accidents, injuries, and deaths than the human drivers who are otherwise driving unassisted.
Wow! I'm glad you pointed that out. It was subtle enough I didn't catch it. But perhaps we should consider this type of wording a fallacy, because with that level of weasel-wording, almost anything is possible! The catch is that it presupposes a non-existent piece of information, the software. And we don't know if that software will ever - or can ever - exist.
Misleading examples of the genre:
My cell phone has the right hardware to cure cancer! I just don't have the right app.
The dumbest student in my class has a good enough brain to verify the Higgs-Boson particle. He just doesn't know how.
This mill and pile of steel can make the safest bridge in the world. It just hasn't been designed yet.
Your shopping cart full of food could be used to make a healthy, delicious meal. All you need is a recipe that no one knows.
Baby, I can satisfy your needs up and down as well as any other person. I just have to... well... learn how to satisfy your needs!
All depends on how likely you think it is that self-driving car tech will become good enough for consumer use within the next several years.
If we were well on the way to completing a cure for cancer that uses a certain type of cell phone hardware, maybe that first statement wouldn't sound so ridiculous.
Yes, but of course the only thing that matters is whether or not the car can do it. That it requires hardware and software is important to techies but a non-issue to regular drivers. They buy cars, not 'hardware and software'.
And if by some chance it turns out that more hardware was required after all, they'll try to shoehorn the functionality into the available package. If only to save some $, but also to not look bad from a PR point of view. That this compromises safety is a given: you can't know today exactly what it will take to produce this feature until you've done so, and there is a fair chance that it will in fact require more sensors, a faster CPU, more memory, or a special-purpose co-processor.
I agree that having that statement at the top of the /autopilot page may insinuate that that's what Autopilot is, but the statement describes the hardware on the cars rather than the software. I think it's intended to be read as "if you buy a new Tesla, you'll be able to add the Full Self-Driving Capability with a future software update; no hardware updates required." It could be made more clear, though.
People will differ about whether the statement is worded clearly enough, but it is a bizarre thing to put on the very top of the page. It is completely aspirational, and there is no factual basis for it either. No company has yet achieved full self-driving capability, so how can Tesla claim their current vehicles have all the hardware necessary? Even if it's true that future software updates will eventually get good enough, what if the computational hardware isn't adequate for running that software (e.g. older iPhones becoming increasingly untenable to use with each iOS update).
The autopilot page needs to start with a description of what autopilot is, and then farther down the page, the point about not having to buy new hardware for "full" self driving could be made. This probably still needs a disclaimer that that is the belief of the company, not a proven concept.
But that's not going to happen, because Tesla wants to deceive some people into believing that autopilot is "full self driving" so they will buy the car.
That's what Tesla says but that's not how people are using it - and as people grow more comfortable with the autopilot, the less vigilant they'll become. I have this picture in my head where people are trying to recoup their commute time as though they're using public transport. I suspect we'll get there some day but today is not that day.
While the Tesla spokespeople are good at saying it's driver assist, their marketing people haven't heard - https://www.tesla.com/autopilot/. That page states "All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver." but as I noted above, we don't know how much of that safety should be attributed to the human. Tesla apparently knows when the driver's hands are on the steering wheel and I presume they can also tell when the car brakes, so they may have the data to separate those statistics. At a minimum, their engineers should be looking at every case where the autopilot is engaged but the human intervenes. They should probably also slam on the brakes (okay ... more gently) if the driver is alerted to take over but doesn't.
As an aside, just the name "Autopilot" implies autonomy.
"All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."
This is perhaps a case of purposely confusing marketing. All vehicles have the hardware for full self-driving capability but not yet the software. The full self-driving is to be enabled later on through an over-the-air software update.
They can't even honestly claim they've got the hardware, when "full self-driving capability at a safety level substantially greater than that of a human driver" is at this stage something in the realm of hypothesis, and the software which gets closest to it relies on far more hardware than a Tesla's to achieve this.
It's not merely purposely confusing. It's, at best, not an outright lie only because they hope it's true.
It's amazing how many "basics" Tesla is missing with the statement that their cars have all the tech to be fully self-driving. I have an Infiniti QX80 with the surround camera system, lane keeping, and collision warning system. Unlike Tesla, they've not gone as far as to implement an autopilot feature and, from what I can tell, for good reason. In the less than a year I've owned it, the side collision warning sensor / camera combination has falsely identified phantom threats due to road dust and debris and entirely missed real threats. The sensors simply don't have a self-cleaning mechanism like our eyes, which is one fairly obvious problem that's led to a few issues with my QX80. In looking through Tesla's marketing white papers and what-not, I see no mention of how they keep their sensor system clean. It seems like that should be a pretty basic concern.
I'm not even sure it's fair to describe it as purposefully confusing. If I'm in the market for a Tesla I want to make sure the expensive car I'm buying will be getting all the updates I'm excited about. Otherwise I might hold off buying for a few years.
Even HN readers are misinterpreting it, and there is nothing on Tesla's side to prevent that confusion on these marketing pages.
The question of purposefulness is mostly irrelevant. It is their responsibility to avoid ambiguity in this domain, because ambiguity here can be dangerous. They are not doing it => they are putting people in danger. A posteriori, if somebody manages to sue Tesla after a death or a bad injury, maybe the purposefulness will be examined (though it will be hard to decide), but a priori it does not matter much for the consequences of their misleading claims (even if, in the minds of the people who wrote it that way in the first place, it was only harmless advertising).
To finish: that they carefully choose their words to be technically true over and over, yet understood another way by most people, is at the very least distasteful. That they do it over and over, through every existing channel, makes it more and more probable that it is on purpose. Of course there is no proof, but past a certain point we can be sufficiently convinced without a formal proof (hell, even math proofs are rarely checked formally...).
The statistics also include all the cases where drivers are not paying attention as they should, and it's still safer than the average car (at least according to Tesla).
> it's still safer than the average car (at least according to Tesla).
This is Tesla's big lie.
In all their marketing, Tesla is comparing crash rates of their passenger cars to ALL vehicles, including trucks and motorcycles, which have higher fatality rates. Motorcycles are about 10x-50x more dangerous than cars.
Not only that, they aren't controlling for variances in driver demographics - younger and older people have higher accident rates than middle-aged Tesla drivers - as well as environmental factors - rural roads have higher fatalities than highways. Never mind the obvious "accidents in cars equipped with Autopilot" vs "accidents in cars with Autopilot on".
If you do a proper comparison, Tesla's Autopilot is probably 100x more fatal than other cars. It's a god-damned death trap.
This is not a problem that will be solved without fundamental infrastructure changes in the roads themselves. Anyone that believes in self-driving cars should never be employed, since they don't know what they're talking about.
I agree that the comparison with all motor vehicle deaths is misleading, and that we ought to be looking at accident rates for Tesla cars with Autopilot on versus off. That Tesla hasn't answered the latter despite having the data to do so is concerning.
However, I don't see the evidence for the claim that "Tesla's Autopilot is probably 100x more fatal than other cars". The flip side of the complaint that Tesla hasn't released information to know how safe Autopilot really is, is that we don't know how unsafe it really is either. If this is merely to say "I think Autopilot is likely very unsafe" then just say so, rather than giving a faux numerical value.
As for the claim that self-driving cars can't work without "fundamental infrastructure changes" and everybody working on self-driving cars should be fired, I think you're talking way beyond your domain of expertise.
You're complaining about one side using wildly misleading and baseless stats, but then turn around and throw out a completely baseless and fairly absurd claim with no attempt to even back or source it, and then claim that because some cars have 0 fatalities that means something.
The truth is somewhere in between Tesla's marketing and your wildly absurd 100x-more-fatal claim, but it's much closer to Tesla than to you.
Tesla's statistics (i.e. real numbers, but context means everything) do involve a whole whack of unrelated comparisons (buses, 18-wheelers, motorcycles) that all serve to skew the stats in various ways, so we can ignore their claim to be slightly safer than regular cars.
However, comparing more like to like, IIHS numbers for passenger-car driver deaths on highways put Tesla at 3.2x more likely than all other cars to be involved in a fatal crash (1 death per 428,000,000 miles driven vs Tesla's 1 death per 133,000,000 miles driven).
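Making the arithmetic behind that 3.2x explicit (nothing here beyond the two figures above):

    # Miles driven per driver death, from the comparison above.
    all_passenger_cars = 428_000_000
    tesla = 133_000_000
    print(f"relative fatal-crash rate: {all_passenger_cars / tesla:.1f}x")  # 3.2x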
Of course this too is an unfair comparison. A 133 hp econobox/Prius and a sports car are treated as equal in performance terms in that comparison. If one were really interested in accuracy, a comparison of high-power AWD cars in similar price ranges, driven on the same roads by the same demographics, would be needed.
So even by standards clearly biased against Tesla, they are nowhere near 100x more fatal than other cars. Tesla's own numbers claim Autopilot reduces accidents, and supposedly NHTSA numbers back them up.
It's important and critical not to believe marketing hype and lazy statistics. If you want people to take you seriously, countering hype and bad stats with equal or worse hype and worse counter-stats is not the way to do it.
What is the catchment range for new (inexperienced) drivers? 18-25? There are a lot of people who have been driving for a long time who are not good at driving at all and lack any kind of self-awareness. For example, following the car in front too closely.
I would also think that the average Tesla owner is less likely to experience constant stress, long commute hours, tiredness and possible mental health issues that can contribute to car accidents.
Drivers not paying attention in non-autopilot vehicles is an increasing problem with the prevalence of smart phones. In places where it's illegal to text and drive I don't think you're going to get out of a ticket by telling the police officer "it's okay because Tesla was driving".
I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash. But you're talking about the autopilot, and it's statistically incorrect to say it's safer than the average car. It's merely safer than a driver alone - this should be no surprise, as you'll find that cars with backup cameras and alarms don't have as many accidents while in reverse as cars without them.
> I do believe "it's still safer than the average car" because there's not a big tank of explosive - I'm most curious to hear what caused such a massive fire in this crash.
How does a Tesla get such good range? There's still a lot of energy in those batteries, and a damaged battery starts a fire far more easily than leaking fuel --- the former can self-ignite just from dissipating its own energy into an internal short, while the latter needs an ignition source. In addition, the batteries are under the entire vehicle and thus more likely to be damaged; a fuel tank has a smaller area and is concentrated in one place.
It is extremely rare for fuel tanks to explode in a crash.
I see how that is intuitively true, but it isn't really true in practice. Post crash fires with ICE are unusual, but not extremely rare. Similarly, post-repair car fires (leaky fuel lines) are not as unusual as most people think.
So far, experience with Teslas seems to show a lower-than-average risk of fires, though the breadth and nature of the battery pack make the fire itself more challenging to manage.
All cases that I'm aware of proceeded at a slow enough pace to allow evacuation of the vehicle.
It is the opposite: Li-ion/LiPo batteries are inherently dangerous and can cause a chemical fire for various reasons (overcharging, undercharging, puncture, high temperature, etc.). These things are monitored/controlled in any modern application in normal usage, but in a crash you have to remember that you are literally a few inches away from a massive store of chemical energy. The fire burns very hot, the smoke is toxic, and assuming somebody gets to you in time, it can only be extinguished reliably using special dry-powder fire extinguishers (Class D)...
That's true, but it goes against any good sense: cars don't allow you to play a video while driving, yet they are allowed to give you the false sense of security that somebody else is driving while you in fact have to pay constant attention?
This morning I saw a minivan drifting all over the road. As we passed him, my passenger noticed that he had CNN playing on a phone attached by suction cup to the middle of the windshield.
That doesn't sound like something that has been extensively studied -- it's strange to me how Tesla keeps citing [0] miles driven by "vehicles equipped with Autopilot hardware", as if it couldn't estimate the subset of miles in which Autopilot was activated -- and it seems like something very hard to measure anyway. How can Tesla or the driver know whether an accident was bound to happen if the accident was prevented?
However, I would think that testing Autopilot alone seems impractical. It's been asserted that AP has no ability to react to, or even detect, stationary objects in front of it. Can't we assume that in all those cases, a lack of driver intervention would result in a crash?
A lot of human-driven car accident victims have done nothing wrong at all.
Almost every driver thinks they're better than average.
Even when it's a stupid person dying from their stupidity, it's still a tragedy.
I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.
Agree with your point #3. If Tesla's autopilot only works in some conditions, its numbers are only comparable to human drivers under those same conditions.
>I really think data-driven analysis is the way to go. If we can get US car fatalities from 30000 a year to 29000 a year by adopting self-driving cars, that's 1000 lives saved a year.
What this ignores is that self-driving cars will by and large massively reduce the number of 'stupid' drivers dying (the ones who are texting and driving, drinking and driving, or just simply bad drivers) but may cause the death of more 'good' drivers/innocent pedestrians.
So the total number could go down, but the people who are dying instead didn't necessarily do anything deserving of an accident or death.
I say this as someone who believes self-driving cars will eventually take over and we need to pass laws allowing a certain percentage of deaths (so that one case of the software being at fault doesn't cause a company to go under), but undeserved deaths are something people will likely have to deal with somewhere down the line with self-driving cars. At the very least, since they're run by software they should never make the same mistake twice, while with humans you see the same deadly mistakes being made every day.
OK, but if saving 1000 lives a year required as a side effect that you personally be among the fatalities, would that be OK with you? I hope not. Think of this as a technical corner case; the question is the soundness of the analysis (for example, the distribution of deaths and what that means for safety) and not letting facile logic get in the way of that work.
Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
I think that’s a win, even if I now have an even statistical chance to be in the 1001 and no chance to be in the 2001.
Requiring that I be in the 1001 is not OK, no more than requiring that I donate all my organs tomorrow. Allowing that I might be in the 1001 is OK, just as registering for organ donation is.
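For what it's worth, the arithmetic of that hypothetical, plus a per-person view under the simplifying (and admittedly different) assumption that the risk falls uniformly on every road user; the population figure is invented:

    # Numbers from the hypothetical above; road_users is invented.
    saved = 2001      # people who would have died without auto-driving
    killed = 1001     # people who die only because of auto-driving
    print("net lives saved per year:", saved - killed)         # 1000

    road_users = 250_000_000
    print("chance of being in the 1001:", killed / road_users)
    print("chance of being in the 2001:", saved / road_users)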
>> Let’s imagine that auto-driving tech saves 2001 gross lives per year and kills 1001 people who wouldn’t have died in an all human driving world, for a net of 1000 lives saved.
You're saying that auto-driving would save the lives of 1000 people who would have died without it, by causing the death of another 1001 that wouldn't have died if it wasn't for auto-driving?
So you're basically exchanging the lives of the 1001 for the 1000? That looks a lot less like a no-brainer than your comment makes it sound.
Not to mention, the 1001 people who wouldn't have died if it wasn't for auto-driving would most probably prefer to not have to die. How is it that their opinion doesn't matter?
It saves 2001 (not 1000 as you said, or, put differently, I'm exchanging the lives of the 1001 to preserve the lives of the 2001).
It kills 1001.
Net lives saved = 1000.
> How is it that their opinion doesn't matter?
The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
It's a trolley problem[1]. Individual people have been killed by seatbelts, yet you probably think it's OK that we have seatbelts because many more people have been saved and/or had their injuries reduced. Individual people have been killed by airbags, yet you probably think it's OK that we have them. Many people have been killed by obesity-related mortality by shifting walkers and bikers into cars, yet you probably think it's OK that we have cars.
Right. And net lives lost = 1001. So, we've killed 1001 people to let another 1000 live. We exchanged their lives.
>> The 2001 who are saved by auto-driving were also most probably not interested in dying. How is it that their opinion doesn't matter?
Of course it matters, but they were dying already, until we intervened and killed another 1001 people with our self-driving technology.
Besides, some of the people who would be dying without self-driving technology had control of their destiny, much unlike in the (btw, very theoretical) trolley problem. Some of them probably made mistakes that cost them their lives. Some of them were obviously the victims of others' mistakes. But the people killed because of self-driving cars were all the victims of the self-driving cars' mistakes (they were never the driver).
>> Individual people have been killed by airbags, yet you probably think it's OK that we have them.
An airbag or a seatbelt can't drive out on the road and run someone over. The class of accident that airbags cause is the same kind of accident you get when you fall off a ladder etc. But the kind of accident that auto-cars cause is an accident where some intelligent agent takes action and the action causes someone else harm. An airbag is not an intelligent agent, neither is a seatbelt- but an AI car, is.
Let's change tack slightly. Say that we had a vaccine for a deadly disease and 1 million people were vaccinated with it. And let's say that out of that 1 million people, 1000 died as a side effect of the vaccine, while 2001 people avoided certain death (and let's say that we are in a position to know that with absolute certainty).
Do you think such a vaccine would be considered successful?
I guess I should clarify that when I say "considered successful" I mean: a) by the general population and b) by the medical profession.
That's not really a very good argument. If you change parameters in a complex system then the odds are that you are going to find pathologies in new places.
People claim seatbelts have caused lots of deaths, and I'm sure at least a percentage of these claims are fair ([0]). I still think it's safer to drive a car with a seatbelt rather than without.
The downside of a laissez-faire policy towards self-driving cars could fall on anyone, but so does the upside. Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random. Run over, rear-ended, T-crashed at an intersection, and could not reasonably have done anything to prevent it.
>> Again, a lot of human driven car crash victims have done nothing wrong and were victimized pretty much at random.
But all self-driving car victims (will) have done nothing wrong. Whether they were riding in the car that killed them or not, they were not in control of it, so they're not responsible for the decision that led to their deaths.
Unless the decision to go for a walk or a cycle, or to ride in a car, makes you responsible for dying in a car accident?
Designing for safety means that you take into account human behavior at every level and engineer the product to avoid those mistakes.
We already know that there is an area between driver assistance and automatic driving where people just can't keep their attention level where it needs to be. Driving is an activity that maintains attention. People can't watch the road, hand on the wheel, when nothing happens, and they can't keep their level of attention up when the car is driving itself.
The way I see it, the biggest safety sin from Tesla is shipping an experimental beta feature that sits exactly on a known weak point for humans. Adding a potentially dangerous experimental feature, warning about it, and then washing your hands of it is not good safety engineering.
The news story points out how other car manufacturers have cameras that watch the driver for signs of inattention. A driving assistant should watch both the driver and the road.
You can't have a driving assistant that can be used as an autopilot.
I agree with most of your points however I'm not convinced by your problem number 2:
>you should measure the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control, and you can decide your own level of safety. Such a diligent driver crashing because of immature software is terrible.
I think you overestimate the rationality of human beings. I commute to work by motorcycle every day and I've noticed that I tend to ride more dangerously (faster, "dirtier" passes etc...) when I'm tired, which rationally is stupid. I know that but I still instinctively do it, probably because I'm in a rush to get home or because I feel like the adrenaline keeps me awake.
This is an advantage of autonomous vehicles, they can be reasonable when I'm not. I expect that few drivers (and especially everyday commuters like me) constantly drive "slowly and with a lot of care". A good enough autonomous vehicle would.
How can anyone take a company seriously that sells features such as "Bio-Weapon Defense Mode"? It almost sounds like snake oil, not far from something like the ADE 651.
"Not only did the vehicle system completely scrub the cabin air, but in the ensuing minutes, it began to vacuum the air outside the car as well, reducing PM2.5 levels by 40%. In other words, Bioweapon Defense Mode is not a marketing statement, it is real. You can literally survive a military grade bio attack by sitting in your car."
I think they are not testing with small enough particles. In the article, they test with PM2.5 particles, which would be around 2.5 micrometers. However, if you look at the table on page 11 of
You will see that some of the biological agents can cause infection with as few as 10 particles. I doubt that the Tesla equipment could detect a concentration of 10 particles of these sizes.
This article is basically the biological equivalent of the "I can't break my own crypto" article.
>Bioweapon Defense Mode is not a marketing statement, it is real.
is false. Extraordinary claims require extraordinary evidence, and the evidence of Bioweapon Defense Mode working is entirely lacking.
HEPA filters capture particulates. PM2.5 means particles up to 2.5 micrometers in diameter. A good 0.2 - 0.3 micrometer HEPA filter is fine enough to catch bacteria like anthrax. Smallpox and influenza viruses are smaller. You need a carbon absorber to be safe.
Can you expand on this and provide evidence to support your claim? I would imagine Tesla would have thoroughly vetted this statement, however I'm curious to hear how bio-chemical weapons differ from extreme pollution (they do also mention viruses).
Scenarios, indeed. Hollywood-grade threat models - riveting yet improbable. As opposed to such mundane threats as "not driving into massive stationary objects".
While the marketing fluff might have the tone of a self-driving car, I doubt the legal material in a Tesla Model S, as it relates to 'Autopilot', has such language - it's advanced steering assist and cruise control, it's not an entirely autonomous car - but it's marketed as an autonomous car.
Functionality that can _most_ of the time drive itself without human intervention and occasionally drives itself into a divider on the highway seems like a callous disregard for human life & how such functionality will be used.
Sure, everything is avoidable, if there's some expectation it needs to be avoided.
The whole point of autopilot is to avoid focusing on the road all of the time. So if you set up circumstances under which humans perceive the functionality (autopilot) as behaving as expected most of the time, it's highly likely they'll treat it as such and will succumb to a false sense of security.
My point: when a feature is life threatening your marketing fluff shouldn't deviate significantly from your legal language.
> you should measure the autopilot against a rested, focused driver who drives slowly and with a lot of care
You raise an interesting point: accidents are not evenly distributed throughout the driver's day, or even across the population. Your likelihood of having a crash with injuries is highly correlated with whether or not you've had one before. A substantial number involve alcohol, consumed by drivers previously cited for DUIs.
We keep using average crash statistics for humans as a baseline, but that may be misleading. Some drivers may actually be increasing their risk by moving to first gen self driving technology, even while others reduce their risk.
On the other hand, we do face a real Hindenburg threat here. Zeppelins flew thousands of miles safely, and even on that disaster, many survived. But all people could think of when boarding an airship after that was Herbert Morrison's voice and flames.
I have already heard people who don't work in technology mumbling about how they think computers are far more dangerous than humans (not because of your nuanced distinction, but simply ignoring or unaware of any statistics).
I worry we're only a few high profile accidents away from total bans on driverless cars, at least in some states. Especially if every one of these is going to make great press, while almost no pure human accidents will. The availability heuristic alone will confuse people.
> 1. Ethical. It is one thing to do something stupid and die. It is another for a technology to fail in trivial circumstances that are in theory avoidable. A doctor can also make errors, but a medical device is required to be very safe and reliable in what it does.
I'm not sure I follow you here. Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?
Those two statements don't connect for me.
I suppose that's partly because I am an automation engineer, and I deal a lot with inspecting operator-assembled and machine-assembled parts. If the machine builds parts that also pass the requirements, it's good.
It's nice if it's faster or more consistent, and sure we can build machines that produce parts with unnecessarily tight tolerances, but not meeting those possibilities doesn't feel like an ethical failing to me.
> Are you saying that because humans are known to be fallible, but technology can be made nigh-infallible, that technology should be held to a higher ethical standard?
Yes, not infallible, but I believe that to put our lives in the hands of machines, the technology must be at least on par with the best drivers. Being better than average but, for instance, more fallible than a good driver is IMHO still not a good enough standard to make it ethically OK to sell self-driving cars, even if you get 5% fewer deaths per year compared to everybody driving while, like, writing text messages on their phones. If instead the machine is even a bit better than a high-standard driver who drives with care, at that point it makes sense, because you no longer care about how the deaths are distributed based on behavior.
If a survey is conducted in almost any part of the world, I'd guess that most people (including me) would prefer to be hit by a human rather than an autonomous car. I'm not sure why this is, but having someone to blame and the fact that being a victim of an emotionless lifeless machine is subjectively way more horrifying than being a victim of a person are some that I can think of.
What about the fact that you cannot do anything useful other than listen to podcasts / music while driving? Btw, your point sounds to me like public transport just needs to get better, not that it is a bad idea in general to move towards this model.
> What about the fact that you cannot do anything useful other than listen to podcasts / music while driving?
To be fair I rarely was doing anything useful on my 40 min train commute either :) Mostly reading Hacked News on my phone. Now I'm at least looking into the distance, taking some strain from the eyes.
I totally agree that public transport has to get better, it's just that there always has to be a mixture of transportation options.
I'm sure that's a big contributor. I myself relied solely on public transport for the years we lived close to the subway. Now that we've moved outside of the city, driving makes more sense.
What makes the "self-driving" functionality that all the other brands are marketing different, though? Is it only that Tesla was first? Volvo literally calls their technology Autopilot too.
Saying “When you drive you are in control” is fine, but you’re not always the only person impacted when you crash.
As we have no control over how others drive, statistics are more relevant.
As a pedestrian, I care about cars being on average less likely to kill me. If it means I'm safer, I would rather the driver wasn't in control of their own safety.
The ethical solution is the one with the least overall harm.
Of course being safer overall while taking control from the driver is unlikely to drive sales.
> Wrong comparisons: you should measure the autopilot against a rested, focused driver who drives slowly and with a lot of care. Otherwise the statistics do not account for the fact that when you drive, you are in control, and you can decide your own level of safety. Such a diligent driver crashing because of immature software is terrible.
Do these car companies test their software by just leaving it on all the time in a passive mode and measuring the deviance from the human driver?
I'd think that, at least in the MV case, you'd see a higher incidence of lane-following deviation at this spot, and it would warrant an investigation.
Something like this isn't easy but for a company as ambitious as Tesla it doesn't sound unreasonable. Such a crash with such an easy repro should have been known about a long long time ago.
4. These statistics must be conditioned. Not all drivers are the same: some are careful, some are not. So in fact, the careful drivers would make themselves worse off by using autopilot.
> A product that kinda drives your car but sometimes fails, and requires you to pay attention
So, cruise control? If people got confused and thought that cruise control was more than it really was in the 80s, or whenever it came out, what would we have done?
That is a long description of a problem that is much deeper than just marketing, IMO. The biggest issue that I see is that the AutoPilot system does not have particularly strong alertness monitoring.
I have now talked with two people who have Autopilot in their Model S's, and both said the problem with Autopilot is that it is "too good". Specifically, it works completely correctly like 80-90% of the time, and as a result, if you use it a lot, you start to expect it to be working and not fail. If it then beeps to tell you to take over, you have to mentally regain situational awareness of the road and then decide what to do about it. If that lag is longer than the time the car has given you to respond, you would likely crash.
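A rough way to see how quickly that goes wrong (a toy model; every number below is invented): compare the fixed warning lead time the car gives against a long-tailed distribution of how long a disengaged driver takes to regain awareness.

    # Toy handover model, invented numbers: crash assumed whenever the
    # driver's re-engagement lag exceeds the warning lead time.
    import random

    random.seed(0)
    lead_time_s = 3.0          # warning given this long before the hazard
    takeovers = 100_000

    late = 0
    for _ in range(takeovers):
        # re-engagement latency: mostly quick, occasionally very slow
        lag = random.lognormvariate(0.5, 0.6)   # median ~1.6 s, long tail
        if lag > lead_time_s:
            late += 1

    # With these assumptions, roughly one takeover in six comes too late.
    print(f"fraction of takeovers too slow: {late / takeovers:.1%}")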
Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
Relevant quote from an article about the Air France 447 crash:
> To put it briefly, automation has made it more and more unlikely that ordinary airline pilots will ever have to face a raw crisis in flight—but also more and more unlikely that they will be able to cope with such a crisis if one arises.
Airline pilot training is designed to handle this, and pilots also have a co-pilot who keeps them from being distracted and can take over if they are.
Tesla is just handing it out to anyone who can afford to buy a Model S/X (and now Model 3) with the absolute minimum of warnings that they can get away with.
I would not be surprised if Musk coldly responded that this is a known problem with no solution, but autopilot still lowers the death rate on average. In other words, for the proponents it's a trade-off.
And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers. Which of course will snowball as reliance on automation keeps making drivers worse rather than better, and at some point you're past the point of no return.
A high rate of road deaths isn't a fait accompli. Musk would have us believe that technology is the only answer.
The UK has the 2nd-lowest rate of road deaths in the world (after Sweden).
The roads in the UK are not intrinsically safe, they are very narrow both in urban and rural areas which means there are more hazards and less time to avoid them.
However, the UK has a strict driver education programme. It is not easy to pass the driving test, with some people failing multiple times. It means that people only get a license when they are ready for it. Drink-driving will also get you a prison sentence and a driving ban.
Just a note. Switzerland ranks better than the UK. By inhabitants: Switzerland (2.6), Sweden (2.8) and UK (2.9). By motor vehicles: Switzerland (3.6), Finland (4.4), Sweden (4.7) and UK (5.1). By number of kilometres driven: Sweden (3.5), Switzerland (3.6) and UK (3.6).
I'd also note that most European countries are hot on the heels of the UK, Sweden and Switzerland by the above measures. By comparison, the US numbers are 10.6, 12.9 and 7.1, respectively. Most European countries are well below those numbers.
Particularly in Western European and Nordic countries, the driving tests are very strict. Even for all the stereotypes, France's numbers of 5.1, 7.6 and 5.8 are quite good, and they are moving in the right direction.
As someone who has been caught speeding, it's also worth mentioning that one of the big reasons the UK has improved its road safety statistics is a reasonably new initiative where, on your first offence, you get the option to either take the points on your license or attend a safety workshop.
IIRC, the workshop was about three hours, but it was surprisingly useful. The instructors treated you like adults and not children or criminals, and they gave fairly useful tips on driving and looking out for things like lights suddenly changing, ensuring you are in the right gear, how you're supposed to react if an emergency vehicle wants you to go forward when you're by a set of traffic lights with a camera, etc.
However, on the drink driving front, given the news with Ant from Ant and Dec I think it's safe to assume that not everyone gets a prison sentence for drink driving.
Out of curiosity, how are you supposed to react if an emergency vehicle wants you to go forward when you're by a set of traffic lights with a camera?
I would think to look carefully in all directions and, if visibility allows, pass the red light, then contest the fine with an "emergency vehicle passing through" defence. But what is the official position?
I am not sure the UK has traffic light cameras, but they do exist in some places in Germany. And the official position in Germany, and in most of Europe I think, is that emergency vehicle decisions trump everything else. If a police officer directs you to do something that would break the law, then you should do it, as a police officer's decision trumps regular traffic laws.
At least, that's how it works in Germany and Denmark. But I don't think Denmark has traffic light cameras. I've never seen them anyway. But I've seen them in Germany.
Of course, this is assuming you don't actually cross the entire junction, but rather just move out into the junction, so the emergency vehicle can get through.
Yep, that's what we were told. It doesn't matter if you're doing the right thing by getting out of someone's way, you'll get a fine/points if you cross the line.
Although, if you are at a set of traffic lights and an emergency vehicle tries to get you to cross the line, what you should do is write down the registration plate and contact the relevant service to report the driver. The instructor on this course was ex-police, and according to him police, paramedics, and firefighters in the UK are taught to not do this under any circumstance, and if they are caught trying to persuade someone to cross a red traffic light then they can get in a lot of trouble.
The only case that trumps a red traffic light is when given a signal by an authorised person (e.g., police officers, traffic officers, etc).
I think that under a literal interpretation of the law you are obliged to commit an offence if you are beckoned on across a stop line at a red traffic light: you can either refuse the instruction to be beckoned on (an offence) or you can cross the stop line (an offence). That said, in practice the beckoning often takes precedence over the lights.
Basically the only time you see any police officer instructing traffic from a vehicle is when on a motorbike, typically when they're part of an escort.
That's the way I think it works - I had to do that at a set of lights I thought had a camera (turns out it can't have been on, as nothing came of it), but I quickly weighed up "several hours of BS arguing it for me" vs "someone might die".
Police cars will have dash cams, not sure on ambulances or fire engines.
That being said, scariest thing I did on the road was going through a red light to let an ambulance through at a motorway off-ramp. You better hope everyone else has heard those sirens.
Drink driving rarely attracts a prison sentence. In the vast majority of cases it attracts a driving ban along with a significant fine. The sentencing guidelines have imprisonment as an option for blowing over 120 where the limit is 35 micrograms of alcohol per 100 ml of breath (in England and Wales; it is lower in Scotland now).
The UK went through a major cultural change relating to drink driving several decades ago; it isn't viewed as acceptable, and the police get tip-offs on a regular basis.
It's not too common to head to prison for a single DD incident. It's also worth noting that England & Wales and Scotland have different drink-driving laws.
In Scotland, the BAC limit is lower than in England and the punishment is a 12 month driving ban and fine for being over the limit - no grey areas or points or getting away with it.
In England a fine and penalty points are common, repeat offenders can be suspended and jailed. The severity of the punishment can often depend on how far over the limit you are and other factors.
> However, on the drink driving front, given the news with Ant from Ant and Dec I think it's safe to assume that not everyone gets a prison sentence for drink driving.
Nope, I think his court case has been moved back. The court wouldn't say why, but it's believed to be because they want him to ensure he gets the most out of his time back in rehab.
Other innovations include an off-road "hazard perception test" that I'd be pleasantly surprised if derivatives of self-driving software could reliably pass.
> The roads in the UK are not intrinsically safe, they are very narrow both in urban and rural areas which means there are more hazards and less time to avoid them.
Paradoxically, that actually makes them safer. People drive slower on narrower roads, which means that accidents are within the safe energy envelope that modern cars can absorb.
Very, very few people will ever die as a passenger or driver in a car accident at 25 mph / 40 kph. At 65mph / 100kph, the story is completely different.
You say that but people will happily drive at 50+ down a narrow country road. I think the "narrow = slower" only works for a limited period of time before people get normalised to it.
Had to thread a van through a temporary concrete width restriction the other day - when it's that narrow, even the Uber behind me wasn't giving me grief for going that slowly!
The country roads one has always dumbfounded me though - why some of those have national speed limits I will never know.
As far as I'm aware, they're national speed limits because they don't have the resources to work out the limit, or to police them. I learnt on country roads, and my instructor was very clear that although I could go at 60mph, I should drive to the conditions of the road.
Growing up driving on country roads in the UK, you learn some tricks (dumb tricks you shouldn't do). One example: at night you can take corners more quickly by driving on the wrong side of the road, because if you can't see another car's headlights, then there are none coming.
The thought of doing this now scares me and I don't do this and suggest that no one else does either. But I know many people still drive like this.
> The country roads one has always dumbfounded me though - why some of those have national speed limits I will never know.
Why not? Even on roads with lower speed limits you're required to drive at a speed appropriate for the road, the conditions, and your vehicle; the speed limit merely sets an upper bound, and it's not really relevant whether it's achievable. Just look at the Isle of Man, where there is no national speed limit: most roads outside of towns have no limit at all, regardless of whether they're a narrow single-lane road or one of the largest roads on the island.
If you set a limit, some people will drive it regardless. Even if you're supposed to drive to the road and conditions, there are enough utter morons out there who'll take a blind narrow corner at 60.
> It is not easy to pass the driving test, with some people failing multiple times. It means that people only get a license when they are ready for it.
And that's the way it should be. The driving test may not be easy, but it's not any more difficult than driving is. People should be held to a high standard when controlling high speed hunks of metal.
At the moment Tesla haven't shown that auto-pilot does lower the death rate. They've only released statistics about auto-pilot enabled cars rather than statistics for when auto-pilot was in control.
Any statistics released by Tesla should be compared against similar statistics from say modern Audis with lane-assist and collision detection.
Also, the cohort that purchases Tesla vehicles may be a lower-risk group of drivers than average.
This could happen because Tesla vehicles are more expensive than comparable conventional vehicles, less attractive to those with risky lifestyles, inconvenient for people who don't have regular driving patterns that allow charging to be planned, or more attractive to older consumers who wish to signal different markers of status than the young go-fast crowd.
You'd possibly want to compare versus non-auto-pilot Tesla drivers on the same roads in similar conditions, but the problem remains that the situations where auto-pilot is engaged may be different from those when the driver maintains control.
In sum, it's hard to mitigate the potential confounding variables.
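To make that comparison concrete, here's a toy sketch of the stratified, per-mile calculation people keep asking for; the column names and structure are my own invention, and even this would leave the driver-level confounders above untouched:

    from collections import defaultdict

    def stratified_crash_rates(trips):
        """trips: dicts with 'road_type', 'autopilot_on' (bool), 'miles', 'crashes'
        (hypothetical schema). Returns crashes per million miles for each cell."""
        miles, crashes = defaultdict(float), defaultdict(int)
        for t in trips:
            key = (t["road_type"], t["autopilot_on"])
            miles[key] += t["miles"]
            crashes[key] += t["crashes"]
        return {key: 1e6 * crashes[key] / miles[key] for key in miles if miles[key] > 0}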
> Also, the cohort that purchases Tesla vehicles may be a lower-risk group of drivers than average.
Yes - that's why I suggest comparing it to modern Audi drivers. Basically any of the BMW/Mercedes/Audi drivers are where Tesla is getting most of its customers. Those companies all have similar albeit less extensive safety systems.
That's because all Autopilot-enabled cars have the safety features enabled by default: automatic emergency braking, side collision avoidance, lane detection, etc.
Those systems are the great ones at the moment, and almost all modern cars have them.
They're brilliant because they augment the human: the human does all the driving and stays focused on it, and the system only takes over when the driver gets distracted and is about to hit something.
Autopilot does it the opposite way round: it does the driving, but not as well as the human, and then the human can't stay as alert as the car, so isn't ready to take over.
Actually despite your skepticism Tesla's PR spin has already beaten you.
"but autopilot still lowers the death rate on average"
That's not what they said, they said the death rate was lower than the average. And yet you can't help hearing that it lowered the death rate. I think it's very likely turning on autopilot massively increases the rate of death for Tesla drivers, but they've managed to deflect from that so skilfully.
The comment above you supposes that people who drive teslas have fewer accidents on average, even without autopilot.
Saying that autopilot "lowers the average" would mean that autopilot lowers the number of accidents for Tesla drivers, while "lower than average" could mean that while a Tesla with autopilot is safer than the average car, it is less safe than a Tesla without autopilot.
Pretty complicated
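A toy numeric example (all three rates invented) of how both statements can be true at once:

    all_drivers   = 1.20  # fatalities per 100M miles, all cars and drivers (invented)
    tesla_no_ap   = 0.50  # Tesla drivers with Autopilot off (invented)
    tesla_with_ap = 0.90  # Tesla drivers with Autopilot on (invented)

    print(tesla_with_ap < all_drivers)  # True  -> "lower than the average"
    print(tesla_with_ap < tesla_no_ap)  # False -> it did not lower the rate for
                                        #          the population that matters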
I don't think that's the correct answer. Flight autopilots have lowered accident rates tenfold, but every time there is a crash while on autopilot Boeing/Airbus will ground every single plane of that type and won't allow them to fly until the problem is found and fixed. If I know that my car's autopilot is statistically less likely to kill me than I am to kill myself, I would still much much rather drive myself.
If Tesla were to respond in a similar vein - that is turn off auto-pilot each time there was a fatal crash until the cause was fully investigated and fixed then I'd feel a lot more comfortable.
From the videos in the article it's clear that auto-pilot should be disabled when there is bright, low-angle sunshine. Tesla should be prepared to tell customers there are certain times of the day when the reliability of the software is not high enough, and turn it off.
These are all 'beta' testers after all so they shouldn't complain too much.
> Tesla should be prepared to tell customers there's certain times of the day when the reliability of the software is not high enough and turn it off.
Imho, they should detect that situation (using cameras and/or the current time + GPS) and not allow you to switch it on. You should not give drivers the choice between safety and laziness. (This assumes that auto-pilot driving, where currently feasible, is SAFER than manual -- which I had assumed, but which, as I read elsewhere in the thread, has not yet been properly demonstrated.)
That's really not how it works in aviation. Aircraft manufacturers and the FAA don't automatically ground every plane when a crash occurs when autopilot was in use. They conduct an investigation first. And even if the investigation finds an autopilot fault they're likely to just issue an airworthiness directive warning pilots about the failure mode rather than immediately grounding everything.
> but autopilot still lowers the death rate on average.
I think the question of using self-driving vehicles comes down to this: do you want to be part of a statistic that you can control, or part of one that you cannot?
When we travel by train or plane, we are already part of statistics that we cannot control. But those systems are also extremely reliable.
So there seems to be a threshold at which it makes sense to opt into a statistic you cannot control.
The people pushing SDVs seem, by some sleight of hand, to hide this aspect. They have successfully showcased raw (projected) statistics that implicitly assume a certain rate of progress, and assume that SDVs will keep progressing until they reach that capability.
> And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers.
But there is always a risk of catastrophic regressions [1] with each update, right?
Well, even if you are in control with your hands on the wheel, you can still get hurt (and even die) in a car accident where you have no responsibility at all. Sure, you can control your car, but you cannot control others'...
(this is even more obvious from a cyclist point of view)
Discussions of driving among lay people are generally useless and interminable, due largely to the supercharged Dunning-Kruger effect that driving for whatever reason produces. No matter where you are or who's talking, the speaker is always a good, careful driver, other drivers are reckless morons, and the city in which they all live has the worst drivers in the country.
Everyone everywhere says the same thing. It's information-free discourse.
I am sorry, but not having control over a lot of the stuff that happens in the world does not mean that you have no influence on whether you will be in an accident, if you are driving the car yourself.
With half-baked self-driving tech, you have absolutely no control.
> And also the idea that an automatic driver is easier to improve over time (skewing the trade-off better and better) than human drivers. Which of course will snowball when reliance on automation keeps making drivers worse than better, and at some point you're past the point of no return.
I really hope it does get to the point where all drivers are automatic and interconnected. Cars could cooperate to a much higher extent and traffic could potentially be much more efficient and safe.
"Hello, fellow robot car, I am also a robot car, not a wifi pineapple notatall nooosir, and all lanes ahead are blocked. But hey, there's a shortcut, just turn sharp to the right. Yes, your map says you're on a bridge, but don't worry, I have just driven through there, trust me." Nothing could possibly go wrong and the idea of evil actors in such a network is absurd - right? Your idea reminds me of the 1990s Internet - cf today's, where every other node is potentially malicious.
There's no reason such a system would have to be engineered to be totally trusting. The car, in that situation, would likely be set up to say "nope, my sensors say that's not safe" and hand off to the driver or reject the input.
Use it just for additional input to err on the side of safety. If the car ahead says "hey, I'm braking at maximum" and my car's sensors show it's starting to slow down, I can apply more immediate braking power than if I'd just detected the initial slight slowdown.
Or, "I'm a car, don't hit me!" pings might help where my car thinks it's a radar artifact, like the Tesla crash last year where it hit the semi crossing the highway.
That last part is actually scariest: blame the victim, yay. Unless they can prove they had a functional, powered, up to date, compatible I'm-a-car responder, it's their own damn fault for being invisible to the SDV. How about "I'm a human, don't hit me"? Does that also sound like a good idea (RIP Elaine Herzberg)?
In other words, "all other road users should accommodate my needs just to make life a bit easier for me" is a terrible idea.
I think that's a needlessly uncharitable interpretation.
Having cars communicate information to each other has the potential to be an additional safety measure. It's like adding reflectors to bikes - adding them wasn't victim blaming, it was just an additional thing that could be added to reduce accidents.
Sure, I understand that you're proposing it as an improvement, and it would even be an improvement - but using this for scapegoating will happen, as long as there are multiple parties to any accident; we have seen this in the last Uber crash ("find anything pointing anywhere but at Uber"), or in any bike crash ("yeah, the truck smashed into him at 60 MPH and spread him over two city blocks, but he's at fault for not wearing a helmet - it would have saved him!!!").
Also, what's to stop evil actors right now? There is a road near me with the bridge out. Anybody could just go remove the warning signs. Why don't they?
> I really hope it does get to the point where all drivers are automatic and interconnected. Cars could cooperate to a much higher extent and traffic could potentially be much more efficient and safe.
I think you can draw some conclusions from the current software industry by seeing how many defects are deployed daily. Tesla and Uber ship the same kinds of defects as any SV startup, not like NASA; having these startups control all the cars on the road sounds like a terrible idea.
> Airline pilot training is designed to handle this
It is certainly designed to do that, but even for airline pilots there are limits to what is possible.
You cannot train a human to react within 1ms; that's just physiologically impossible. Nor can you train a human to fully comprehend a situation within x ms, where x depends on the situation.
So the autopilot would have to warn the human, say, 2x ms before an event that requires attention. But it cannot yet know about the event at that point, so that amounts to warning "any time there could possibly be an event 2x ms in the projected future" -- which is probably most of the time, making the autopilot useless.
The other big difference is that in a plane high up in the sky and relatively far away from any others, even if the autopilot demands the human take over, there is still a lot of time to react. Many seconds to even minutes, depending on the situation.
In a car, the requirement is fractions of a second.
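A rough back-of-the-envelope illustration, using my own assumed numbers rather than anything from the thread:

    speed_mph  = 70
    speed_m_s  = speed_mph * 1609.34 / 3600  # roughly 31 m/s
    reengage_s = 2.5                         # assumed time to rebuild situational awareness

    print(round(speed_m_s * reengage_s, 1), "metres travelled before the human can act")
    # ~78 m at highway speed; a cruising airliner has minutes of margin, a car
    # often has less road than that in front of it.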
I think you misunderstand the GP - he is stating that - even if pilots with their rigorous training can make such disastrous mistakes, with car autopilots it will be way worse.
AF447 really was not about automation. The plane’s controls have a crazy user interface. Most of the time you can pull on the stick as hard as you want and the plane won’t stall, but that feature was disabled due to some weird circumstances. This would be like if sometimes a car’s accelerator would cut out before you rear-ended a car in front of you but 1% of the time that feature was off. As an added bit of UI failure, once they were stalling badly enough, the stall alarm actually turned itself off, which may have made the first officer think he was doing the right thing. This was a UI failure.
And in the current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology", I think machine-driven cars will have a hard time being accepted. Imagine the autopilot is 100 times safer than human drivers. Full implementation would mean about a fatality a day in the US. It does not seem this would be acceptable to the public even though 30,000 annual human car fatalities could be avoided. Maybe we should use our education system to help heal us from the safety-first (do nothing) culture we seem to have gotten ourselves into in the US.
This is a similar situation to nuclear power. With nuclear, all your waste can be kept and stored and nuclear has almost zero deaths per year. Contrast to the crazy abomination that is coal burning.
The same goes for nuclear power plants, plane crashes, terrorism, school shootings, even some rare diseases (mad cow disease led to the slaughter of millions of cattle while having killed fewer than 200 people cumulatively in the UK, whereas the flu can easily kill 10,000 people in a single year in the UK alone).
I don't think it has anything to do with technology and automation. People are just very bad at reasoning with probabilities. This is why lotteries are so popular. And it is fuelled by unscrupulous (and even less apt at grasping probabilities) media trying to make money out of sensationalist headlines. How many times a week am I told that x will cause the end of the world? Replace x with flu pandemics, Ebola, global warming, cancer as a result of eating meat or using smartphones, etc.
Funny, Nassim Taleb, a worldwide (if controversial) risk probability expert, would say that you are the one who is being very bad at reasoning with probabilities, because you are putting Ebola and terrorism in the same class of risk as falling off a ladder.
I would agree about the flu (which is a fat-tail risk, e.g. the Spanish flu), but not terrorism (these are small, local events) or Ebola (Ebola has a very low risk of transmission; it is just very deadly once transmitted).
But classifying risks is one thing; worrying in your daily life to the point of opposing a public policy is another. An asteroid wiping out life on Earth is an even fatter-tail risk, but the probability is so low that it is not worth worrying about. People have no rational reason to worry about Ebola, terrorism, or plane crashes.
Alcohol-impaired driving accounted for 28% of US traffic deaths in 2016 according to NHTSA [0]: 10,497 deaths.
Certainly auto-pilot is worth it if the driver is drunk. I wish that all cars were fitted with alcohol measuring devices so that the cars won't start if you're over the limit.
Drink driving is still a massive problem in Belgium where I live. Although you get severely fined (thousands of euros) it's up to a judge to decide if you should have your licence taken away [1]. Typically you have to sit by the side of the road for a few hours then you're allowed to continue. In the UK it is a minimum 1 year ban.
Good point. I wonder whether the ratio of drunks behind autopilot wheels is different than average. Could be lower (because richer) or higher (because richer).
There could easily be more than just a wealth differential, e.g. people who are more aware of their emissions impact might also be more aware of intoxication impact.
In any case, "statistically more safe" is a weak argument; e.g. we would be terrified of boarding a plane if it were merely statistically safer than driving (by a small margin).
What that commenter is getting at is that the term "statistically more safe" is meaningless because it can be such a wide net of meaning, e.g. a very small margin (1%) or a large margin (50%).
Currently it is probably worse than the average driver. But here is a trolley problem for you. What is the ratio of people killed who volunteer to test out an auto-drive to the number of lives saved by that early testing? 1:1, 1:10, 1:100, 1:1000, 1:1,000,000? A million people die in vehicle wrecks each year.
The problem is that it's not just the drivers who are the volunteers getting killed. When you kill pedestrians as Uber did, they did not volunteer to be killed to test your software.
Yes automation is good, but Tesla is being needlessly reckless. They can easily be much more strict with the settings for auto-pilot but they're not.
If you are talking about the world a large part of that million are killed on the roads where no self-driving car would be able to drive anyway.
Also "normal" drivers do not usually get killed in crashes—unless it involves reckless driving. So probably self-driving is safer than drunk driving, reckless driving or driving on the roads with the extremely poor infrastructure, but compared to responsible driving?
Also, why is this so black-and-white? The safest current option is technology-assisted driving, but there is not much talk about it.
I'm just throwing something at the wall here: just start with a purely assistive package like most big manufacturers have. Let buyers opt in to a test program (giving them, say, a 3% reduction on the list price) where all their inputs and outputs are recorded and sent to the manufacturer. The manufacturer can now assimilate that data into a model of car and road, test their fully autonomous software against that model, and search for situations where the software's output differed sharply from the human driver's. Check these situations in an automated way and/or manually to see whether each is a software error or an improvement. IMO this approach would have had a high chance of finding the bugs that caused the two recent fatalities, as I'm convinced that most human drivers would have done better there.
After years of doing this (which seems to be close to what Waymo is doing, if I understand correctly), the autonomous software should be way better than what is being pushed out now by Tesla and Uber (and probably a bunch of others).
Considering that a Tesla with Autopilot enabled is less safe than a Tesla with Autopilot disabled, I'm going to go with "no, if you control for those factors it doesn't reduce fatalities".
Anyone driving with it for an hour will probably encounter a situation they have to take over. Seems pretty obvious. It's a cool tech demo with bugs that can kill you.
That proves that a Tesla is dangerous if you enable the autopilot and then take a nap. It doesn't demonstrate that it's more dangerous if used as intended.
I'm not reading a position in what you've said, but there is an observation here to draw out for the crowd. It doesn't matter all that much if after controlling for those variables it is relatively less safe; in absolute terms it just needs to be as safe as the worst human drivers.
That is, we have an accepted safety standard - by definition, the worst human driver with a license. If a Tesla is safer than that, the rest is theoretically preferences and economics.
I'm not saying that the regulators or consumers will accept that logic - if airlines are any example to go by, they'll take the opportunity to push standards higher - but I think the point is interesting and important. It is easy to smother progress if we don't acknowledge that the world is a dangerous place and everything has elements of risk.
I don't agree with this position. Autopilot is a new thing that governments need to decide whether it's allowed or not (since it is operating dangerous equipment, so the state has an interest in doing the right thing). If improved safety is not a demonstrably valid argument for allowing it, it IMO doesn't pass the bar for what should be allowed. At that point it's purely a money-grabber from corporations that want the cookie (first to market with fully autonomous vehicles) without paying for it (heavy investment in testing).
Since it would be one piece of software being used many times over, I'd be fine if it could pass a large number of driving tests with a very low failure rate, considering driving tests are quite random in how they are applied. The question, however, is which driving test. My experience in Switzerland was that the practical driving test was quite hard to pass; I had to train a lot to satisfy them completely, and they route you for one hour quite randomly around a European medieval town and its outskirts, i.e. a place that's quite hard to navigate correctly as it's not built for cars.
I think the issue with nuclear power is more the size of the blast radius. For example, the U.S. population within 50 miles of the Indian Point Energy Center is >17,000,000, including New York City [1]. The blast radius of a self-driving car is a few dozen people at most. It's not evident that nuclear technology is uniformly better than solar or wind, but it's probably expected-value positive.
> And in the current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology", I think machine driving cars will have a hard time being accepted.
Exactly this. People will not accept being hurt in accidents involving AI, but will accept being in accidents caused by human error. There is no way this will change, and car manufacturers should also have realized this by now.
As far as I can see, the only viable solution to the "not quite an autopilot" problem that Tesla (and maybe others) have created for themselves is this: just make the car sound an alarm and slow down as soon as the driver takes his eyes off the road or his hands off the wheel. The first car that doesn't do this should be one that is so good it doesn't have a driver-fallback requirement at all - which I think is two decades out.
> It does not seem this would be acceptable by the public even though 30,000 annual human car fatalities could be avoided.
The flawed assumption behind this thinking is that all accidents have "accidental" factors. Alcohol and drugs and the poor decisions of young men are _huge_ components of that statistic. You also have to consider that motorcycles make up a not insignificant portion (12%) of those numbers.
It also completely ignores the pedestrian component (16%), which is due to a huge number of factors, of which driver attentiveness is only one. So many pedestrians die, at night, on the sides of class B highways that it's silly.
EDIT: To add, Texas has more fatalities on the roadway than California. Not per capita, _total_. This is clearly a multi-component problem.
And you also have older cars with fewer safety features; not sure how it is in the US, but in other countries you can still see old cars that are missing airbags.
So in fatalities statistics the fair thing is to compare Tesla with similar cars.
That's a good point. Not just airbags. It is starting to be common for cars to apply the brakes automatically when they anticipate a collision. I think some research has been done on designing cars to do less damage to a pedestrian in a collision; I don't know if it has led to design changes. Self-driving cars may end up being only marginally safer than other new cars.
But safety is not, in my opinion, the main rationale for self-driving cars. Convenience and maximising traffic throughput are.
I think the convenience will mean more traffic, since many who now use public transport will use their self-driving car instead. So in total, autopilots could be worse for the environment and require even more roads.
I agree with the traffic increasing, though as we are moving toward electric cars, not sure this is an environment problem. At least not an environment problem in big cities.
Electric cars still impose a huge environmental cost since they need to be manufactured in the first place. And the batteries need rare earths which are, well, rare.
While there is ongoing research into novel types of batteries made from more common materials, I wouldn't be surprised if the next war is about lithium instead of oil.
Agree, I am not convinced it is a net benefit for the overall environment. Plus they will consume more energy because of inefficiencies in transport and storage.
But there is still an immense environmental benefit: they will pollute in places (industrial areas, mines) that are not places where people live (big cities). So the population will benefit a lot from moving where the pollution takes place. And it's not just air. Noise pollution, dirty buildings, etc.
> Plus they will consume more energy because of inefficiencies in transport and storage.
Is that so? The larger engines in fossil power plants are more efficient than a fleet of small ICEs, but I don't know how this added efficiency compares against those inefficiencies along the way that you mentioned (and of course, a complete picture also needs to take into account the energy cost of distributing gas to cars).
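For a very rough sense of scale, here is a toy version of that comparison; every efficiency figure is an assumption of mine, and real numbers vary a lot by plant, grid and vehicle:

    plant    = 0.50  # fossil power plant (combined cycle can do better)
    grid     = 0.93  # transmission and distribution
    charging = 0.90  # charger plus battery round trip
    ev_drive = 0.85  # battery to wheel

    ev_chain  = plant * grid * charging * ev_drive  # ~0.36 overall
    ice_chain = 0.25                                # typical tank-to-wheel for an ICE

    print(round(ev_chain, 2), ice_chain)
    # Neither side of this toy comparison includes extracting, refining, or
    # distributing the fuel in the first place.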
If we do not push for safety, then manufacturers will invent some bad statistic and cheap out on sensors and engineering once they clear that bar.
The traffic issue won't actually be solved by this kind of self-driving car; you would need new infrastructure, like modern metros and trains.
I don't think the dream of fast-moving self-driving cars travelling like a swarm is possible without a huge infrastructure change, and I think such a swarm would need a central point of control.
You can handle more traffic without material changes to infrastructure if you have no cars parked in the street (particularly in Europe, where streets are typically narrow), better traffic flow management (coordinating self-driving cars), variable speed limits, etc.
What is the minimum time you'd guess it would take to have a city with only self-driven cars that could just drive by following lines and other specific signals?
I do not think this will happen in an existing city for at least 20 years.
I can imagine scenarios where a glitch or something else could cause tons of traffic issues in a city with only self driving cars.
I am not against self-driving cars; I don't like the fact that these startups arrived and pushed alpha-quality stuff onto public roads. It will create a bad image for the entire field.
If all newly produced cars are self-driving cars in 10 years, I think it is reasonable to ban non self-driving cars 10 years after that. So 20 years sounds about right. It's not that long for such a fundamental change.
And yes, it will create its own problems. If these cars get hacked, they could cause serious damage.
So if you think the timeline is 20 years at best, then present self-driving cars need to be able to drive on present roads: with human-driven cars, pedestrians, bikes, holes in the road, badly marked roads, roads without markings, a bit of snow, heavy rain, fog.
Adding extra markings or special electronic markings for these cars is not a solution, and in those 20 years, since the traffic is mixed, you can't lift the speed limits or change the intersection rules, so I don't see how it helps traffic (except maybe on small streets, if you assume people won't want to own their own car and will use any random car -- those shared cars would have to be super clean and cheap to make people give up ownership).
Or maybe it's having a population with no hard challenges in their youth, like learning to drive and surviving; that seems to lead to people incapable of handling adulthood or developing judgement about risk.
> current climate of "all new things must be super extra safe to be used, even if it displaces much worse technology"
The "current climate" is a backlash to the cavalier "move fast and break things" externality-disregarding culture of the last 10 years or so. We should be extremely conservative when it comes to tools that can potentially maim or kill other people. Not valuing human life should be an aberration.
No it is not. But might be someday. Will that be good enough? My dad would spend 100k on a car that was 10 times less safe, if it let him keep using a car. I'm sure others would as well.
The proper way to prove "it's 100 times more safe" isn't to let it cause some number of deaths and then go "welp, we tried our best but we were wrong, turns out it's less safe. Shucks". But that's exactly what Tesla, Uber, etc. all seem to be doing. "We'll compare our statistics once the death tallies are in and we'll see which is safer".
The most we have to go on for a rough approximation of safety is the nebulous and ill-defined "disengagements" in the public CA reports. From what I can tell, there's no strong algorithmic or safety analysis of these self-driving systems at all.
The climate about these things is sour because the self-driving car technology companies seem to want to spin the narrative and blame anybody but themselves for the deaths they were causing, and just praying they'll cause less of them once this tech goes global.
For clarity on this point, "disengagement" has a specific meaning to the California DMV[0]:
> For the purposes of this section, “disengagement” means a deactivation of the autonomous mode when a failure of the autonomous technology is detected or when the safe operation of the vehicle requires that the autonomous vehicle test driver disengage the autonomous mode and take immediate manual control of the vehicle.
However, some self-driving car manufacturers have been testing the rules quite a lot by choosing which disengagements to report[1]. Waymo reportedly "runs a simulation" to figure out whether to include the disengagement in its report, but there's no mention of what the simulation is or how it might fail in similar ways the technology inside the car did! Thus, the numbers in the reports are likely deflated from being actually every single disengagement.
And even this pathetic and toothless regulation was enough to drive Uber from California for a while.
With all the games they play with the numbers, Waymo still reports 63 safety-related disengagements for a mere 352,000 miles. This doesn't sound like an acceptable level of safety.
The surprising part is that it appears that Waymo is planning to start deploying their system in 2018. How can they even consider it with this amount of disengagements?
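Just doing the arithmetic on the figures quoted above:

    miles          = 352_000
    disengagements = 63

    print(round(miles / disengagements))  # ~5,587 miles per safety-related disengagement
    # A disengagement is not a crash, let alone a fatality, so this is a rough
    # gauge of maturity rather than a direct safety comparison.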
Inexperienced drivers cannot be avoided, but many states try to mitigate the risk by placing limits on teenage drivers.
But my point is that whether to allow a vehicle that is not safer than a reasonable human driver should be not left to the car owner alone - there are other stakeholders whose interests must be taken into account.
You're arguing from a horrible stance. Autopilot isn't 100 times safer than human drivers; that is why people are concerned about safety.
It’s like if you said “I don’t get why people are concerned with every kid bringing a gun to school, if that would make the schools 100 times safer then shouldn’t we do it? Even if there’s an interim “learning period” where they are much less safe, doesn’t the end justify the means?”
I really can't fathom the mindset of people who honestly believe that the only way forward with self-driving tech is to put it on the market prematurely and kill people. In 10-15 years we'll have safe, robust systems in either case, so let's not be lenient with companies that kill customers by trying to move too fast and "win the race". Arguing from statistics about deaths without taking responsibility into account is absurd. Would it be OK for a cigarette company to sell a product that eliminated cancer risks but randomly killed customers, as long as they do the math and show that the total death count decreases? Hell no.
> Imagine the autopilot is 100 times safer than human drivers. Full implementation would mean about a fatality a day in the US. It does not seem this would be acceptable by the public even though 30,000 annual human car fatalities could be avoided.
Your opinion will be irrelevant--the insurance companies get the final say. When that safety point arrives, your insurance rates will change to make manual driving unaffordable.
> your insurance rates will change to make manual driving unaffordable
I see people saying that. Why would insurance be more expensive than today--implying that manual driving becomes more dangerous once self-driving cars become common? It's already a common pattern that assistive safety (and anti-theft) systems tend to carry an insurance discount versus not having them.
Because the "manual" driver will almost always be at fault and the automated drivers will have the telemetry to prove it. For example, I recently got hit while backing out of my parking space--I'm legally at fault because of how the collision occurred. The fact that the other driver had tremendously excessive speed (a center mass hit on a Prius that literally lifted it onto the SUV sitting next to it) wasn't proveable by me--a self-driving car will not have that problem.
This means that the insurance companies with more "manual" drivers will be paying out more often and will adjust the insurance rates to compensate.
> There is additional controversy, it should be noted, about the proposed level 2. In level 2, the system drives, but it is not trusted, and so a human must always be watching and may need to intervene on very short notice -- sub-second -- to correct things if it makes a mistake. [...] As such, while some believe this level is a natural first step on the path to robocars, others believe it should not be attempted, and that only a car able to operate unsupervised without needing urgent attention should be sold.
I don't see why it's controversial. Just look at drivers now: many are already on human autopilot, checking their phone, eating, etc. If you give any amount of automation that still requires checking in from the human, humans will almost universally check out and crashes will be prevalent. Autonomous driving will simply not work without full, unsupervised driving. I don't understand why this is even a discussion amongst people.
Indeed. Since the SDV controversy started, I do catch myself at non-driving behind the wheel: "was I watching the traffic at all, or did I divert all my attention to the radio for several seconds? Was I too busy with the kids' fighting to watch the road just now?" And that's with me trying to actually drive the car safely, to be aware of its operating envelope, other traffic, hazards, navigation etc.
One solution would be to make the autopilot worse within the realm of safe driving. Make it so that it continuously makes safe errors that the driver has to correct. Like steering off into an empty lane. Slowing down. Etc.
They market their assistant as one that is not intended to drive on its own. So if you let go of the steering wheel, it magically continues to drive and will alert you if you check out for too long.
As far as I can tell there is no way to resolve this that will be effective. Google spoke about this problem publicly and why they directly targeted level 4.
Train drivers are the closest thing and they have a lot of equipment that forces them to stay alert.
Personally I think that autopilot should only take over when the driver is unwell or that it thinks there will be a crash.
Even when driving full time drivers get distracted. So when the car is doing the driving humans will get bored and won't be able to respond in time.
Autopilot however is always on and never distracted but doesn't work in some cases.
Uncanny valley of autonomy I guess. Google has noticed this early on, and their answer was to remove the steering wheel completely from their campus-only autonomous vehicles. Either it’s very close to 100% level 5 (you can pay no attention whatsoever and trust the car 100%), or it’s explicitly level 2 (advanced ADAS essentially), there’s nothing in between that’s not inherently dangerous.
You have to learn how to use it properly and pay attention. I use it a lot and it can drive from San Francisco to LA pretty much without stopping. But every once in a while it does mess up, and you just need to make sure you're watching and ready to take over quickly. I agree that it's good enough that people might stop paying attention, but they just need to realize that they have to hold the software's hand in these initial stages. As a matter of fact, being in the driver's seat, able to take control, makes me much more comfortable in a self-driving Tesla than in the back seat of a much more advanced Waymo self-driving car.
Trust me, it's still a huge relief. A night-and-day difference. Try it on a long drive, like LA to San Francisco, and you'll see what I mean. It kind of feels like you're in the passenger seat rather than the driver's seat, but you can still grab the controls if you need to.
If it allows the driver to e.g. reply to a text message on his phone, then it's dangerous.
Either it requires the driver to occasionally take over and handle a situation on short notice, then the driver should have hands on the wheel and eyes forward (and the car should ensure this by alerting and slowing down immediately if the driver isn't paying attention).
Or the driver isn't required to pay attention and take over occasionally, and then it's fine to reply to that email on the phone while driving. I see no future in which there is a possible middle ground between these two levels of autonomy.
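A minimal sketch of the "alert and slow down immediately if the driver isn't paying attention" enforcement mentioned above; the sensor and vehicle interfaces are placeholders, not any real car's API:

    import time

    HANDS_OFF_GRACE_S = 3.0  # assumed grace period before the car escalates

    def supervise(driver_sensor, vehicle):
        """driver_sensor.hands_on()/eyes_forward() and vehicle.warn()/slow_down()
        are hypothetical interfaces used only for illustration."""
        last_attentive = time.monotonic()
        while True:
            if driver_sensor.hands_on() and driver_sensor.eyes_forward():
                last_attentive = time.monotonic()
            elif time.monotonic() - last_attentive > HANDS_OFF_GRACE_S:
                vehicle.warn()       # audible/visual alert first
                vehicle.slow_down()  # then bleed off speed until the driver re-engages
            time.sleep(0.1)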
Regular manual automotive controls, in practice, "allow[] the driver to...reply to a text message". Illegal or not, it's done by millions of people every day, and for the most part, they don't crash. You'd have to actually add control instability to the car to force the kind of attention you want out of drivers, and nobody would buy a car with this deliberately annoying handling.
There is a double standard here: cars with autonomous features are held to the new standard, meat drivers are held to the old standard.
Just like we will probably never accept (neither socially nor legally) autonomous drivers that aren't at least an order of magnitude safer than human drivers, we will likely continue to accept drivers not paying attention to their manual controls - but we do not accept drivers not paying enough attention to take over after their AI driver.
> nobody would buy a car with this deliberately annoying handling.
Exactly. And since this is the only way of making a reasonably safe level 3 car, this is also why many car manufacturers have actively chosen to not develop level 3 autonomous cars (because they are either not safe OR annoying - and either way it's a tough sale)
This isn't a black and white scenario. I use AP for probably 50 miles of my 60 mile round trip commute. Here's how it breaks down for me:
In the morning, I leave at 5:30am. There is constant moving traffic at 60-80mph on the first leg of my trip (6 miles). I drive manually, or with AP, and I pay full attention 100% of this time. If I'm using AP, it's because it's at least as reliable as I am at staying in lanes.
I then change interstates. I do this in manual mode almost every time. AP isn't great at dealing with changing multiple lanes quickly and tightly like traffic often requires. Once I'm on the new interstate, I get into my intended lane (second from the left), put the car in AP, and we have what you'd consider stop and go traffic for a few miles. At this point, I open the can of soda I brought. I keep a hand on the wheel, and I pay attention, but I'm more relaxed than I was earlier, when I was doing 60-80mph. At this point, the only thing that I need to do is to respond to someone jumping into my lane and cutting me off (which the AP deals with, but I have more faith in my ability to slam the brakes), or road debris, which at this speed, is not a problem that needs less than a second of response time.
There's a slow steady 40mph drive that I'm in full AP for, drinking my soda and paying attention, and then the traffic thins out, and I notice that I'm starting to lag behind cars, because my AP has a set max of 70MPH from back where the speed limit was 65mph (even though I was only going 20-40). At this point, both hands on the wheel, full attention, and I increase the AP max to 75 or 80, depending on how much traffic I'm in. I switch it back over to manual to make the lane changes necessary to hit my exit, and I'm manual until I park my car at work.
On the way home, I'm in stop and go traffic for an hour. When I'm 'going', I'm going 5-10mph. I am in full AP mode 95% of this time, and I could take a nap at this point, and it wouldn't actually be unsafe. I'm safer with AP than manual at this point, because my attention fades if not and I could drift lanes, or bump the car in front of me. Which I've seen happen to other people countless times, and which just increases the amount of time everyone else is in traffic, too.
Even the most egregious lane jumper can't get into my lane too fast for AP at this stage of my commute. I just set the follow distance to 2, and listen to audio books while I browse twitter or facebook. I look at what's going on out the windshield, but it's virtually unchanging. Like the thousands of people surrounding me, I'm slowly creeping forward, waiting on the 20-30 miles to pass. For an hour and a half.
This is the same non-argument - people don't just want "safer" (if they did, a crappy autopilot that's only slightly better than a human driver would be a viable product) - they just don't accept any notion of unsafety in new tech.
The bottom line is that people don’t accept any risk at all involving autonomous driving - regardless of whether the alternative/old tech was worse. So, to put it very bluntly, people accept being hit by a texting person not paying attention for 1 second. People don’t accept being hit by a person in a level 3 autonomous vehicle not paying attention for 10 seconds - and that’s regardless of the relative safety of the two systems.
Obviously if you use Autopilot in bumper to bumper traffic this is an improvement, and a huge improvement over texting while manually driving. But texting at highway speed is thankfully rare when manually driving and should be just as outlawed with AP.
You could achieve the same level of comfort today on many common cars with adaptive cruise control. Autopilot gives you a false sense of security when it works great 90% of the time.
Again, you're making the same mistake op talked about - that the Autopilot works 80-90% so you'll keep letting it drive you while you relax, which means it's just a matter of time until you'll crash.
I don't trust you to be able to snap out of a distracted state and take control of a car in a situation you may or may not have been paying attention to. I don't trust you to do that, I don't trust the driver behind me to do that, and I don't trust the drivers to the left and to the right of me to do that.
I should not need to trust you -- if I trusted you, why would we need autopilot at all? Humans are either qualified to drive cars or they aren't. If they are qualified, we don't need autopilot. If they aren't qualified, autopilot should never require human intervention. What we have now is a half-measure that assumes that neither humans nor autopilots can be trusted, but that some combination of those two untrustworthy parties can somehow be trusted. It doesn't make any sense.
Please. You should see me when the car isn't on AutoPilot. I speed excessively, weave through LA traffic switching lanes frequently and generally exhibit unsafe driving behavior. I can't help it, it's like I get bored or something. I'm a decent driver and have never gotten in a crash but have gotten a lot of tickets. When I use AutoPilot, the car automatically maintains a reasonable distance at a fixed speed. You need to trust people driving other cars today, just as you always have -- that hasn't changed yet. AutoPilot related accidents are new, but accidents aren't. So yes people will die using AutoPilot but the solution is to know the systems limits and also maybe enhanced attention detection systems in the car, not banning AutoPilot or anything like that. I do believe AutoPilot can already reduce the number of crashes today.
The 3 Autopilot deaths so far involved 3 relatively young men, two of them in relatively elite professions: a former Navy SEAL and an Apple engineer -- the third was the son of a Chinese business owner [0]. They fit the profile of men fairly confident about driving and tech, perhaps too confident. 3 fatalities over 320 million miles for Teslas equipped with Autopilot hardware is not much better than the 1.16/100 million miles fatality rate of all American drivers and vehicles.
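The arithmetic behind that comparison, for anyone checking:

    fatalities = 3
    miles      = 320e6

    print(round(fatalities / miles * 100e6, 2))  # ~0.94 per 100M miles vs 1.16 overall
    # With only three events the statistical uncertainty swamps the difference,
    # so "not much better" is about as strong a conclusion as the data supports.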
I know, it definitely gives me pause to see people dying and is definitely a reminder to be safe. I remember when I was using AutoPilot on the 101 in the bay area this weekend, I noticed I was in the far left lane and decided to switch to a middle lane. So it's not that I don't think AutoPilot could kill me, as a matter of fact there have been several times when AutoPilot was definitely about to kill me but I took control in time to course correct. I think I've used the system enough to get a sense of what it can and can't do and feel that I'm less likely to get in a collision due to not paying attention while AutoPilot is on than I am to get into a collision because I was driving.
Also, in response to the statistics you cite: I wouldn't expect the statistics to be that far off the average, because Autopilot is limited in its possible use cases at the moment. Even in cars equipped with Autopilot, people are still driving the car themselves for at least some part of the trip, so I wouldn't expect Autopilot to produce a significant deviation from the average. Plus, I'd speculate that the crash rate per mile is probably higher for a Tesla than for an average car -- I'm thinking of something like a Honda Civic. Faster cars probably get in more crashes, right? Maybe not, who knows. Regardless, it should be possible to control for this and assess the effect of Autopilot on per-mile crash rates by comparing rates for Teslas with and without Autopilot. This is somewhat complicated by the fact that even Teslas without Autopilot have automatic collision avoidance, but it should shed some light on whether Autopilot is making people crash more or less.
What about non-fatal accidents though? Where Teslas (and other assisted driving vehicles) prevent stuff like hitting pedestrians, random bikers, animals, parked cars etc.?
1.3 million people a year die in car crashes. Humans are woefully unqualified to pilot heavy machinery on a daily basis. Tesla’s Autopilot reduces crashes by 40% according to the NHTSA, so I can’t agree with your binary argument.
Whether you trust others is immaterial; statistics will be the final arbiter. If Autopilot still causes fatal accidents, but fewer fatal accidents than humans alone, how could you argue against such a safety system? What of the lives saved that wouldn't have been if we demand an entirely fault-proof system prior to implementation? Who are you (not you specifically, but the aggregate) to take those lives away because of irrationality?
> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA
This claim was immediately called into suspicion when it was first published, and the NHTSA is currently facing a FOIA lawsuit for refusing to release data to independent researchers.
As noted in your citation, Tesla requested the data they provided to be confidential (which is not an uncommon request), and the NHTSA granted the request. Whether the statement from the regulatory agency can be independently verified is immaterial.
OK, and all I said in my comment was that the NHTSA was asked for elaboration and proof -- because its findings seemed curious with respect to other study results -- and so far they have declined further explanation. This may be relevant information for anyone who sees you using the NHTSA's claim as a premise.
I think I normally would have given Tesla the benefit of the doubt. But after the misleading, weasely-worded data they discussed to defend AutoPilot in light of the recent fatal accident [0], I think the onus is now on them to provide more concrete proof.
>> Tesla’s Autopilot reduces crashes by 40% according to the NHTSA
So does any car with automatic emergency braking and/or forward collision warning (links below).
The problem here is, this Tesla drove head-on into the gore point. And previously, it drove right into the side of a huge truck.
So... can it be trusted? Your call.
I am human, and any product built for me will have to take into account my idiosyncrasies. This includes my unwillingness to drive on the same road with a car that might at any moment swerve into me because of weird software.
It may be irrational, but I can forgive a human, I cannot forgive an AI.
> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
I suspect the unfortunate reality is that people die on the journey to improvement. Once we decide this technology has passed a reasonable threshold, it becomes a case of continual improvement. And as horrible as this sounds, this is not new. Consider how much safer cars are than 20 years ago and how many lives modern safety features save. Or go for a drive in a 50-year-old sports car and see how safe you feel. And in 50 years we'll look back at today's systems the same way we look at the days of no seat belts.
I honestly don't understand why, if the driver does not take control when the car has sensed an issue, it does not just throw on hazards and roll slow or stop. Seems like the safest way to keep people from dying.
Can you think of any reasons or situations where that logic fails? I can think of tons.
The very challenging, and somewhat unusual given the life/death stakes, aspect of this domain in general is that general 'easy' logic works for 99% of the scenarios. But in the remaining 1% -- and I mean every single possible last scenario, whether probable or not -- the "what to do" is so much more difficult. Enough time to do it? Can be done safely? Etc.
The reason the car turns control back over to the human at all is because the vehicle presumably does NOT have any maneuvers left which are high-confidence -- whether that's because the sensing equipment failed, the software failed or failed to interpret the data satisfactorily, or there's just no "good move" left, or any other number of reasons, etc...
Slowing down should be the first thing a car does in these scenarios. The confidence factor matters less as speed approaches zero.
Seriously, I slow down when I am unsure of the conditions, I surely have never evaluated every last scenario, and I have never run into a concrete barrier.
> OK, so you have a speeding loaded truck behind you that would not be able to brake for whatever reason; still want to slow down?
Mental experiment: which is preferable - colliding with a static barrier at an unchecked 70MPH or getting rammed from behind by a truck at a relative speed of 30MPH (assuming that's how much you've braked by)?
That said - slowing down =/= emergency stop - that truck behind you should leave enough room in front of it to come to a complete stop - unless you've cut in front of it. I'm probably ranting, but I'll repeat this as I've witnessed it far too many times: do not cut in front of loaded semi-trucks. The gap in front of them is intentional and it wasn't meant for you - it's their braking distance - they have ridiculous momentum and they can squash you!
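For what it's worth, a back-of-the-envelope version of that thought experiment (masses, crumple zones, and angles all ignored; kinetic energy simply scales with the square of speed):

    # Rough comparison of impact energies in the thought experiment above.
    def kinetic_energy_joules(mass_kg, speed_mph):
        speed_ms = speed_mph * 0.44704  # mph -> m/s
        return 0.5 * mass_kg * speed_ms ** 2

    car_mass = 2000  # kg, ballpark for a mid-size sedan
    head_on = kinetic_energy_joules(car_mass, 70)   # barrier hit at 70 mph
    rear_end = kinetic_energy_joules(car_mass, 30)  # rammed at a 30 mph closing speed

    print(f"70 mph barrier hit: {head_on / 1000:.0f} kJ")
    print(f"30 mph rear-end:    {rear_end / 1000:.0f} kJ")
    print(f"ratio: {head_on / rear_end:.1f}x")  # roughly 5.4x more energy head-on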
My Subaru has a "lane assist" feature which helps keep you in your lane while driving. It only gives a little bit of pressure in the direction you should be moving, and if you don't do anything the car will still move out of the current lane. This certainly helps me stay more aware, since I can't ever rely on the car to fully steer, but if my concentration lapses on the freeway I get a reminder that I need to turn when I come upon a curve or start to drift to one side. So it definitely helps me remember that I'm always in control.
OTOH, the car has probably saved me from at least one accident where traffic ahead of me suddenly slowed down right as my attention relaxed. The _only_ correct way of using these systems is to treat them as an extra level of safety on top of your responsibilities as a driver. An "extra set of eyes" to help you avoid an accident while leaving you as the primary driver of the vehicle.
I use the Subaru adaptive cruise control quite a bit. It turns off when it is unable to see well enough, which usually only happens in scenarios where I shouldn't be driving and can't see (like during a blizzard).
Human+automation is very powerful! However, I don't think we are far from self-driving beating human alone.
Best solution is to keep the driver engaged, i.e. still holding the wheel and showing they are paying attention. It's how all the other cars with lane assist do it. Can't go more than 30 seconds without touching the wheel before it complains.
It's not perfect but it sure beats the driver being so used to it they start doing other things like watching movies on their laptop.
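Roughly the kind of nag/timeout logic being described, as a toy sketch. The thresholds and the fallback behavior are illustrative guesses, not any manufacturer's actual values.

    # Toy hands-on-wheel watchdog. Thresholds are illustrative only.
    import time

    HANDS_OFF_WARN_S = 30       # nag after 30 s without steering torque
    HANDS_OFF_FALLBACK_S = 45   # disengage assist if the nag is ignored

    class AttentionWatchdog:
        def __init__(self):
            self.last_torque_ts = time.monotonic()

        def on_steering_torque(self):
            """Call whenever the driver applies measurable torque to the wheel."""
            self.last_torque_ts = time.monotonic()

        def check(self):
            hands_off_s = time.monotonic() - self.last_torque_ts
            if hands_off_s > HANDS_OFF_FALLBACK_S:
                return "disengage_and_slow"   # e.g. hazards on, gradual slowdown
            if hands_off_s > HANDS_OFF_WARN_S:
                return "audible_warning"
            return "ok"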
No that's not the best solution. The best solution is to not have it at all until it's actually good enough to self-drive you.
30 seconds is a HUGE amount of time to react on a highway. You'll be long dead by the time another 25 seconds pass if the accident happens just 5 seconds after you took your hands off the wheel. And this is exactly what happened in the recent crash.
This. I don't know, I feel ambivalent about Autopilot right now. Like most on HN I always crave new technologies, but the fact that something very dangerous can be very good most of the time and very bad occasionally makes me wary of using it altogether. I know that's irrational, because more often than not it probably saves you from your own mistakes.
Also, if Tesla intends to shield itself behind a beta status for its Autopilot system until it is perfect, I think this beta status will last even longer than the time Gmail was in beta. At the least, they should own this problem and hardcode a meaningful warning at such intersections, or do something.
It's not irrational at all. I drive our family's 2018 Honda Odyssey, and it's already "saving me from my own mistakes" by beeping angrily when I signal for a lane change and there's someone in my blind spot. Works really well. It also has a notification light on each side of the car to indicate blind spot object presence, so I'd have to miss that first.
The question seems to be which is better:
1) A car very intelligently helping a human drive better.
2) Or a car mostly driving well itself but needing humans to very rarely override fatal crashes.
I prefer 1)
I'm not sure there are any studies comparing these things, since "human driving but lots of extra tech help" is harder to quantify and doesn't have all the data going back to the mothership like Tesla.
I think that 1 and 2 still can entail the same problems though. If it helps you check your blind spot you might occasionally stop checking yourself and rely on it to check for you. Then you implicitly start relying on that feature to work. It's different degrees of handing over control though.
Because no one is stopping them. Even here on HN, which is supposed to be filled with tech people, this is rarely asked. Then it is not surprising that the general public and non-tech-savvy authorities are complacent...
Oh, it is asked all right. But then the business side interrupts with time-to-market and diminishing returns and the cost of settling out of court vs. probability thereof - and voila, "move fast and kill people".
> Specifically it works completely correctly like 80 - 90% of the time and as a result, if you use it a lot, you start to expect it to be working and not fail.
Different environment (and stakes), but I observed the same thing happening a couple of times during IT incident response. The automation crashed/failed/got into a situation it could not automatically fix, and the occurrence was rare enough that people actually panicked and it took them quite a while to fix what in the end was a simple issue. They just didn't have the "manual skill" (in one case, the tools to actually go and solve the issue without manually manipulating databases) anymore.
On my first drive with Autopilot, it tried to drive into the back of a stopped car in a turning lane and had random phantom braking issues, like when going under overpasses.
Not to mention there are things most people do defensively while driving that Autopilot doesn't: anticipating a vehicle coming into my lane by looking at both its wheel position and its lateral movement. Autopilot ignores that information until the vehicle is basically in your lane.
Personally I feel I have to be more on guard and attentive with it on because I know there are fatal bugs.
As someone with a car that has a much weaker system (ProPilot), I can see where that would be a problem. It's not really tempting to let ProPilot do it on its own, as it regularly wants to take exit ramps and occasionally pushes toward the middle of the road too much.
It seems like AutoPilot users take their hands off the wheel regularly, and I just don't see how that is safe with this type of system.
If people are using it, not paying attention, with the expectation that it will beep to tell you to take over that's a big problem. In situations like this divider issue it won't beep, it thinks everything is fine right up until it rams you into a stationary object. I think people may not be fully aware of all the potential failure modes of this tech?
> said the problem with autopilot is that it is "too good".
Isn't it true that the thing cannot detect a stationary object in its path if the vehicle is traveling at above 50kph? If that is true, then I think this is an extremely dangerous situation that the owners of these vehicles are in...
"too good" -> "completely correctly like 80 - 90%". I'm unable to make the two sentences make sense in the same statement, since 85% looks extremely low to be considered too good, given that the outcome is to crash if you do not pay attention in the 15%.
I expect it is the nature of the problem. Autopilot fails when the road conditions are such that it cannot function correctly. Imagine that you drive your commute from point A to point B and use autopilot all the time and it has never failed you. Then you use it when driving from point A to point C and because there are road conditions on that route that it cannot handle it tries to drive you into a ditch. Your experience may be giving you more confidence in its ability than it deserves. This is made worse when it has worked fine from point A to point B until one day a painter had a bad day and one of the cans of paint they forgot to secure fell off the back of their truck and put a big diagonal splash on the lane. And on that day your autopilot drives you into the car next to you.
IMHO this is kind of a mental trick. How good is a hammer that lets you drive a nail 90% of the time? A success percentage is not a measure of usefulness or of perfection, as in: I'm 80% handsome, that's quite handsome. Something with a 20% failure rate is terrible.
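To put numbers on why per-use reliability compounds so badly, here's a quick sketch assuming independent trips; the trip count and reliability figures are arbitrary.

    # Probability of at least one failure over n independent uses.
    def p_at_least_one_failure(per_trip_success, n_trips):
        return 1 - per_trip_success ** n_trips

    for reliability in (0.90, 0.99, 0.999):
        # e.g. two commutes a day, 250 working days a year
        p = p_at_least_one_failure(reliability, 500)
        print(f"{reliability:.1%} per trip -> {p:.1%} chance of at least one failure per year")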
Indeed. If I have to be sober and paying attention when riding in a robot car, I might as well just drive it myself just to keep me from falling asleep. At which point, I'll just not bother with the robot car at all.
> Makes me wonder how this gets resolved without jumping through the gap to 'perfect' autopilot.
The solution is easy, but probably not what many would like to hear: ban all "self-driving" or "near-self-driving" solutions from being activated by the driver, and only allow the ones that have gone through very rigorous and extensive government testing (in a standardized test).
When the government certifies the car for Level 4 or Level 5, and assuming the standardized test has strict requirements like 1 intervention at 100,000 miles or whatever is something around 10-100x better than our best drivers, then the car can be allowed on the road.
Any lesser technology can still be used in other cars, but instead of being a "self-driving mode" it should just assist you: auto-braking when an imminent accident is about to happen, or maybe just warning you that one is about to happen. That could still significantly improve driver safety on the road until we figure out Level 5 cars.
If your point is "Autopilot only works on perfect roads" then they might have well pull the plug right now.
Dirt, snow, rain, fog, dark, and other common conditions could obscure even correctly painted lines, and it just isn't realistic to expect perfect roads throughout such a large network anyway (in particular as human drivers can infer the line is meant to be there from the neighboring lines).
At its core, a system like Autopilot is meant to stand in for humans. Plus in this video where they reproduced the accident there was a full chevron pattern so the argument is flawed/wrong anyway: https://www.youtube.com/watch?v=6QCF8tVqM3I
Man, that's incredibly bad. I drive that route all the time and it's not even remotely difficult to follow the highway there.
Considering Teslas must have driven through there hundreds of thousands of times, they should have data to make sure the car at least follows the highway. I don't crash into a barrier when a paint stripe gets scuffed between my Monday morning commute and my Tuesday morning commute.
It's not so simple - you can't simply follow the highway as you have done hundreds of thousands of times, because if the paint stripes have changed since Monday, then these might also be new (or temporary) stripes painted by a road construction crew overnight and you now must follow the new stripes.
Now imagine everyone has a Tesla, and someone plays a prank in the middle of a bridge: they paint a line that makes all the traffic plunge off. Now imagine the manufacturers are so confident they removed the driver's seat, and you have all those passengers looking out the windows at their cars driving them to their deaths with nothing they can do.
So should these cars follow a painted line blindly?
I think this tech should still be called a driver assist, and should assist the driver. Tesla rushed it in enabling the Autopilot thing. I am sure that if you compare the stats with similar cars (expensive, safe, and with driver assist), the stats will be against Autopilot. I do not understand why they had to rush Autopilot instead of keeping it as a driver assist, to warn you and prevent some crashes (though it seems it can't protect from frontal collisions with static objects).
You could say the same for trains though - they follow a track in a similar sense to Tesla's autopilot. Someone could perform a "prank" by putting something on the track in order to derail a train. How often do you see that happen? Painting a line on the road is obviously easier to do, and could also happen by accident, but I think the chances of any danger are extremely low.
Hopefully when we start removing the driver from cars with autopilot, they will be following some sort of under-road NFC-like guidance system rather than painted lines.
There’s a theory out there that a penny on a rail can derail a train. You wouldn’t think people would do such a thing, with such disastrous consequences... but they do. Thousands of times a year.
Actually derailing a train takes serious effort; but people still try constantly. Derailing a Tesla seems somewhat easier, so I guess we’ll see.
On YouTube there are old black-and-white films produced to show the results of rail sabotage. Even with a foot cut from the track the train did not derail. Quite impressive. I used to flatten coins as a kid all the time by leaving them on the tracks.
It is much harder/more expensive to place a big truck or heavy object on a train track than to paint a line. Trains also have a driver who will try to stop the train and reduce the impact.
These self-driving cars can be tricked by bad conditions or by optical illusions; wear a t-shirt with some pattern on it and the Tesla might come at you and hit you.
I'm curious if they have data about how often users need to take over from autopilot in that area. It seems like this shitty behavior was easily reproducible, so...
Indeed. His Tesla was clearly following the clearest left-side white line. Which unfortunately pointed the car right at the gore point.
But what amazes me is that Tesla autopilot apparently won't swerve or stop when it sees an unmoving object straight ahead. Whether it's a semi, a fire truck, or a gore point. But I gather that trying to detect that stuff generates too many false positives. And that overall, it would arguably cause more accidents net.
Edit: I have faced that ambiguity myself. When driving, perhaps foolishly, in fog so dense that I could only see the lines on the road.
> But what amazes me is that Tesla autopilot apparently won't swerve or stop when it sees an unmoving object straight ahead.
It won't even brake unless the thing in front looks like the back end of a car or truck. Even then, if it's stationary, it may still get hit. It won't swerve, ever; it has no logic for that. Four times that we know of, a Tesla has plowed into a vehicle partly blocking a lane.
Putting something on the road which pretends to self-drive and can't detect solid obstacles is criminal.
If I were walking, I sure would not trust it. But their feature is for the highway, where pedestrians are not expected. I am sure they expect to shift blame by that reasoning.
Both Tesla and Waymo are at too low an abstraction level for resilience. Cars need to understand roads/walls as concepts and make common sense inferences.
They're not comparable at all. Google/Waymo uses lidar and has been at it since 2009 and per https://waymo.com/faq they are "fully automated vehicle" but alas not for sale yet.
Tesla's self-driving vaporware, sold since Oct '16, is camera-only and should be activated any day now?
Waymo’s approach is also brittle, e.g. to changes in the road since the last map pass.
Tesla also uses radar and ultrasonic. Lidar isn’t necessary, just makes depth measurement easier. It’s lower res than cameras, low range, power-inefficient, conflicts with other lidar. Humans drive fine in sunlight, without a coherent IR scanning beam.
How do you know how much Waymo's approach relies on maps vs. vision? My expectation would be that maps are really only used for "macro" navigation but anything beyond that (lanes, avoiding obstacles, etc.) would be done on the spot.
Do you have any public source detailing this or do you just make this claim based on your own assumptions?
(Disclaimer: I work at Google but have no idea how our self-driving cars work.)
> Of course our streets are ever-changing, so our cars need to be able to recognize new conditions and make adjustments in real-time. For example, we can detect signs of construction (orange cones, workmen in vests, etc.) and understand that we may have to merge to bypass a closed lane, or that other road users may behave differently.
Doesn't sound like the cars blindly trust potentially stale maps.
>> They're not comparable at all. Google/Waymo uses lidar and has been at it since 2009 and per https://waymo.com/faq they are "fully automated vehicle" but alas not for sale yet.
If it's already fully automated, then why is it not for sale, yet?
My advice to the company BOD & shareholders: have him take Udacity's self-driving car online course; after the first project, he'll clearly understand how limited detecting lanes by CV alone can be. To Uber's ex-CEO's point: LIDAR is the SAUCE.
Musk's central point is that if humans can drive with two cameras, so can a machine. And he's right. Why wouldn't we do just as well as the visual cortex?
Humans have superior intellect, even if specific performance is lower. We have experience. We learned to drive, generally in the particular area where we operate our cars, with all its idiosyncrasies. Then consider eye contact, nonverbal communication, bias, and personal investment in the outcome, which computers are incapable of. It's not as simple as better sensors and reaction time.
But isn't the whole point of autonomous cars that humans are pretty shitty drivers? If I could augment my vision with a 360º setup of cameras and LIDAR you better believe I would!
I'd say people are actually pretty good drivers. I'm more interested in autonomous for the time savings than I am for the potential safety improvements.
Tens of thousands of people are killed every year[1] in just the United States, humans are awful at driving. If autonomous vehicles are able to make driving as safe as flying it will be like curing breast cancer in terms of lives saved.
our eyes are a lot better than cameras in a lot of ways. Eyes have better dynamic range, better sensitivity in low light, extremely high resolution in the center, and an extremely wide field of view. The nerve cells in our retina are also wired up to do a lot of processing in real time at extremely high resolution, eg. things like motion/zoom/edge detection.
And that‘s not even taking into account that we actually understand what we see and can reason about unexpected input and react accordingly.
> If your point is "Autopilot only works on perfect roads" then they might have well pull the plug right now.
It's even worse: wait until someone does something malicious to an autopilot car to cause it to crash/kill.
I hate to be so cynical, but if you're developing this stuff, you have to imagine worst case scenarios. And there are much much worse scenarios when malice is involved.
Most people aren't murderers. On the other hand, con artists when the expected risk to marks is minimal...now we're talking!
I've been had by local traffic cops in $COUNTRY, just a few km after crossing the border: two lanes in one direction, separated by a solid line ("no changing lanes"), running parallel for about a km. The left lane suddenly ends; the asphalt gradually narrows from the left until there's only the right lane left. Your options are a) crossing the solid line into the right lane, and... wait, that's all, there's no way to go off the road. And guess what, traffic cops conveniently positioned right there: "you have crossed the solid line, that's a ticket and XYZ in local currency" "but there's nothing else I could have done!" "we don't care, you crossed the solid line!" Repeat until the ticket is paid. Note that this is obviously targeted at people who are not likely to fight the ticket (and presumably a mistaken/illegal road marking) in local traffic court.
I could imagine a very similar scenario set up to trip up SDVs (and fleece their presumably affluent occupants). The problem comes when con men underestimate the risks, and accidentally kill people.
I have been a consistent critic of Autopilot; so much so that my last post with that very same video got nuked hard for awhile.
The same suggestion as before: Autopilot needs to be turned back into learning-only mode. It never should have left it. There were way too many videos just on YouTube showing how bad it could be. It should simply be recording what it would decide to do versus what the driver did, and flag every exception. It certainly can see barriers, and its routines could save every instance where it might have hit something.
The NHTSA really should demand that Autopilot features be disabled. There are even cases of it failing to park without hitting objects.
Musk should be lauded for his EV push; it opens doors to all sorts of new technologies being incorporated in cars and will one day remove the need for gasoline/diesel as the primary drive in all cars. However, he needs to learn to step back on technologies that are clearly cutting edge and not ready for prime time.
At this time, features like "AP" should be used not to take over for the driver but instead to assist the driver by keeping them from being unsafe. This means doing something similar to what blind spot monitors and lane drift warning systems do.
(iii) That there doesn't seem to be a continuous path between where it is headed immediately and where it will have to go 2 or 3 seconds later. If your immediate future can't be harmonized with what will happen afterward, something is wrong.
(iv) That it has (literally) been down this road before and had to revise its estimate of what it thought it was a lane (i.e. jerk back into the lane once it realized).
(v) That other Teslas have been down this road and learned that.
(vi) That the lane is suddenly getting wider than a freeway lane should, and that doesn't normally happen.
(vii) That the paint is worn, which indicates poor road maintenance, which necessitates caution.
(ix) That it saw a freeway exit sign a short while back, which logically indicates there must be a V-shaped spot ahead somewhere that it shouldn't drive into, and this could be it.
I'm nerding out here, but roads are complicated and it actually requires a qualifier!
Assuming no exit-only lanes, there is a V-shaped area you can drive into: the beginning of the new exit lane. And then it's followed by another V-shaped area that you can't: the resumption of the shoulder.
One way you can tell the difference is lines, if they are painted properly. But you can also tell from context. If you know (from experience and/or road signs) that only one lane peels off for this exit, and if you have already seen the V-shaped area for the exit, then the next V-shaped area is not OK to drive into.
Right, I fail to see how an obstacle as large as a divider is not seen in time. If this is the case, then the system is inadequate to see many types of debris that could be on the road.
It's mostly due to limited sensors and immature algorithms.
If you think about a sharp left turn with a wall on the right side of the road, there is a "stationary object" (the wall) in front of the vehicle throughout the turn that will be detected by radar and cameras. In order to navigate a situation like this, an autonomous vehicle has to have additional decision-making in place to effectively override "stationary object in front of me = stop" logic. But once you do that, it gets tricky to determine when you should ignore the "stationary object" and when you shouldn't.
On top of this, Tesla opted not to include LIDAR as a sensor in its vehicles, so it does not get a 3D point cloud to help identify what an object is and what to do with it. Instead, there's a high reliance on the cameras and radar, but these have some limitations. For example, it's difficult for a camera to distinguish between two objects of similar brightness and color, so for example if there's a white semi-trailer in front of a white wall 100ft away, the camera might not "see" the trailer. This can create situations where sensors are reporting conflicting data, which is bad - for example in the trailer situation, the camera would be saying "there's nothing in front of me until a wall 100ft away", but the radar would be saying "no there's something right in front of me". Or the radar might not "see" an obstacle that is a few feet off the ground while the camera is saying "there's something in front of me". Resolving these conflicts is hard.
> But once you do that, it gets tricky to determine when you should ignore the "stationary object" and when you shouldn't.
That's not tricky. You never ignore it. A stationary object means that you will have to either stop or turn. You don't know which one yet, but you definitely know you'll have to take an action. If you haven't decided by some close point, you just pick one.
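Something like this "commit by a deadline" rule, as a toy sketch. The deceleration, margin, and speeds are made up, and a real planner is obviously far more involved.

    # Once a stationary obstacle is confirmed ahead, either a safe steering
    # path exists or we brake; continuing straight at speed is not an option.
    def plan_for_obstacle(distance_m, ego_speed_ms, clear_adjacent_lane,
                          max_decel_ms2=6.0, commit_margin_m=10.0):
        stopping_distance_m = ego_speed_ms ** 2 / (2 * max_decel_ms2)
        decision_point_m = stopping_distance_m + commit_margin_m
        if distance_m > decision_point_m:
            return "keep evaluating"
        if clear_adjacent_lane:
            return "steer around obstacle"
        return "brake to a stop"  # the default when nothing better has been found

    print(plan_for_obstacle(150, 30.0, clear_adjacent_lane=False))  # keep evaluating
    print(plan_for_obstacle(80, 30.0, clear_adjacent_lane=True))    # steer around obstacle
    print(plan_for_obstacle(80, 30.0, clear_adjacent_lane=False))   # brake to a stop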
As I understand it, what's hard is determining whether that stationary object is directly in front, or off to the side. Cars are always passing stationary objects on the sides.
As I understand it, Tesla autopilot can't distinguish objects on the sides (such as road signs) from objects directly in front. And it must ignore road signs etc. So it ends up ignoring all stationary objects.
"Ignore" was probably the wrong word, but it is tricky. Remember that a vehicle is evaluating the environmental state at a given point in time to make decisions. Let's take the example of everyday driving on the freeway where you are following a car going the same speed as you. From your car's radar and camera perspective, the vehicle in front of you is stationary. It's not until you combine that information with additional information (your own velocity) and infer a more complicated state of affairs that you get a more complete picture. The camera and radar are both screaming "there's a stationary object right in front of us!", but you're still traveling at 60mph directly toward it.
Combine that with the normal event of a garage wall or building directly in front of you while parking and we have three different "stationary object in front" situations that all have to be handled differently. In all three, the radar and camera are saying "stationary object ahead". But in the left-turn-with-wall situation you need to turn instead of stop. In the following-another-car situation you do nothing, and in the parking situation you need to stop instead of turn. And that's all without even starting to account for things like rain (opaque to some LIDAR and cameras = "we're literally crashing right now!" while radar reports nothing amiss). In order to decide what to do you have to pull in additional environment information, but then that's going to have similar caveats and conflicting reports to account for, so you need additional rules and information, etc. and now you've snowballed into a highly complex model with corner cases and potential bugs everywhere.
If it was easy we would have had self-driving cars a decade ago.
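A toy illustration of the reference-frame point being argued in this thread: a radar range-rate is relative to the ego car, and only after adding the car's own speed back in can you call something "stationary" in the ground frame. Angle, noise, and sensor fusion are all ignored, and the threshold is arbitrary.

    # Classify a single radar return given the ego car's own speed.
    def classify_return(range_rate_ms, ego_speed_ms, threshold_ms=0.5):
        object_ground_speed = ego_speed_ms + range_rate_ms  # along the line of sight
        closing_speed = -range_rate_ms                      # positive = we are approaching it
        if closing_speed > threshold_ms and abs(object_ground_speed) < threshold_ms:
            return "stationary object, we are closing -> plan brake or steer"
        if closing_speed > threshold_ms:
            return "slower object ahead -> adjust speed"
        return "not closing -> no action needed"

    print(classify_return(-27.0, 27.0))  # barrier ahead while doing ~60 mph
    print(classify_return(0.0, 27.0))    # lead car matching our speed
    print(classify_return(-1.0, 1.0))    # creeping toward a garage wall while parking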
Don't make it sound like subtracting your own velocity is anything but trivial. Your whole first paragraph is playing semantics about the definition of the word "stationary". If the camera says you have zero relative velocity, that is not it saying "stationary", and you are not ignoring a "stationary" object.
> Combine that with the normal event of a garage wall or building directly in front of you while parking and we have three different "stationary object in front" situations that all have to be handled differently. In all three, the radar and camera are saying "stationary object ahead". But in the left-turn-with-wall situation you need to turn instead of stop. In the following-another-car situation you do nothing, and in the parking situation you need to stop instead of turn.
If the object is stationary, you turn-or-stop. If it's another car, it's really easy to measure that it's not stationary. If your sensors can't figure out the distance or velocity for multiple seconds in a row, your equipment should not be allowed to drive.
> highly complex model with corner cases and potential bugs everywhere
That's all in object detection, though. And none of that noise looks at all like a solid object at a specific location.
> If it was easy we would have had self-driving cars a decade ago.
I disagree. Even if you have a system that competently avoids stationary-object collisions, you're nowhere near a full self-driving car.
And while the overall problem of avoiding collisions has hard parts, none of the hard parts are in the "What do we do about a stationary object?" logic.
Subtracting your own velocity from what? I've been talking about individual sensor reporting with conflicting information and the complexity of decision making that results. A radar return from a rear bumper at relative zero miles an hour is exactly the same whether the car is actually in motion or not. Reference frame and velocity relative to the ground is up to your software algorithms to determine, not the sensors, and I've been trying to explain why that's non-trivial and "stationary" is always relative to something. You seem to be treating the entire system as an always-in-consensus whole, which it isn't.
I feel like we're just going in circles here. I could keep giving you examples of cases where sensors will report conflicting information that makes decisions like turn or stop difficult or false-positive prone, and you'll keep insisting that it's just object recognition and you should always know to turn or stop. I'll stop here and stand by my original point - sensor limitations and immature algorithms are the reason for issues like the ones Autopilot has; completely preventing these issues is not straight forward.
You said the camera/radar would be reading another car as "stationary", which is a velocity number of zero. Add/subtract your speed and you get the real speed.
A non-defective sensor suite is going to give you either distance or velocity, and you can use distance measurements to calculate velocity really easily.
> A radar return from a rear bumper at relative zero miles an hour is exactly the same whether the car is actually in motion or not.
Doppler effect.
> "stationary" is always relative to something
Once you have a velocity number, no matter what it's relative to, it's trivial to convert it to any reference frame you want.
> I feel like we're just going in circles here. I could keep giving you examples of cases where sensors will report conflicting information
That's because such things are irrelevant to my argument. I'm only talking about how algorithms should handle objects that have already been found.
> I'll stop here and stand by my original point - sensor limitations and immature algorithms are the reason for issues like the ones Autopilot has; completely preventing these issues is not straight forward.
I fundamentally disagree that the navigation algorithms are difficult here. Sensor issues are huge, but your original scenario was based on the sensors working and navigation failing. That specific kind of navigation failure is utterly inexcusable. It is actually easy to force a car to either slow down or turn to avoid a known obstacle. That specific part should never ever ever fail. It doesn't matter how hard other parts are.
Maybe I'm not being clear on my core argument. Sensors give conflicting information in different scenarios. One sensor might read an obstacle to be avoided while another shows a clear path. Deciding what to do in these conflict situations is difficult. Saying it's easy is effectively saying that the hundreds of very intelligent engineers working on these systems are idiots because it looks like it should be a piece of cake. You're using similar logic to non-technical people who get angry when a developer can't give an accurate estimate of the time needed to implement a new feature - (s)he has the code and requirements right there, it should be easy, right?
I'm quite familiar with how vector math works. Your comments above seem to still be missing that I am talking about situations from the perspective of individual sensors, which have no ability to make judgments about different frames of reference. That only applies at the level of the full driving system. For example, to a radar sensor alone, there is no difference between a car parked 10ft in front of you while you are motionless relative to the ground and a car 10ft in front of you while you are both moving at 60mph relative to the ground. Both are "stationary" as far as the return signature is concerned. There will be no doppler shift in either case. The driving system has to combine this with other sources of information for it to be useful, and that's where problems creep in.
> That's because such things are irrelevant to my argument. I'm only talking about how algorithms should handle objects that have already been found.
We're arguing two perspectives of the same higher-level opinion (algorithms are insufficiently developed to be safe enough for autonomous driving). What I'm trying to say is that there are a number of fuzzy steps to get from sensor readings to actual object detected, and then from actual object detected to "known obstacle" classification, as you put it. I don't think I'm going to be able to argue this case clearly enough in the comments here, so I'm going to add this to my longer article writing list. Thank you for the constructive debate on this.
> Saying it's easy is effectively saying that the hundreds of very intelligent engineers working on these systems are idiots because it looks like it should be a piece of cake.
Saying one very very specific thing is easy is not the same as calling engineers idiots. It's bad to conflate "the system is hard" with "every single piece of the system is hard"
> The driving system has to combine this with other sources of information for it to be useful, and that's where problems creep in.
There are many places where integrating information can let errors creep in.
The car adding its own ground velocity in is just... not one of them.
> What I'm trying to say is that there are a number of fuzzy steps to get from sensor readings to actual object detected, and then from actual object detected to "known obstacle" classification, as you put it.
Agreed. But while some steps are fuzzy, some are easy. The chain from start to finish is fragile and difficult. But some of the individual links are rock-solid.
> I'm going to add this to my longer article writing list. Thank you for the constructive debate on this.
I'm not sure if we really got anywhere productive here, but good luck!
There's some confusion about what "stationary" means. I was using the external reference frame. So I called road signs etc stationary. But in the vehicle's reference frame, road signs etc have closing velocity.
Vehicles in front moving at the same speed aren't stationary in the external reference frame. They're stationary in the vehicle's reference frame. If one stops, though, it becomes stationary in the external reference frame, and has closing velocity in the vehicle's reference frame.
You're not being pedantic - it's an important distinction. That's why I've been putting "stationary" in quotes the whole time. An individual sensor has no concept of a reference frame, only the broader ADAS/autonomous system does. That's why acting on object detection is non-trivial - individual sensors report "object" with no context and it's up to fallible algorithms to make sense of often conflicting information.
But then the car will have a lot of false positives and will stop now and then for every mundane thing that flies in front of the sensors. Every one of your customers will be pissed, because the car cannot differentiate between debris on the road and a wall... showing the true limitation of the thing they call "Autopilot"...
Tesla seems to have figured that it is better to kill some of your customers than to piss every single one of them off...
If the car has the slightest ability to follow lanes, it will choose "turn" and not stop until it gets hopelessly lost. At which point it only stops because it was avoiding certain disaster.
The dichotomy between ignore and stop is a false one.
I can see how cameras are basically useless. And also radar, because it just reports relative velocities. And without angular resolution, how could the system distinguish stationary objects on the sides from those straight ahead? Maybe there could be multiple highly directional radar "guns", operating at different frequencies.
To simplify: radars can see how far away objects are and how fast they're moving, but only dimly what direction they're in. So if you don't want your car to brake every time you pass under an overhead sign or over a metal grate, you need to filter out non-moving returns to some extent. At a certain distance the cone of the radar should only be intersecting things that are actually a danger, and you can start braking. But it's really good at noticing if a moving object has changed speed, like the car in front of the car in front of you braking.
On Teslas theoretically the camera is there to prevent stuff like this but vision AI isn't all there yet.
Other companies use lidar which gives pretty accurate distance and direction but not object speed. Pretty useful for avoiding obstacles but current automotive grade lidars are very expensive. Much cheaper solid state lidars seem to be imminent.
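A crude sketch of the filtering tradeoff described above. The 40 m cutoff and the static-object test are invented for illustration, but they show why a distant stationary barrier can get discarded as "roadside clutter" until there is little room left to stop.

    BRAKE_DISTANCE_M = 40  # illustrative: only react to static returns closer than this

    def filter_radar_returns(returns, ego_speed_ms):
        """returns: list of (range_m, range_rate_ms) tuples from the radar."""
        kept = []
        for range_m, range_rate_ms in returns:
            object_speed = ego_speed_ms + range_rate_ms  # ~0 for static objects
            is_static = abs(object_speed) < 0.5
            if is_static and range_m > BRAKE_DISTANCE_M:
                continue  # assumed to be an overhead sign, grate, or roadside clutter
            kept.append((range_m, range_rate_ms))
        return kept

    # A barrier 120 m ahead at highway speed is filtered out here; by the time
    # it comes inside 40 m there may not be enough distance left to stop.
    print(filter_radar_returns([(120, -30.0), (35, -30.0), (60, -5.0)], 30.0))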
I think the reason is obvious: the hardware and software tech is just not there yet.
The hardware that may be actually required to make this work well may still cost tens of thousands of dollars, which means car makers will avoid using it and will use cheaper options like cameras and radar.
While on the software side we may be 80% of the way there, the other 20%, solving every single edge case out of potentially millions, will take another decade or more.
And that's without even mentioning software bugs that could make the whole system crash. I think someone mentioned recently that there are on average 15 bugs per 1,000 lines of code or something. Either way, we should expect a lot of bugs in this software. Remember Toyota's "spaghetti code"? And now we're just going to assume that Toyota's self-driving cars are going to drive us safely within 3 years?
And of course nobody wants to talk about how these cars will be hacked remotely.
Based on (1) my experience seeing that people in non-Tesla vehicles tend to cross late from the 85 ramp to 101, (2) the wear pattern in that photo, (3) the number of Tesla vehicles travelling that route, and (4) the numerous examples of Tesla autopilot being total shit at handling that situation from the #2 lane, I wonder how much of that paint fading is due to Tesla autopilot being completely terrible.
Holy shit that’s a good idea, we should just fucking do it ourselves.
This reminds me of when I was living in the Midwest and there was this one spot where I had to do a U-turn on my commute, which was very dangerous because tree branches obstructed the view of oncoming traffic. One day I almost got into an accident, and since it shook me quite a lot, I wrote an email to the city saying the tree branch needed to be trimmed so I could see better. Sure enough, I got an email back a few days later from a woman saying they "understand my frustrations" or something like that. I thought I did my part and went on with my day. Fast forward about a month, and I almost got in an accident at the same spot. Man, I was fucking furious that my communication with the city was just "ohh we understand" -- instead of employing a sea of men and women to tell me that they "understand," why could they not have just trimmed the tree and improved safety?
If I was still living there I think I would have just gone with my chainsaw and done it myself.
Back in 2001, Richard Ankrom[0] installed his own Interstate 5 exit sign on an existing official Caltrans sign in Los Angeles, in order to better inform drivers of where the exit was.
The sticker was made to Caltrans and DOT specifications and affixed in broad daylight. Since the initial installation, Caltrans has replaced the sign, but now adds the I-5 signage themselves.
Sorry, this doesn't scale. Yes, this was a benevolent alteration, and sort-of-approved ex post; but once this became a widespread modus operandi, most edits would be neutral at best and likely malevolent: "I want my ad on the sign!" "I don't want the X-bound traffic to go by my street, let's move them over to the next exit!" "your edit is worse than my previous edit!" Traffic signage is not Wikipedia, for multiple reasons.
Seems to. If this becomes widespread, cities could just abdicate attempts at road work: do it yourself, citizen. (But not the taxes, obviously; those still need to be paid in full.)
Degraded paint does not a bad road make. Roads are optimized for human drivers that can leverage additional context to make their decisions. Many countries don't even bother to stripe their roads at all.
Chicago doesn't stripe most of the roads. Most of the roads that have stripes in Chicago are administered by IDOT.
Chicago even has this codified in the traffic code; while the rest of the state has "improper lane usage" as a violation, Chicago has "failure to maintain a lane".
Even on roads with painted markings they get scraped off by snowplows almost every winter. I'm reasonably sure that's what happened to that gore.
Huh? Which developed countries don't have road markings on highways?
Paint is just one aspect. There are also plenty of cracks and uneven pavement as you can see from the street view.
Exactly! To you it's crystal clear, but if the other party were additionally considering all the non-highway accidents, then the distinction may not be so clear to them until you provided the additional metadata.
Now consider how you may feel about a car which has decided, based on all available data, that this strip of road is a lane. It's crystal clear to that car until it's suddenly not.
You reacted with what appeared to me to be very restrained aggression (typical and acceptable, no offense intended) as you struggled to figure out if they were just being dense on purpose or were actually ignorant. I don't feel either was accurate, but that's what your message conveyed to me.
A car simply says, "hey, I don't know what I'm doing anymore. Help!"
Human drivers made the same mistake, though. That's why the crash attenuator had been destroyed and why the Tesla driver died: a human had made the same mistake and used up the crash attenuator, thus making the Tesla accident fatal.
When I lived in California, I was shocked at how bad the roads are and how poor the signage was (since many of the signs get stolen and are not replaced for months or years). A moderate thunderstorm knocked out power in some parts of LA for weeks. It really seems like there are some parts of the government and utilities that are sclerotic.
Alameda St, just north of I10, is in such poor condition that it’s practically third-world. I’m usually so concerned with damage to my suspension that I avoid it.
Los Angeles has a very interesting set of problems. Chief among them is the repercussions from closing roads/highways for repairs. It’s often times better, I think, to let them suffer in disrepair than it is to close them for any length of time. With semi-autonomous cars, however, I wonder if there will be greater pressure to keep them in repair.
Freeways are constantly repaired. They just do it at night. They don't 100% shut it down, they close down all but 1-2 lanes. I used to make a lot of night trips from San Francisco to San Jose, and I ran into them pretty often.
It’s kind of by design. America, compared to a lot of other western nations, has structured society to funnel more wealth into corporations, individuals and the military, and less into most other government functions. There are plenty of advantages to this strategy, but well maintained roads aren’t one of them.
Wealthy individuals/corporations but poor roads (and other government infrastructure/services) isn’t unexpected, it’s more or less the goal.
Imagine what the roads look like in areas that aren’t nearly as wealthy. 11% of all roads in North Carolina west of Asheville are unpaved. Some counties have 200 miles of unpaved secondary roads. The primary roadway surfaces are far from perfect too.
One problem which plagues my state: the money spent on fighting fires last summer had to come from somewhere, and that somewhere was a portion of the budget for road maintenance.
California had a lot of fires it had to fight last year. Where did that budget come from?
It seems pretty consistent in the other videos they're showing, too. The left-hand lane marking splits into two lines ahead of the gore point. If the further of those two lines from the car is significantly stronger than the other, then the car will choose that as its new 'left lane' (rather than always choosing the nearer one) and follow it straight into the gore point. I don't think this is a mysterious bug, so much as a significant corner case where their system fails.
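In code, the hypothesized failure mode might look something like the first heuristic below; "strength" stands in for whatever paint-confidence score the vision stack produces, and both heuristics and all numbers are invented for illustration.

    def choose_left_boundary(candidates):
        """Naive heuristic: trust whichever candidate line has the strongest paint."""
        return max(candidates, key=lambda c: c["strength"])

    def choose_left_boundary_nearest(candidates, min_strength=0.2):
        """Alternative: prefer the nearest plausible line unless it has all but vanished."""
        plausible = [c for c in candidates if c["strength"] > min_strength]
        return min(plausible or candidates, key=lambda c: c["offset_m"])

    # Near the gore point, both the worn original line and the freshly painted
    # diverging line are visible to the left of the car:
    candidates = [
        {"name": "worn lane line",  "offset_m": 1.5, "strength": 0.3},
        {"name": "gore-point line", "offset_m": 3.0, "strength": 0.9},
    ]
    print(choose_left_boundary(candidates)["name"])          # gore-point line (into the gore)
    print(choose_left_boundary_nearest(candidates)["name"])  # worn lane line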
That's a bit disconcerting. I mean, obviously not all roads are going to be painted properly, so you would assume that they have a myriad of other ways to figure out that this is a road bifurcating and that not far away there's a huge (1m+) obstacle. I've been researching the self-driving car literature but I find there's surprisingly little detail about the kind of data used and the safeguards taken. Makes you wonder if the entire field is really as mature as it claims to be. I mean, if it fails on the large, perfectly good American roads, how is it going to perform in chaotic traffic in some old European cities and beyond, where almost nothing can be taken for granted? Does anyone know of a comprehensive review of the technology that goes into an AV from the past 1-2 years?
The system needs to be able to look more than 30 feet ahead. 30 feet of faded lines shouldn't affect it so much that it leaves the lane and collides with a clearly visible wall. What's it going to do in snow? Or in low-light conditions with a blown headlight?
At the very least, the system should slow down significantly and warn the driver if there's not high confidence in where it should be going.
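Even something as simple as a confidence-gated speed cap would be a start; a minimal sketch with made-up thresholds.

    # Map lane-tracking confidence to a speed cap plus a driver alert level.
    def target_speed(confidence, cruise_set_ms):
        if confidence < 0.5:
            return 0.0, "alert driver, hand over control or come to a stop"
        if confidence < 0.8:
            return min(cruise_set_ms, 15.0), "reduce speed and warn driver"
        return cruise_set_ms, "normal operation"

    for c in (0.95, 0.7, 0.3):
        print(c, target_speed(c, 31.0))  # 31 m/s is roughly 70 mph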
A week after the fatal accident, Musk continues to promote Autopilot as safe by retweeting stuff like:
https://twitter.com/Teslarati/status/980476745106231297
Tesla Model S navigates one of Vienna’s ‘crappiest’ roads on Autopilot
The IIHS '17 study puts Autopilot at only a 40% reduction in accidents, which is the same 40% reduction their '16 study found for any car with auto emergency braking.
He definitely has a penchant for exaggeration if not outright dishonesty. It bothers me that people respect someone who can say so many blatantly false things and that an engineer can be so loose with the truth.
That's his shtick though, promise unbelievable things, then when you get halfway there, people don't care you were wrong and/or lying. He understands how to market himself and his ventures.
My guess is he's got too many fans and money behind him. If things ever go truly south, that's when they'll really go after him, because people really care about their money.
>It bothers me that people respect someone who can say so many blatantly false things and that an engineer can be so loose with the truth.
Why does it bother you, though? "People" are like that, right? If you know your target demographics, and thus know which strings to pull, then you can make your target do anything. In this case, invest financially and emotionally in whatever he is doing. Pretty basic advertising.
Elon Musk's target demographic is the "tech"-inclined (administrators and laymen alike; "America is the greatest country in the world!", yay!), and what he is doing is exactly catering to that target. What you see is just a manifestation of that.
Again, pretty basic advertising/marketing tactics. But I think this is the first time we are seeing PR tactics tailored to the nerds/techies. That is why this seems so effective.
In some ways, I think Elon Musk is to tech what Christopher Nolan is to movies.
You're asking why exaggerated claims which have led to deaths are distasteful?
It might be basic advertising, but that doesn't make it right or effective. If Musk would just do what he has done without all the exaggerated promises, why would that not be good enough? The claims make him look worse, not better.
Denying reality is great until it comes crashing down. Just a few examples. Forgive this first title. The important part is the quote:
> Tesla reported first-quarter earnings last week, and while they were better than Wall Street expected, the earthshaking news that emerged was that Musk is taking the carmaker in a new strategic direction.
> Tesla, which delivered about 50,000 vehicles last year, is aiming to deliver 500,000 in 2018. The production target isn't new, but the timeline is. Tesla had previously said it would build half a million cars a year by 2020.
This did not happen, which led him to tweet last July that they could deliver 20k cars per month by December. That did not happen either. Investors were relieved they managed to hit 2k per week this past week (down from the 2.5k they were targeting).
He's got a shaky attachment to the truth. I don't know if he really believes himself (this is called delusion, usually) or he's a liar. Either way it's kind of odd for a guy who doesn't need to exaggerate to be impressive. Not sure what it has to do with Nolan. He's never promised a 20 hour movie.
Break the law? One thing surprising to me: there is a ton of media attention on SV right now (almost all negative), and that does not look like a coincidence. I disapprove of what FB is doing, but on the scale of problems that Big Pharma, health insurance, and big banks are causing, that's not even a blip, and yet most media attention and a good amount of public outrage is fairly skillfully directed toward tech and SV.
Or perhaps that SV is vocal, excessively so: promises miracles, yesterday, at will, almost for free. In other words, massively overpromises, then underdelivers - that amounts to painting a huge sign "KICK ME" on its collective butt. While other enterprises might do that as well, even to a criminal extent, they do not brag about it. This might be all the explanation that's needed, no conspiracy necessary.
I would guess "we'll give you social network, not a total-surveillance tool" would qualify. I don't think the second part would have been a reasonable expectation.
I don't see any (although "FB is free and always will be" has plenty of wiggle room). Also, I don't see any explicit mention that Tesla's Autopilot is actually an autonomous Level 5 system - and yet people assume that. Are you saying that people should only be angry about things they were promised explicitly? (If so, I would agree - but 1. people are not 100% rational, 2. expectations are important for PR, even if not legally binding. Going from "SV doesn't put this in big bold letters on the front page" to "it's a conspiracy against SV, not because they're an easier media target" is a bit too much of a leap for me.)
" Are you saying that people should only be angry about things they were promised explicitly?" I am def. in no position to direct how people should feel about something. To me expecting something beyond what is actually promised just sounds unrealistic. If I hired you to write software and you promised to write software but I was also expecting you to clean my pool that would pretty weird.
Indeed. But if it would make a clickbaitable story, would it seem more lucrative to cover this vs. yet-another-corporate-embezzlement cause? In other words, media attention is also eyeball driven, to the point of sensationalism.
> scale of problems that Big Pharma, Health Insurance, big banks are causing thats not even a blip and yet most media attention and good amount of public outrage is fairly skillfully directed toward tech and SV.
It really shouldn't surprise you. "Big pharma continues shitty but legal business tactics," "Big banks continue to play games with people's money without significant oversight," and "American health care system overly expensive for poor outcomes, marking continued national shame for a leading nation" are all old stories, so there's not much new happening there. There are still a lot of people angry about these things, but there's nothing really new coming out, so it doesn't get coverage. News is naturally biased against news that doesn't change; it just gets boring to most viewers.
I agree with you, with one exception: "Big pharma continues shitty but legal business tactics." I doubt the legal part; they get constantly fined for various illegal activities.
Yeah that depends a lot on what you're mad about wrt pharma. I was referencing the generic high price of drugs issue. They're definitely doing actually illegal things for sure.
That may be correct, as IIHS took a few months and concluded in Jan '17. People are still waiting for the in-house Autopilot, aka AP2, to reach feature parity with the Mobileye-based AP1 (if you follow https://forums.tesla.com/forum/forums/enhanced-ap-hw2-qa-fir...), so I'd suspect AP2 to be worse off depending on when it gets tested.
Currently, the Tesla autopilot page has this as its top headline:
> Full Self-Driving Hardware on All Cars
Followed by this:
> All Tesla vehicles produced in our factory, including Model 3, have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver.
Getting past the arguments of whether Tesla should be calling this "Autopilot", are its claims about the hardware true? Is the failure to detect stationary objects like a fire truck or a gore point solely a result of currently inadequate software? Because it doesn't sound like the sensor suite is anywhere near the AVs of Waymo or Uber.
Their claim is that all the sensing capability is there for the car equivalent of the proverbial sufficiently smart compiler to achieve full self-driving capability. But said sufficiently smart software, running sufficiently smart algorithms capable of sufficiently speedy processing of the sensor data, doesn't exist at the moment.
The claim is falsifiable if and only if one can prove the minimum sensor capability required for full self-driving exceeds what they have. But even proving the falseness at a later date doesn't prove malice or fraud unless Tesla knows right now that their claim is not true.
> Getting past the arguments over whether Tesla should be calling this "Autopilot", are its claims about hardware true?
Tesla actually hasn't called that Autopilot -- they refer to it as "full self driving" and treat it as a separate category. But... is it true? It's vaporware right now, so we can't really say.
Obviously, in a purely hypothetical landscape, even with cameras you should be able to emulate what humans can do and have 100% attentiveness. That is to say, humans don't have lidar and radar, yet humans do drive. So, if you can achieve human-level performance and have 100% attentiveness, it should be possible to beat human crash statistics, given that many crashes are caused by the human failing to respond accurately or quickly enough.
This is all hypothetical, though. All of it comes down to computer vision/software, which is really hard to do. It's possible, but not yet done.
Humans don't have lidar and radar, but humans who are permitted to drive have optical hardware that is superior, in every respect except simultaneity of inputs from different directions, to what Tesla is claiming is sufficient for full, safe driving capability.
It's much harder to imagine software surpassing an alert human driver if it isn't equipped with sensors offering better depth perception, which could compensate for its lack of general intelligence in processing unfolding situations, visual anomalies, etc. Harder still if the optical sensors are objectively worse than human eyes in many lighting conditions.
"Autopilot" is the name of the page that danso referred to; it is in the title of that page and is used on the page 9 times. How would you possibly not recognize this as the name of the feature?
>Obviously, in a purely hypothetical landscape, even with cameras you should be able to emulate what humans can do and have 100% attentiveness. That is to say, humans don't have lidar and radar, yet humans do drive.
Which camera has the same dynamic range as the human eye?
> are its claims about hardware true? Is the failure to detect stationary objects like a firetruck or a gore point solely a result of currently inadequate software?
It might be true, but it's also true that most self-driving researchers prefer to use lidar, not 2D images, because lidar gives better results.
I want to ask: why is Tesla's advanced suite of cameras and ultrasonic sensors unable to detect a big, yellow and black stationary barrier?
In combination with chevron markings that mean "never, ever drive here". Either should be a globally overriding signal to emergency-brake or turn, not "keep following the white line".
If the answer is "during morning light, it doesn't see it", then they need to take the feature offline until the car is fitted with an appropriate LIDAR, or other technology that can detect stationary obstacles.
>> I want to ask: why is Tesla's advanced suite of cameras and ultrasonic sensors unable to detect a big, yellow and black stationary barrier?
The answer to that is that sensors detect, but don't make decisions. The decisions are made by the car's AI, which takes input from the sensors and outputs control signals.
So the pertinent answer is "why is Tesla's AI unable to make the decisions necessary to avoid a big, yellow and black stationary barrier?".
Unfortunately, nobody can answer that, including Tesla engineers, because their cars' AI is trained with black-box machine learning algorithms that are extremely resistant to any attempt to understand how they make decisions.
But, you know. I'm pretty sure they're really safe /s
> Unfortunately, nobody can answer that, including Tesla engineers, because their cars' AI is trained with black-box machine learning algorithms that are extremely resistant to any attempt to understand how they make decisions.
What is the justification for allowing anything we don't understand to be on the road? I really want the NTSB to ask specifically what decisions the Tesla was making and not accept "black box learning" as an answer.
Because we have a right to travel in most countries, cars are the mechanism we use to exercise it, and we can be held responsible for our actions. This black-box AI has none of those attributes.
Apparently their radar can only see objects that move.
So, to see this barrier, you'd have to guess its distance from 2D images. That's a hard problem. I think this would be called "obstacle detection using stereo vision" or maybe even "monocular vision" if they're using one camera.
There isn't as much research into this field as there is into systems that use lidar, because lidar has been proven to work, and academics probably don't care as much about building products at massive scale, so the cost of lidar doesn't matter as much to them.
Searching Arxiv for that phrase [1], the top result is a paper called A Joint 3D-2D based Method for Free Space Detection on Roads [2] that uses Lidar. The next result that doesn't mention lidar is Free Space Estimation using Occupancy Grids and Dynamic Object Detection [3].
Following that is Failing to Learn: Autonomously Identifying Perception Failures for Self-driving Cars [4]. In that paper's conclusion, it states its next plan is to use lidar to improve data quality,
> we plan to incorporate free space computation from the path of the vehicle and from active sensor returns like LIDAR to identify false positives to further improve our assessment and understanding of modern object detectors at the fleet level.
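For anyone curious, the disparity-to-depth idea those stereo papers build on is simple in principle. Here is a minimal sketch using OpenCV; the image files, focal length, and baseline are made-up placeholders, not anything Tesla-specific:

```python
import cv2
import numpy as np

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # rectified left image (placeholder)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)  # rectified right image (placeholder)

# Block matching gives disparity: the horizontal pixel shift between the two views.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed-point -> pixels

# Depth follows from similar triangles: Z = focal_length_px * baseline_m / disparity
f_px, baseline_m = 700.0, 0.3                      # assumed camera parameters
valid = disparity > 0
depth_m = np.where(valid, f_px * baseline_m / np.maximum(disparity, 1e-6), np.inf)

# Crude "is something close straight ahead?" check on the central part of the image.
h, w = depth_m.shape
center = depth_m[h // 3 : 2 * h // 3, w // 3 : 2 * w // 3]
print("nearest surface ahead: %.1f m" % center.min())
```

The hard part isn't this formula, of course - it's getting reliable disparity (or monocular depth) for textureless, sun-glared, or partially occluded obstacles like a crash attenuator, which is exactly where lidar has the advantage.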
I've seen a lot of Tesla apologists talk about how it's such a bad road design, but people seem to be able to dodge it successfully, and so should Autopilot. It seems like this feature should require hands on the wheel until it has enough training data that it doesn't do things like smash you into an obvious obstacle in whatever lighting conditions are possible at any time.
> but people seem to be able to dodge it successfully
Are they? Then why was the barrier already crushed before this crash? And why was it designed to be easily rebuilt? It's almost like someone knew people would have trouble and would eventually drive into it.
> Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015 and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road.
Which might be technically true, but conveniently ignores other, very similar crashes.
My question, which is the same as others', is: does Tesla register events where the driver overrides the system? If they do, how do they account for them?
If they are not even considering that the driver overrode the system because it was about to crash, then all their claims are suspect.
Why? The car isn't driving by itself. What matters is whether Autopilot is a net safety improvement, which it currently appears to be. Why would you add the weird requirement to the system that it be better without human input at all?
I'm going to wait five to ten years before I get my robot car. Let all those early adopters be the guinea pigs we need to reach the point where robot cars are safer than human drivers.
It seems tons of upper middle class to wealthy people are lining up to pay dearly with their wallets and lives to be a billionaire's crash test dummy.
We are nothing like crash test dummies. Crash test dummies are intentionally slammed into barriers at 60+ mph.
Testing of vehicle safety features, and safety features of many products, is most often done in mass markets. Automatic safety features are being rolled out in essentially every single auto brand. Automatic emergency braking (AEB) is set to be standard in numerous makes in a couple years. Lane control, lane reminders, blindspot detectors, these features are all being rolled out on the roads, to varying degrees of success.
This is a less-safe version of the way drugs are brought to market. First, small-scale tests with cells only, then mice, then healthy people, then target patients, and finally post-market surveillance.
Cars indeed have a MUCH less regulated rollout for their safety features, yet to say we are dummies in their tests is not even close to reality. While drug testing may be more stringent, we also don't have 30-40k people a year dying from "not taking a drug" accidents, which is the situation on our roads today.
Ultimately, it comes down to lives saved. Every time you delay a safety feature, you have to ask the question: is this delay going to lead to loss of life or saving a life? That's the moral quandary the automakers are in, and it seems they go with "release it" much more often than not. That will lead to some EXTREMELY well publicized accidents, and yet we will NOT hear of the countless cases where everything went nominally, and who's to say precisely how many accidents were prevented?
Come to think of it, whatever happened to Tesla's self-driving effort? They claimed to be shipping Model 3 cars with all the hardware needed for self driving (not including a LIDAR). They produced videos. Most of this was 1-2 years ago. But they never got to deployment, or even to the point of demoing it for car magazine writers.
Was it really that much of a setback? It was my understanding they used some of their own software in combination and had already planned to take over the role Mobileye was serving.
Maybe I bought too much into their PR spin but it sounded as if it was barely a delay.
From reading owner discussions it seems like the general consensus is that they only just recently reached parity with (and slightly surpassed) their previous Mobileye implementation with their own software/sensor stack.
Two years ago, Tesla claimed self-driving in two years. Last December, Musk was claiming they need 3 more years.[1]
Waymo is currently the only credible player in self-driving. Uber crashed. Volvo backed off. Tesla has problems. Cruise just had one of their cars pulled over and ticketed in SF.[2]
Waymo's cautious approach is paying off.
I wonder if this will be the case that really tests Tesla's blanket legal defense in court. (That accidents are due to less than "fully attentive" drivers.) The reaction time needed in that video to correct for the autopilot is downright scary.
It's a situation where if you were piloting yourself you're a lot more likely to have mentally mapped that you continue in a straight line at that point, so listing to the left is going to feel WRONG even when suddenly blinded by the sun. But if autopilot takes you there, and you're "lizard brain level" paying attention but didn't have that mental navigation map in your head, and the sun's in your eyes, you're going to take a few seconds to try to figure out how to correct so as not to make things worse. That's enough to hit the gore point in that video.
For all those who are clamoring for Autopilot to go back into the lab, consider this: I have Autopilot, and I use it the way people are instructed to. I consider it a companion to my driving, a backup co-driver that has better reflexes. I am in control, with my hands on the wheel, mentally steering the vehicle along with the machine. We are driving the car together, and I overrule the machine when we disagree. Often the machine catches things before I am able to react. I am 100% convinced that my family is safer with "us" driving them than with just me driving them solo.
What do you say to people like me, who, if you took the feature away, would be forced to reduce their safety? It's easy to call for its removal when you only focus on the subset of people who will have their safety increased by removing it (i.e., the people, often through no fault of their own, who come to rely upon it more than it is designed for). What about everyone else who currently benefits from it? Will you force them to go back to driving "solo" to protect those people?
It's not so simple as "Tesla is reckless by releasing this" -- this is providing real safety to people who are using it the way it is intended. The release of Autopilot as-is also makes it possible to accelerate the development of full autonomy, which will save lives that would otherwise be lost to accidents if it arrived later. Any analysis which does not account for this dynamic is missing an important piece of the ethics of removing it.
The main question worth asking is whether there is more Tesla could do to ensure Autopilot is used as a co-driver. The name is a poor choice, to start.
> What do you say to people like me, who, if you took the feature away, would be forced to reduce their safety?
I would say we should go by the numbers. How many accidents and deaths are there per mile, on similar roads (divided), from vehicles of similar class (new, luxury), in similar weather conditions (fair)? Compare that to miles where autopilot is enabled and we have something.
As yet, I don't believe Tesla is required to report this, and they probably won't voluntarily do it. At best, we would get results from certain states. We would never get national or world level data.
So, we'll have to wait for the NTSB and NHTSA reports and look to their recommendations.
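To make "go by the numbers" concrete, the comparison would look something like this; every number below is an invented placeholder, since (as noted) the real per-mile data isn't public:

```python
import math

# All numbers below are invented placeholders; the point is the shape of the
# comparison (same road type, same vehicle class, rate per mile), not the values.
ap_crashes,   ap_miles   = 12, 150e6   # crashes / miles with Autopilot engaged
base_crashes, base_miles = 40, 400e6   # crashes / miles, comparable cars, same roads

ap_rate   = ap_crashes / ap_miles
base_rate = base_crashes / base_miles
print(f"Autopilot : {ap_rate * 1e6:.3f} crashes per million miles")
print(f"Comparison: {base_rate * 1e6:.3f} crashes per million miles")

# With counts this small the uncertainty matters; a rough 95% interval on the rate,
# treating crash counts as Poisson.
def rate_ci(crashes, miles):
    half = 1.96 * math.sqrt(crashes) / miles
    return (crashes / miles - half, crashes / miles + half)

print("Autopilot 95% CI per million miles:",
      tuple(round(x * 1e6, 3) for x in rate_ci(ap_crashes, ap_miles)))
```

The interval is just a reminder that with rare events and limited miles, the uncertainty bands can easily swamp the difference people want to claim.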
I agree that the data will be helpful, but it's also important to remember that "Autopilot" itself is a moving target. Tesla has already made changes that are designed to increase the degree to which drivers are forced to pay attention when the system is engaged.
In my view, the ultimate goal should be to ensure that Autopilot is used in the way I described, and is disabled for drivers who are unable to use it that way. That, in theory, would provide the most benefit for the most people. Perhaps that is a fool's errand, either because it's technically impossible, very few people use it that way, or literally nobody is able to and anyone who thinks they are is being fooled by their own bias (as I'm sure many readers of this thread assume I am).
But it seems a goal worth pursuing before giving up and throwing it out, because the net result of, say, reducing accident fatality likelihood by 50% five years ahead of full autonomy is something like 75k saved lives (in the US).
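For what it's worth, the arithmetic behind that 75k figure is just the ~30-40k annual US road deaths mentioned elsewhere in this thread, times the assumed reduction, times the head start; a trivial sketch with those assumed inputs:

```python
# Back-of-the-envelope for the ~75k figure above. The annual death count is taken
# from the ~30-40k/year range cited earlier in the thread; the rest are assumptions.
us_road_deaths_per_year = 30_000
fatality_reduction = 0.5      # assumed 50% lower fatality likelihood with assistance
years_ahead = 5               # assumed head start over waiting for full autonomy

print(us_road_deaths_per_year * fatality_reduction * years_ahead)  # 75000.0
```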
Moving target, sure, but Tesla has been making statements claiming using AP is safer than not since AP's inception. You can't have it both ways. Even Tesla couldn't be certain whether certain changes lowered the accident rate or raised it. We should look at total accidents, not miles driven since the latest version.
Perhaps the best comparison is right within Tesla's data. Compare miles driven in the same model with AP activated and with it turned off on the same roads.
> In my view, the ultimate goal should be to ensure that Autopilot is used in the way I described, and is disabled for drivers who are unable to use it that way
Actually there has been some work in determining when drivers are distracted [1]. This seems to be what you suggest, something that can be used to disable AP under certain conditions. Would you support Tesla installing such a system?
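Purely as an illustration of what such a gate might look like (this is not how Tesla's system works, and every name and threshold here is invented), the escalation logic could be as simple as:

```python
import time

def autopilot_gate(eyes_on_road: bool, hands_on_wheel: bool, state: dict,
                   warn_after_s: float = 3.0, disable_after_s: float = 10.0) -> str:
    """Return 'ok', 'warn' or 'disable' based on how long the driver has been
    inattentive while the assist feature is engaged. Thresholds are made up."""
    now = time.monotonic()
    if eyes_on_road and hands_on_wheel:
        state["inattentive_since"] = None   # attentive again: reset the timer
        return "ok"
    if state.get("inattentive_since") is None:
        state["inattentive_since"] = now    # start timing the inattention
    elapsed = now - state["inattentive_since"]
    if elapsed >= disable_after_s:
        return "disable"   # a real system would hand back control gradually
    if elapsed >= warn_after_s:
        return "warn"
    return "ok"

# First call returns 'ok' because the timer just started; calling it repeatedly
# while the driver stays inattentive escalates to 'warn' and then 'disable'.
state = {}
print(autopilot_gate(eyes_on_road=False, hands_on_wheel=False, state=state))
```

The interesting (and hard) part is the sensing side that feeds `eyes_on_road`, which is exactly what the cited distraction-detection work is about.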
Why not the other way around? You do the driving, and the computer is the backup in case you mess up?
Humans are extremely bad at monitoring monotonous things, even if we are trained to do them. That's why trains have elaborate dead-man switches to ensure the operator is paying attention.
From these accidents it's clear that Teslas aren't good enough at making sure the driver is paying attention.
And it's also pretty clear that these Autopilot victims weren't aware of how distracted they really were. I'm pretty sure they thought they had everything under control, right up to the point when their car did something stupid.
> Why not the other way around? You do the driving, and the computer is the backup in case you mess up?
Unfortunately, having the computer take over has the same issues. If it thinks you're following the wrong lane, it could choose to steer you into a barrier, for example.
Chris Urmson's team at Google ran user tests at the start of their autonomous vehicle program and found that users misbehave [1] when behind the wheel of semi-autonomous vehicles, despite having been given instruction to pay attention. We're seeing that scenario play out in slow motion in Teslas and perhaps other driver-assistance programs that get less press.
Auto-pilot will put massive pressure on states and municipalities to improve road maintenance - specifically lane indicators and fixes to "peculiar" roads/intersections etc. This will increase budgets, which will in turn create massive opposition to self-driving autos by those not wanting to pay more taxes for other people's fancy cars.
I travel/drive a good deal in the summer and for a few years now have silently (mostly) muttered to myself that if auto-pilot cars are going to use "painted lines" to navigate - they're "gonna have a bad time".
Poorly painted lines aside, think of how many "wtf" moments you have on the roads around your town - just down the road from my house is a road that just... ends (due to poor planning years ago). It's obvious to human drivers what to do (you're supposed to exit the paved road and drive over a hump to the dirt road that continues), but for auto-pilot? Not so much unless they make use of "data sharing" that would reveal the human solution to this screwed up piece of road.
> Wednesday, a Tesla spokeswoman told the I-Team, "Autopilot is intended for use only with a fully attentive driver," and that it "does not prevent all accidents - such a standard would be impossible - but it makes them much less likely to occur."
Actively veering towards stationary barriers in otherwise perfectly safe conditions is not what I would describe as reducing the likelihood of accidents.
In this one specific case sure, the AP was confused about the lane markings and followed the wrong path. But in plenty of other cases the AP is responsible for preventing accidents (no, I have no link to back up my statement). Assuming that is the case, then that is what I would describe as reducing the likelihood of accidents. You cannot focus on a specific accident (or even a handful of accidents) and use that to refute the systemic benefits.
Vehicles with autonomous features will save lives. Fewer people will be injured or killed in cars that have these features. However, some people will still be injured or killed, and those people will comprise a different set than would otherwise have been harmed. They or their families will seek restitution and hence, the way forward will be determined by the courts.
The thing is that most driving deaths don't occur on the highway on dry, well-lit roads, and current vehicles can't drive for shit in bad weather. So they shouldn't have autopilot on at all; they should just have the collision-avoidance features on.
Agreed - I might test using it, but I would ensure that I have 100% mental focus. Now I'm wondering, if I have to work hard to pay attention that a computer is operating correctly, what advantage am I gaining by using it?
AutoPilot is a human assisted utility. I think many people here, and in general, expect too much of it and are being too critical when there is an incident.
I do think Tesla needs to communicate better what AutoPilot is and isn't because we've now lost lives due to operator negligence, and by negligence, I mean the operator is still responsible for the vehicle despite relinquishing control to a computer.
Just got a Model S after driving my Toyota Camry for the last 10 years. Autopilot is a fun toy, but I can't understand for the life of me why people actually use it like it's a real thing. I have it off on mine.
When I look at the video, I think I would probably drive wrongly for the first few seconds: the road lining is not there anymore, which causes you to make mistakes. But people can correct quickly when there is an anomaly. Weird that Autopilot didn't see the divider.
Biggest thing I gathered was the public safety barrier was damaged from a previous crash, 12 days earlier, so the driver basically hit a hard wall. This has more implications for the US’s infrastructure than Tesla’s safety IMO.
> CalTrans finally sent ABC7 News a statement later Thursday "confirming" it's their policy to fix broken safety barriers within seven days, or five business days ... but storms delayed the work. As Dan Noyes reported Wednesday, the family told me they believe Walter Huang would have survived, if CalTrans fixed that safety barrier in a timely fashion.
Until the barrier is fixed, that part of the road should have some sort of marking to indicate that there is extra danger there; that's what cones were invented for.
I'm sure hitting a series of traffic cones taking up the space where the safety barrier was could have alerted even a modestly distracted driver to brake.
Plus, the genius driver had a number of issues with Autopilot in the same spot. He had complaints in with the Tesla dealership that his car had tried to steer him into the barrier at that location before, I believe. Something like 7 complaints filed?
Maybe he was trying to get out of his loan but didn't know the impact absorption barrier had been removed.
My former employer makes a camera-based lane departure warning system. Although I was not involved with its development, I do know that the amount of testing that they do on these systems is very extensive (hundreds of people involved in the test process).
I would be very interested in seeing a side-by-side test of the LDW systems of major manufacturers. This might give us an idea of whether the problem is fundamental to the technology or is simply a deficiency in the engineering approach of one (or some) manufacturer(s).
The root problem (as described by Taleb in "Skin In The Game") is that not all statistical probabilities are "ergodic" -- that is, free of any risk of ruin or extinction.
The problem with saying "our automatic driving system is much better than the average driver, so you should be fine letting our system drive" is that the statistic describes a pool of drivers, not you: you are exposed to the system's whims serially, trip after trip. One "average driver"-level mistake and you're dead. No replays.
Therefore, the compound "odds" are INCOMPUTABLE. Just like being paid $1,000,000 per trigger-pull to play Russian Roulette: you are not earning an average of $833,333.33 per try! On "average", over a pool of random 1-time players, sure. But you? You're guaranteed to be dead in a small number of plays. Non-ergodic and ergodic probabilities are incomparable.
So: since automatic driving systems are non-ergodic, they must be dramatically better (i.e., orders of magnitude better) than the "average driver" to be considered viable.
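A tiny simulation makes the ensemble-vs-individual gap concrete. The payout, odds, and number of pulls mirror the Russian Roulette example above; everything else is arbitrary:

```python
import random

PAYOUT  = 1_000_000   # dollars per trigger pull (from the example above)
P_DEATH = 1 / 6       # one loaded chamber
PULLS   = 20          # how long a single player keeps playing

# Ensemble view: a pool of one-time players. Average payout per pull is
# (5/6) * 1,000,000 ~= 833,333 -- the number the "better than average" pitch quotes.
players = 200_000
ensemble_avg = sum(0 if random.random() < P_DEATH else PAYOUT
                   for _ in range(players)) / players
print(f"ensemble average per pull: ${ensemble_avg:,.0f}")

# Time view: one player, pull after pull. Survival decays as (5/6)^n, so the
# typical repeated player ends up dead long before getting rich.
print(f"chance of surviving {PULLS} pulls: {(1 - P_DEATH) ** PULLS:.1%}")  # ~2.6%
```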
I live in a part of the world where even cruise control is unusable for 6 months of the year. Automatic driving systems? Not even thinkable, except for perhaps a few days of the year, on a few simple portions of a few roads...
> So: since automatic driving systems are non-ergodic, they must be dramatically better (i.e., orders of magnitude better) than the "average driver" to be considered viable.
The risks from human drivers are also non-ergodic, so that suggests that if automatic driving systems are even minimally better than the average driver you should demand that every other driver be replaced with them without delay, because you will die sooner with human drivers on the road.
Of course, you won't want to be replaced unless the automatic driver is better than you, and human drivers all think they are far above average. And that’s the real problem.
The real problem is that, yes, human drivers ARE better than computers — for some unknown subset of driving. It’s that small subset of lethal mistakes you avoid by not letting your computer drive that makes all the difference.
The computer must not just be better on average — people must know that it’s better overall, across all possible weird events that a human just might throw a Hail-Mary and survive!
> It’s that small subset of lethal mistakes you avoid by not letting your computer drive that makes all the difference.
The small subset of lethal mistakes you avoid by not letting yourself drive matter too...
Here's what I think is really going on. There's a transition from "not good enough to trust with a human life" to "approximately as likely as a human to make a fatal mistake, even if in different ways" to "clearly less likely to cause a fatal accident than even a good human driver". But there's also a huge pot of money for whoever wins this market. So at least some players are pretending that they're further along the transition to "better than human", hoping that the market believes them.
A second point: when we're in the state of "approximately as likely as a human to make a fatal mistake, even if in different ways", humans look at the mistakes that the computer makes and think: "That's a really stupid mistake to make. How could it be so dumb?" But just because the mistakes aren't ones that a human would make very often, that doesn't mean the computer isn't on par with humans in terms of overall death rate. The computer still looks bad for the mistakes it makes, though.
> human drivers all think they are far above average. And that’s the real problem.
This is often a joke, and there's some truth to it - people are often overconfident. But there certainly are individuals who think they are below average. They might be rare, but they could still be a market.
>> The company tells Dan Noyes they have made it clear -- Autopilot is a driver assistance system that requires you to pay attention to the road at all times.
The question is why it is Tesla, in particular, that has to remind its customers that they have to actually pay attention when they drive its cars. Is it possible the company somehow managed to convince people that its cars are equipped with self-driving software, rather than a "driver assistance system"?
When the self-driving revolution started, conservative people claimed that one accident would stop the whole business. Well, there have been many accidents already and people are still willing to take the risk. One reason may be our tendency to trust things we do not understand. We feel that if the feature is already in the car, it must be safe. Well, it is not. And even worse, it is driven more by marketing and money than by safety regulation.
Self-driving cars have absolutely no business being on any road or lane that is not purposely built and reserved for self-driving cars. Period. Self-driving cars need an ecosystem made for self-driving cars. It's frankly madness to think otherwise.
Absolutely. It shouldn't even be that hard, with the billions that are being thrown at the tech. They should be spending time and money to upgrade specific highways and certify specific routes as "safe for automation", rather than trying to test their cars on the entire North American road network with millions of crazy edge cases. Over time, these certified routes will grow into their own network.
If they are unable to avoid blatantly obvious yellow and black barriers / safety cushions and their algorithm is a simple "follow the whitest white line", then they should not be on shared roads.
So, aside from the fact that I cannot afford these cars, I would also give them at least 10 years to mature and get beyond what really does seem to be, at best, a beta.
Right from the start I was very much wondering how this feature was even allowed on the road, just from a regulatory point of view. It very much sounded and looked like a complete autopilot.
That said, people go through a driver's test, get their license, and are then allowed to drive pretty much any vehicle. Can you just solve this by giving the driver assistant a test and seeing if it passes? This needs some kind of filter or regulator to make sure it is safe.
I have to believe that, at this point, given the markings on the road, that California Dept of Highways (or whoever is responsible for lane markings) is just trying to kill people.
Just my opinion, but I don't think the local Bay Area ABC I-Team news can be trusted as a reliable primary source. Seems like they are "cashing in" on a national story.