Hacker News

Agree.

The "skill level" of an autonomous vehicle is continuous, not binary. I could totally see people buying a car based on how trustworthy it is (crash statistics).

Perhaps there will be some legally standard terms or discrete statistical thresholds makers will have to meet in order to legally claim certain levels of autonomy.



It will probably be "collisions per million miles" or "fatalities per million miles" or something of the sort.

Right now those numbers are roughly tabulated by insurance companies, but they're not published as part of the boilerplate posted on the car the way miles-per-gallon figures often are.

If people knew they were buying a death trap they'd probably give it some sober second thought.


> I could totally see people buying a car based on how trustworthy it is (crash statistics).

The problem with such an idea is that even if you sell 100 000 cars in the first year after launch (really good), you'll have to wait for about five years before you have the statistics to claim your car has a better-than-average safety record. But after those five years, your model is going to be near end-of-sales, so it won't matter.

And that's with safety records today. As cars keep getting safer, this gets worse and worse. Remember, NHTSA averages today are over the entire car population. Today that population has a significant tail of cars from the late 80s or early 90s without basic safety measures like airbags and crumple zones. These will have a much higher fatality rate. But in ten years, when those are mostly gone from the roads, it's going to be ludicrously difficult to make statistically valid statements about car model safety while that model is still being sold.


What is stopping Robotcars Inc. from claiming "no autonomous driving AI from our company has ever been at fault or caused an accident"? The safety of the vehicle in an accident and the safety of the autonomous driver are getting conflated. You don't need to wait for the glacially paced NHTSA to post your accident statistics to brag about your AI avoiding accidents.

>But after those five years, your model is going to be near end-of-sales, so it won't matter.

I don't get this reasoning at all. Brand reputation matters a great deal. For example, Toyota put out a new Tacoma this year but people already assume it will be reliable and safe based on Toyota's track record. They don't have to start all over again and win all the customers back.

While models change and structural safety will vary slightly from model to model, the brain of the car will continue to improve, and we won't be starting over because they change the body lines a little bit.


It's not the NHTSA that's glacially paced. It's the simple fact that at current accident rates, statements about safety after one year of a model being on the road are statistically unfounded. It's simply not possible to say "this car is safer than average", because on average you would expect fewer than one fatality. Having zero fatalities in the first one or two or even five years after launch proves nothing.
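The Poisson arithmetic behind this point can be sketched quickly. The fleet-mileage figure below is an illustrative assumption (cars sold gradually through the first year), not a number from the thread:

```python
import math

AVG_RATE = 1 / 94e6  # US average cited below: one fatality per 94 million miles

def p_zero(miles, rate=AVG_RATE):
    """Poisson probability of seeing zero fatalities over `miles`
    even if the fleet is exactly as dangerous as the average car."""
    return math.exp(-rate * miles)

# Assume ~50 million fleet-miles accumulate in the first year.
print(round(p_zero(50e6), 2))  # 0.59: zero deaths is the likely outcome anyway

# "Rule of three": with zero fatalities observed, the 95% upper confidence
# bound on the true rate is about 3/miles, so claiming "safer than average"
# needs roughly 3 * 94e6 fatality-free miles.
print(3 * 94e6 / 1e6)  # 282.0 (million miles)
```

So even a fatality-free first year is entirely consistent with merely average safety, which is the commenter's point.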

Saying "our AI avoided four potentially-fatal accidents" is also a vacuous statement, since a) you don't know whether an accident would be fatal or not, since accidents are ridiculously chaotic systems and b) there is no way to say that a human driver would not also have avoided all those accidents.

In fact, most accidents are caused by people driving under the influence of alcohol or drugs, falling asleep, or being on their phones. Using AI to detect these drivers and stop them from driving would be orders of magnitude easier than autonomous cars, and would provide 95% of the potential safety increase with none of the drawbacks. Why don't we do that? Consumer backlash is the answer. Even alcohol-preventing locks, mature tech today, are not being made mandatory because consumers would be mad.

I agree that brand reputation matters a lot. But people think we are going to see huge paradigm shifts in autonomous driving over the next ten years, and then your track record is for something completely different. Meanwhile the Tacoma is just a slight update and restyling of mature technology.

Adding to this is the fact that one freak accident will completely shatter a company's records. Imagine if a Tesla with next-gen Autopilot filled with seven passengers is driving through the Alps in October at 2°C: while the car is inside a tunnel, supercooled rain starts falling, covering the road in an invisible layer of super slippery ice. (This actually happens.) The car goes off a cliff and kills all seven. Yes it's far-fetched, sure. But it will still be seven fatalities caused while the AI was driving, and that suddenly puts the AI car on par with a 2002 Honda Civic.


>In fact, most accidents are caused by people driving under the influence of alcohol or drugs, or falling asleep, or being on their phones.

Which can't ever happen with AI driving. That's a huge sales pitch to consumers in itself without even having to wait for the data.

>Meanwhile the Tacoma is just a slight update and restyling of mature technology.

Wrong!

>A: What is carried over from the previous Tacoma in this new truck?

>S: Not much. Maybe a seatbelt bracket! The base frame has been modified. The base frame rail design is similar but it’s not carryover. The body shell and body structure are new. For the structure, we use hot-stamped steel.

http://www.automobilemag.com/news/q-a-with-2016-toyota-tacom...

If that's a slight update, new models of autonomous cars can also be slight updates.

Average consumers aren't sitting around arguing statistics either. They will be marketed as safer. It's plain to see how autonomous cars will be safer in many respects already (Drunk driving or distracted driving for example). The convenience factor is by far the biggest selling point though. Sleep/work/read on your commute! Rent your car out as an autonomous taxi to earn more cash!


Sure, incremental changes have been made to the new Tacoma, and you can argue "nothing has been carried over". But there is nothing revolutionary, on the same scale as a car not even having a steering wheel.

Because that's what you're talking about, steering wheel-less autonomous cars. It surely isn't plain to see that they will be safer. We've already seen the first fatality on Autopilot. And I, for one, don't think these will be in the hands of consumers for at least another decade or two. Simply because of costs and liability.


never dismiss the power of spin:

>Tesla says Autopilot has been used for more than 130 million miles, noting that, on average, a fatality occurs every 94 million miles in the US and every 60 million miles worldwide. The NHTSA investigation, Tesla says, is a "preliminary evaluation" to determine if the Autopilot system was working properly, which can be a precursor to a safety action like a recall.

so according to this, autopilot is already safer than the "average" driver.

http://www.theverge.com/2016/6/30/12072408/tesla-autopilot-c...


130 million miles is not a statistically significant distance over which to make claims about safety when the national average is one fatality per 94 million miles. With so little data on Autopilot compared to other vehicles (a few fatalities vs. thousands), this could just be natural randomness, or a small effect caused by drivers being more cautious while using Autopilot.


Given that there have been at least 3 Autopilot-related fatalities this summer over approximately 130 million miles driven worldwide, that is an average of roughly one fatality per 43 million miles, or significantly less safe than even the worldwide average.
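A quick Poisson sanity check, taking the figures quoted in these comments at face value (130 million miles, 3 fatalities; neither verified here), shows how little three events pin down:

```python
import math

MILES = 130e6       # Autopilot miles claimed by Tesla, per the quote above
US_AVG = 1 / 94e6   # one fatality per 94 million miles (US average)
FATALITIES = 3      # figure cited in the comment above

# Point estimate: one fatality per ~43 million miles
print(round(MILES / FATALITIES / 1e6))  # 43

def poisson_pmf(k, lam):
    """Probability of exactly k events for a Poisson mean lam."""
    return math.exp(-lam) * lam**k / math.factorial(k)

# If Autopilot were exactly as safe as the US average, we'd expect:
lam = MILES * US_AVG  # ~1.38 fatalities over those miles
# Probability of seeing 3 or more deaths anyway, by bad luck alone:
p_3_plus = 1 - sum(poisson_pmf(k, lam) for k in range(3))
print(round(p_3_plus, 2))  # ~0.16
```

With a roughly one-in-six chance of seeing three deaths even at average safety, the data can't firmly establish "less safe than average" any more than Tesla's spin establishes "safer".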


One of the problems with this is that it inevitably leads to creativity with the definition of "at fault". For example, a few months back an autonomous car lost control, steered toward some parked cars, and crashed into them. The spin in the media blamed the driver for not managing to seize control and stop it from crashing. Of course, what really matters is whether autonomous cars make driving safer, not whether we can blame all the crashes on the driver or whether a perfect driver would've avoided them.


I think the problem in that example is with the definition of autonomous.


I hope these autonomous vehicles refuse to drive if the vehicle isn't meeting certain safety standards. The last thing a company needs is headlines being made because some jackass took their car out when the brakes were worn down to nubs or the tires were completely bald.

Incidents of that sort with a human driver are easily explained. If that happened with a self-piloting car it could scare people off the very idea.

The cars should have some kind of basic self-preservation tests.


As I said in the comments above, the single most effective measure for reducing accidents would be checking for alcohol- or drug-influenced drivers. Those cause many more accidents than mechanical failures.

Yet even though "alcohol-locks" have been mature tech for five years, no car manufacturer or politician dares to make them standard, simply because there would be public outrage.

Also, completely worn brake pads or tires already come with a basic safety feature: both cause the car to make relatively loud, ominous screeching or rumbling noises.


As a person who almost never drinks alcohol (that stuff tastes nasty) an alcohol-lock would just be a big annoyance.


Well, current-gen tech is. But an AI/camera based version that quickly checks your pupil dilation when you put the engine in gear could work just fine. It could also catch drugs other than alcohol.

Or you could do near-infrared spectroscopy built into the gear lever (this is already being pushed by the NHTSA, and will probably be in cars in a few years).

Or it could even have a selective alcohol-lock using current-gen tech that only triggers a check if alcohol is detected in the air circulating in the car.


Probably the most advantageous part of fully computerized cars will be DRM and part management. Like your printer which refuses to print black because the yellow cartridge (which is half full) is past its expiration date, your car will in the future refuse to drive because the dealer hasn't changed its oil with the DRM oil filter in the past 1500 miles.

Mandatory adherence to the OEM's maintenance schedule (and every appointment dealer-only!) is every car maker's dream.


Although you're right that they'll probably just force maintenance at particular intervals, it would be nice if the car could do a quick self-test and, if it fails, refuse to drive, or at least refuse to drive quickly.

Low tire pressure? Brakes operating outside of acceptable parameters? Tires behaving strangely? It's manual or nothing at that point.


They would probably refuse to drive autonomously if the maintenance period has been exceeded without being reset, or if some sensor or mechanical issue is detected. But you would always be able to drive manually in case of emergency.
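The gating logic these comments describe could be sketched as follows. Everything here (field names, thresholds, mode labels) is hypothetical, purely to illustrate the idea of a self-test that disables autonomous mode while leaving manual driving as a fallback:

```python
from dataclasses import dataclass

@dataclass
class VehicleStatus:
    # Hypothetical sensor readings
    tire_pressure_psi: float
    brake_pad_mm: float
    miles_since_service: int

def allowed_modes(s: VehicleStatus) -> set:
    """Run a basic pre-drive self-test; on failure, autonomous mode
    is withheld but manual driving remains available for emergencies."""
    passed = (
        s.tire_pressure_psi >= 30       # illustrative threshold
        and s.brake_pad_mm >= 3.0       # illustrative threshold
        and s.miles_since_service <= 10_000
    )
    return {"manual", "autonomous"} if passed else {"manual"}

print(allowed_modes(VehicleStatus(34, 8.0, 4_000)))  # both modes available
print(allowed_modes(VehicleStatus(34, 1.5, 4_000)))  # worn brakes: manual only
```

The design choice mirrors the thread: the self-test restricts only the autonomous feature, so a failed check never strands the driver.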


In the case of Apple, they will have plenty of early adopters willing to buy those batches of cars.



