
We can compare this statistically by looking at accidents per 1,000,000 miles driven of autonomous cars vs humans. Over long mileage, the rate of prevented accidents shows up in the data as a lower overall accident rate.
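Just to make the metric concrete, a minimal sketch of the arithmetic (all counts here are invented, not real Waymo or NHTSA figures):

    # Invented counts for illustration only.
    av_crashes, av_miles = 20, 7_000_000
    human_crashes, human_miles = 1_500, 250_000_000

    def per_million(crashes, miles):
        return crashes / miles * 1_000_000

    print(per_million(av_crashes, av_miles))        # ~2.9 crashes per 1M miles
    print(per_million(human_crashes, human_miles))  # 6.0 crashes per 1M miles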

I don't have the numbers at hand, but Waymo does great on this metric.

Now, that's not to say that Waymo would avoid any specific accident better than a human. But from a public health standpoint, that's not really the right way to think about it anyway.




IMO this is a slightly fake stat. You're comparing to overall stats that include the drunk, the high, the sleep-deprived, and the crazy wild aggressive reckless.

If you compare to my mother's driving, injuries per mile are higher for the autonomous cars.

You can argue that I'm making an unfair distinction since those people exist, but I'd say the people driving under those conditions are committing a crime and shouldn't be counted the same.


You're making a classic is-ought fallacy here. Drunk, high, sleep-deprived or "crazy wild aggressive reckless" human drivers ought not to exist, but they do, and thus it's absolutely fair to compare autonomous drivers against them.


If GP's mother switches from driving for herself to having Waymo drive for her, her risk of injury will increase. Therefore, if her goal is to avoid injury, it would be counterproductive for her to make that switch. That conclusion does not require any is-ought fallacy.


All self-driving > reckless-drivers-only self-driving > current situation > small-random-population-only self-driving > GGP's-mom-likes-only self-driving.

Unfortunately, identifying people like GGP's mother and excluding them from a self-driving requirement is prohibitively expensive (and icky in all kinds of ways), and so would be predicting reckless driving behavior (approximately every driver is reckless at some point anyway, GGP's mom included). So it's all self-driving or no self-driving, and all self-driving is strictly better for all of us, even if GGP's mom has to briefly accept slightly elevated risk.


> identifying people like GGP's mother and excluding them from a self-driving requirement is prohibitively expensive (and icky in all kinds of ways)

Not sure I follow the thinking here. We already have ability tests in driver's license exams.


Yes, and every mad or reckless driver passed them with flying colors, kind of by definition; otherwise we'd be talking about drivers illegally operating road vehicles. Approximately all traffic accidents involving drivers involve drivers with a valid license who passed the ability tests.


A simple Google search reveals that the share of accidents caused by unlicensed drivers is ~20%, not ~0%: https://usclaims.com/educational-resources/non-licensed-driv...

Moreover, the requirement to receive a license is only to pass, not to pass "with flying colors".


Those tests are dangerously insufficient to keep unsafe drivers off the road.


If your mother is a decent driver, then almost all the danger she faces while driving comes from other people being shitty.


That may be true, but I expect there's no scalable way to assess drivers for GGP-mother-ness. So for public policy purposes we should encourage and allow self-driving. And possibly even ban manual driving if we can't distinguish GGP mothers from poor drivers.


Exactly. I know that my odds are different from the average.


So are you one of the 88% who know they are better than average, or one of the 12% who are worse?

https://www.smithlawco.com/blog/2017/december/do-most-driver...


> Obviously, not everyone can be above average. Exactly half of all drivers have to be in the bottom half when it comes to driving skills and safety.

This page is confused about the difference between mean and median. The 88% of drivers who think they are above average are not necessarily correct, but there would be nothing logically inconsistent about them being correct.
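A toy example (numbers invented) of how a skewed distribution makes that possible:

    # 100 hypothetical drivers: 88 with no crashes, 12 with five crashes each.
    crashes = [0] * 88 + [5] * 12

    mean = sum(crashes) / len(crashes)      # 0.6 crashes per driver
    above_average = sum(c < mean for c in crashes)
    print(mean, above_average)              # 0.6 88 -- all 88 careful drivers beat the mean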


This has nothing to do with being "a better driver" than average. If you simply don't drive while intoxicated, extremely tired, or distracted with your gadgets, you're already beating the average when it comes to collisions and injuries.


It's mostly that. I don't think I'm more skilled or something. Also, using a smartphone while driving is illegal, but somehow using a 17" touchscreen that mirrors your smartphone and shows incoming texts is OK and common in newer cars.


> If you simply don't drive while intoxicated, extremely tired, or distracted with your gadgets, you're already beating the average when it comes to collisions and injuries.

It still does not make you a good driver. I've personally been in situations where I'm in the right lane, going slightly below the speed limit, when a vehicle (usually a pickup truck) passes me on the shoulder to the right, often blaring its horn in disapproval as if they were on their way to save lives.

> Rules for Using the Shoulder. The laws for using a shoulder vary in each state, but it is illegal in all states to use the shoulder to bypass traffic or to pass another driver.

I can't wait to make it illegal for humans to drive on public roads.


Being better than average is easy because the average is dragged down by people who drive drunk/tired/etc. A robocar with average safety would be a downgrade for most responsible people.


Yeah, I have 0 accidents, so nobody is going to convince me the AI is safer. Meanwhile, my car's ESC (electronic stability control) has tried to send me off a cliff before.


In no possible future would we see an immediate change from human drivers to 100% self-driving, so self-driving will have to co-exist with human drivers, both the better-than-average and the drunk ones.

I'm absolutely not convinced that an AI would fare better than a competent human at handling a drunk driver's driving style. The statistics favor AI because it's good at handling monotony, but that doesn't necessarily translate to safety in complex situations.


In a few places, driving too passively (as AI cars seem to do) will cause aggressive drivers to create dangerous situations for you. They don't want to be stuck behind someone who's letting people in. I also wonder if the AI cars go the speed limit on freeways, because sometimes that's worse than matching traffic that's speeding a bit.


Others have made better points, but I want to add one more thing to point out that you may be making the same "is-ought" mistake yourself:

Impaired drivers (drink, drugs, sleep), bad drivers, dangerous drivers, etc., who ought to use the self-driving function, are exactly the kinds of people who won't, and who will continue to drive badly.

What this means is the safer drivers leave it off (because they outperform the system), while dangerous drivers also leave it off because they mistakenly believe they outperform the system. The traits that it's meant to protect people from are the very traits that ensure it can't do its job.


When it is good enough, we should remove the ability for humans to drive cars manually in all new cars after that point.


Or even migrate the testing regime to more frequent re-checks and a harder pass condition.

If people don't have to drive to thrive in society, we can make driving far more of a privilege. Contrast what a private pilot has to do to get certified.


You're in the right direction, but I think you don't have the right words to describe what you're trying to describe. I made a sister comment, but if I understand you correctly, it would be more accurate to say that the averaged statistic removes important variables for making accurate conclusions.

We need not look at your mother; we can consider different environments instead. I think most of us would say it would be silly to compare accidents on the highway to accidents on an urban street, but an "average by mile" is doing precisely this. We'd call this "marginalization", and it is why you should be suspicious when anyone discusses averages (or medians!).

(On that note, the median and average are always within one standard deviation of one another. It's useful because if you have both and they are far apart you know there's a lot of variance.)

I hope I accurately understood you and can help make a clearer message.
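A toy illustration (numbers invented) of what that marginalization hides:

    # Invented numbers: per-mile risk differs a lot by environment.
    highway_crashes, highway_miles = 120, 80_000_000   # 1.5 crashes per 1M miles
    urban_crashes, urban_miles = 200, 20_000_000       # 10.0 crashes per 1M miles

    pooled = (highway_crashes + urban_crashes) / (highway_miles + urban_miles) * 1e6
    print(pooled)  # 3.2 -- the pooled "benchmark", dominated by highway miles

    # An urban-only fleet at, say, 6 crashes per 1M miles looks worse than the
    # pooled 3.2 while actually beating the 10.0 urban human rate.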


> It's useful because if you have both and they are far apart you know there's a lot of variance

ummm. I would refrain from using nonparametric skew to make a comment about the magnitude of variance.

Essentially, the gap between the mean and the median will always be bounded by 1 sigma. The ratio abs(mean - median)/sigma is the nonparametric skew. It is at most 1 for any distribution (hence nonparametric: no distributional assumption required).
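For the curious, a short derivation of that bound, using the fact that the median minimizes c -> E|X - c|, with Jensen's inequality on both ends:

    |\mu - m| = |\mathbb{E}[X - m]|
              \le \mathbb{E}|X - m|
              \le \mathbb{E}|X - \mu|
              \le \sqrt{\mathbb{E}[(X - \mu)^2]} = \sigma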

For symmetric distributions this ratio is 0 (and for unimodal distributions it is at most sqrt(3/5) ≈ 0.77). As the gap between the mean and median grows, the data gets more spread out, and the ratio captures that spread and the consequent asymmetry. But you are using the value of this upper bound to make a comment about s^2, which is very clever, but inaccurate.

Say you center the rv and you have an asymmetric distribution: mean 0, and say the median is 100. Then the stdev must be at least 100, so the variance at least 10,000. Which looks like "a lot of variance". But is it really? Variance has a scaling problem, which is precisely why we take the square root, so the stdev stays on the scale of the mean. So at best one can say the stdev is at least as big as the mean-median gap. But that's not very informative, because if the mean is -50 and the median is +50 we are left with the same absolute gap of 100, and the same statement applies to the stdev even then.

I guess if I had to compare the variance of some sample X to another sample Y, to claim that the variance of X is much larger than that of Y, I would use a standard F-test. Cooking up a test based on the gap between mean and median in a single sample seems somewhat shaky. It is very creative though, I grant you that.


Perhaps I gave the statement more strength than intended. I don't view any metric as anything more than a guide. The reason I use nonparametric skew this way is explicitly as a quick-and-dirty read on the data, essentially to decide whether I should take someone's numbers at face value. It's about being a flag.

Going about the world in an everyday fashion, I generally won't have access to the variance (and if I did, I wouldn't need this hack), and I can't run an F-test on the fly. Usually you're presented with the mean; the median can still be hard to find, but it's usually more obtainable than the variance or any other information.

So I take your concern, and I think you were right to bring it up, because the way I stated things could clearly be mistaken (I'll admit to that). But rest assured no strong decisions were being made with this; I only use it as a sniff test. I do think it helps to give people a bunch of different sniff tests, because it's hard for us to navigate data, and if you're this well versed I'm sure you share the frustration of how difficult it can be to make informed decisions. So what tools do we have that can set off red flags and help us not be deceived by those who just throw numbers at us and say that this is the answer?


> why you should be suspicious when anyone discusses averages

I like to say that the average human being has one testicle and half a vagina, which is not very representative of anyone around.

> On that note, the median and average are always within one standard deviation of one another

Oh really? That's cool.


Haha yeah that is accurate. The right language is situational though haha. People are generally overconfident in their ability to mathematically describe things. There's a cliché, "all models are wrong", and like all clichés it is something everyone can repeat but not internalize lol


You should not forget the second part: all models are wrong, but some are useful.


Yes, but unfortunately the second part is usually employed by people who want to sweep under the rug the fact that their model is dubious.

All models are wrong, only some of them are useful, and only when handled with care.


The second part is the obvious part that often doesn't need restating. Models can be incredibly powerful tools.


But your mother is still better off with the drunks not driving (they may crash into her car). So she may still have fewer accidents in a world where everyone goes self-driving.

This will still leave us with the random and dangerous behaviours of cyclists and motorcyclists, though.


But one of the points of self-driving cars is to remove those bad drivers from the driving seat.


Maybe in theory, but then you have a bigger social/psychological/economic problem, because these populations aren't necessarily the ones that are going to be compatible with the business model for autonomous cars.

And if you get a bigger proportion of "good drivers" in autonomous cars than there is in the overall population, you're in fact increasing the overall number of accidents.


I agree that a very probable outcome is that the really bad drivers will, for one reason or another, keep using non-self-driving cars far longer than good drivers. But a good self-driving car should also have faster reflexes than a good human driver, so it might still mitigate the damage done by a drunk driver.


I have never once heard self-driving cars sold with the argument "they're great for the sleep-deprived and alcoholics".

Maybe they should be.


I have always wanted a self-driving car so there's no more "designated driver" hassle with going out.


Same, there are few places I go that I don't come back from drunk!

This, and freeing up the streets from parked cars. If cars are self-driving, they can go park themselves in some big, far-away car parks instead of clogging the streets. In Europe it could easily double or triple the throughput of most big cities.


"Going out" is synonymous with drinking? You could just... not drink.


There's a cultural dimension.

In southern European culture it is generally normal to drink wine with food, and to enjoy it, but embarrassing to get publicly drunk.

In northern Europe, though, it's all beer or spirits, and the aim is to get as drunk as possible as quickly as possible.

These generalisations are based on my own observations. There's doubtless a lot of variation.


How? There will be multiple decades of co-existence between self-driving cars and human drivers. I'm not convinced that this phase would be safer than the human-only one, as AI might very well handle bad human drivers much worse (it's an edge case).


Unfortunately, we can't replicate your mother and put her behind the wheel of every vehicle in America. But we can crank out thousands of autonomous vehicles with thousands of identical copies of highly tuned safety algorithms.


If you had a newborn to take home from the hospital and were 20 miles from home, would you take an autonomous car, or would you or a family member drive you home?


The closest I've come to killing another human was when I was driving to the hospital at night to pick up my wife & newborn, sleep-deprived and high on adrenaline, and came this >< close to taking out a cyclist I completely failed to see.


With emotions running high around this newborn, I or a family member is likely to be stressed, under-slept, possibly drinking. Why wouldn't I want this hypothetical autonomous car to take us all home safely?


Because the math shows that currently it drives about as badly as a sometimes high, drunk, sleepy, angry person. If you had a family member who cared about you available to drive, they would be much safer than the typical AI car right now.


> Because the math shows that currently it drives about as badly as a sometimes high, drunk, sleepy, angry person.

This is surprising. Got a cite for that?


The problem with all of these stats (although some, like Waymo's, are better than others, like Tesla's) is that the bare number is quite misleading because it compares apples to oranges. Firstly, Waymo (and others) don't operate in places they have excluded or don't know, while humans drive in unfamiliar places (which I'd bet changes the chances of accidents). Moreover, Waymo might decide not to operate in some places at all because it deems the traffic/road conditions too dangerous (and I think that's a good thing); however, human accidents in those areas (which are by that very condition more accident-prone) still go into those statistics.

It's similar for weather: self-driving cars might refuse to operate in some weather conditions (I don't think it's an accident that most companies mainly operate in the relatively warm and sunny parts of the US), while human accidents under bad conditions are still part of the statistics.

And again, drunk/impaired drivers also go into the statistics. If we disregard them and humans come out safer, then this is not an argument that self-driving is safer than humans, but an argument that there isn't enough enforcement around driving impaired.
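A toy sketch (all numbers invented) of how that selection effect skews a pooled comparison:

    # Invented (crashes, miles) per driving condition.
    human = {"fair": (100, 50_000_000), "foul": (200, 10_000_000)}
    fleet = {"fair": (30, 10_000_000)}  # the fleet simply refuses foul weather

    def rate(crashes, miles):
        return crashes / miles * 1_000_000  # crashes per 1M miles

    def pooled(d):
        return rate(sum(c for c, _ in d.values()), sum(m for _, m in d.values()))

    print(pooled(human), pooled(fleet))                # 5.0 3.0 -- fleet "wins" overall
    print(rate(*human["fair"]), rate(*fleet["fair"]))  # 2.0 3.0 -- but loses in fair weather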


That's terrible statistics. Let children drive and the self-driving cars would be, on average, even safer!

Comparisons should be made 1) with the median, not the average, and 2) under the same conditions.


> We can compare this statistically by looking at accidents per 1,000,000 miles driven of autonomous cars vs humans.

This is extremely limited and really not relevant to the topic at hand. You've marginalized out the type of accident. Most miles driven are on highways, and that is a different environment from urban driving. The information you've marginalized out is essential for making reasonable conclusions about safety. It isn't enough to just know the ratio of TP/TN/FP/FN; what matters is specifically where and when these errors happen. The nuance is critical to this type of discussion, and a simplification can actually cause you to make poor decisions in the wrong direction rather than naive decisions in the correct direction.


I highly recommend reading through Waymo's own publications that address these exact concerns: https://waymo.com/safety/

Specifically "Framework for a conflict typology including causal factors for use in ADS safety evaluation" and "Comparison of Waymo Rider-Only Crash Data to Human Benchmarks at 7.1 Million Miles"

It may not surprise you to know that they have given a LOT of consideration to these factors and have built a complex model that addresses these to demonstrate their claims.


I can guarantee you there are more accidents per 1,000,000 miles driven in dense urban areas, and that's where Waymo has been driving as well. Last I checked they're not even operating on freeways.


If we are going to use flawed statistics of autonomous cars vs humans, we should first look at even better examples than Waymo. I'm pretty sure Mercedes' Level 3 driving automation for the Autobahn is safer, as are the autonomous cars that park cars at airports. Their accidents per 1,000,000 miles should be 0, a statistic which is hard for humans to beat.

The more restricted and environment-controlled we can make it, the fewer accidents we see machine-controlled cars cause, and the worse an adaptive human driver does in comparison (barring extreme situations).



