
One thing I hate about Musk saying things like "They have fewer accidents than people" is that Autopilot can only be engaged in the easiest of scenarios. So per mile, self-driving on the easiest roads barely has fewer accidents than humans driving in adverse conditions?

That doesn't seem to actually be better than people yet.



https://digitalassets.tesla.com/tesla-contents/image/upload/... (from [0]).

This is all miles traveled, and the "clever" data analysis is the use of aggregate data, as they've done here. It's not "lying", in the sense that the vast majority of Autopilot miles genuinely are highway miles rather than city streets or even suburban access roads; but that skew is exactly what makes the aggregate comparison flattering.

Also note:

> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed. (Our crash statistics are not based on sample data sets or estimates.)
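
That counting rule is mechanical enough to sketch in code. Here's a minimal paraphrase of the quoted methodology; the function and parameter names are hypothetical, since Tesla publishes no schema or raw data:

    AP_WINDOW_S = 5.0  # Tesla's stated pre-impact attribution window

    def attributed_to_autopilot(ap_active_at_impact: bool,
                                seconds_since_ap_deactivated: float | None) -> bool:
        """Per the quote: a crash is charged to Autopilot if it was engaged
        at impact, or was deactivated within 5 seconds before impact."""
        if ap_active_at_impact:
            return True
        return (seconds_since_ap_deactivated is not None
                and seconds_since_ap_deactivated <= AP_WINDOW_S)

    def counted_as_crash(airbag_or_restraint_deployed: bool) -> bool:
        """Per the quote: any incident whose alert indicated an airbag or
        other active restraint deployed is counted, with no sampling."""
        return airbag_or_restraint_deployed

The 5-second window is what keeps a last-moment disengagement (driver grabs the wheel a second before impact) from moving a crash out of the Autopilot column.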

Of course, Tesla's official in-car guidance, via warning messages, is that Autopilot is not safe for city streets, only highways, so technically it shouldn't be used on streets full of pedestrians anyway.

This also doesn't take into account FSD Beta.

0: https://www.tesla.com/VehicleSafetyReport


It's important for metrics to be measurable. Throwing in a bunch of subjective criteria like how tricky a road is will only make the analysis less meaningful.

Yes, it's important to understand the limitations of the metrics, but the existence of limitations doesn't mean that the metrics should be thrown out entirely.


Measuring metrics that are easy to measure doesn't help if the metric doesn't mean anything.

Comparing collision rates between human drivers in all conditions and Tesla automation only in the conditions where it allows activation and is actually activated is simply not a usable comparison.

Certainly, there's no easy way to measure the collision rate for all humans and all cars in only the conditions where Tesla automation can be activated, and that makes it hard to find a metric we can use. But that doesn't mean we have to accept an unusable metric just because it's easy to measure.
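
To make that concrete, here's a toy illustration (all numbers invented, not real Tesla or NHTSA data) of how an aggregate crashes-per-mile comparison can favor the automation even when it's worse than humans in every matched condition, simply because its miles skew toward the easiest roads:

    # Toy numbers, purely illustrative.
    # Each entry: (crashes per million miles, millions of miles driven).
    human = {"highway": (2.0, 100), "city": (8.0, 100)}
    auto  = {"highway": (2.5, 190), "city": (9.0, 10)}  # worse in BOTH strata

    def aggregate_rate(strata):
        crashes = sum(rate * miles for rate, miles in strata.values())
        miles = sum(miles for _, miles in strata.values())
        return crashes / miles

    print(f"human: {aggregate_rate(human):.2f} crashes/M mi")  # 5.00
    print(f"auto:  {aggregate_rate(auto):.2f} crashes/M mi")   # 2.83

The automation comes out looking almost twice as safe in aggregate despite being less safe on both road types, because 95% of its miles are highway miles. A usable metric has to stratify both sides by road type, weather, time of day, and so on.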


No, that is what any competent researcher would do. Here is an example from Waymo:

https://storage.googleapis.com/waymo-uploads/files/documents...

But if you are like Tesla and cannot even be bothered to put in a methodology section... or any analysis... or any data... really anything other than a soundbite conclusion, then maybe you should not get a free pass on driving vehicles when you only demonstrate a level of safety analysis unfit for even a grade-school science fair.


Not defending Tesla or anything, but don’t we want to take the W where we can?

If a self-driving feature is safer than humans in nice conditions, don’t we want it enabled under those conditions? What’s the issue?

Of course, the risk is that it’s misapplied in worse conditions where it fares worse.


The issue is that it isn't safer, seemingly under any conditions. Even if it had a lower accident rate per mile, the stat for humans includes adverse conditions while the stat for Tesla only includes ideal conditions. Presumably humans are safer than their average on highways...

Also, the original quote is flat-out wrong: per this, they are deemed 10x more likely to crash: https://prospect.org/infrastructure/transportation/2023-06-1...


It might be, or not, but no one knows, because the data is secret.


Only engaged on the easiest of roads? I haven't been on a road my Tesla wouldn't enable on, lol. WTF are you talking about? Shit, mine works in pouring rain; it complains but drives fine.


> is that it is only able to be engaged in the easiest of scenarios.

What? It works in basically all conditions and situations, though not flawlessly, of course.



