
> Von Ohain and Rossiter had been drinking, and an autopsy found that von Ohain died with a blood alcohol level of 0.26 — more than three times the legal limit — a level of intoxication that would have hampered his ability to maintain control of the car, experts said.

The details make this seem less of an autonomous driving issue and more of an incredibly irresponsible operation issue.



These two were idiots, but this highlights the dangers of L2 driver assistance systems (misleadingly marketed as "Full Self Driving") that require driver attention at all times to prevent accidents. They give you a false sense of security, and there's no guarantee you'll take over in time to prevent an accident.

If you give dumb toys to people, they will use them in dumb ways. This is why Waymo/Google abandoned their driver assistance efforts a decade ago and jumped straight to driverless. That turned out to be a masterstroke in terms of safety.


I totally agree with what you say. Certainly “Full Self Driving” should be illegal marketing for level 2 autonomy.

That said, cars are dumb toys to give people. Certainly, access to cars is dangerous for people who are going to use them while drunk. "Sense of security" or not, why were these guys in control of a vehicle?

While autonomous vehicles are not ready, this event just cements the need for them in my mind. I expect that they have already saved more lives than they have cost. Some continue to insist that replacing human drivers is a high bar. In edge cases and for the best drivers, it is. Collectively though, evidence suggests that the bar is very low and that we have already surpassed it.

At this point, it is more of a “fairness” or “control” issue who gets hurt by autonomy than it is aggregate safety by the numbers. In this case, thankfully, it sounds like it was fair.


>evidence suggests that the bar is very low

There is no real evidence (not biased or cherry-picked) that replacing all drivers with Teslas would actually be better. I am aware of Tesla's numbers, but those are invalid statistics, not comparing apples to apples. We would need something like how many miles a car drives before the driver has to take over; comparing the number of accidents doesn't work, since that pits human against computer+human.


Why is it invalid to compare which of human vs. computer+human will save more lives?


It is inherently cherry picked data, because the computer can disengage at any moment, but the human cannot.

For example: all of the worst driving conditions are inherently in the 'human drivers' bucket because the computer won't engage in those situations.
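
A toy illustration of that selection effect, with completely invented numbers (a sketch of the statistical point, not real crash data):

  # All rates below are made up, purely to show how condition selection
  # skews a raw crashes-per-mile comparison.
  rates = {  # hypothetical crashes per million miles, by condition
      "easy": {"human": 2.0, "assisted": 1.8},
      "hard": {"human": 8.0, "assisted": None},  # system refuses to engage
  }
  human_miles    = {"easy": 70, "hard": 30}   # humans drive everywhere
  assisted_miles = {"easy": 100, "hard": 0}   # the system only logs easy miles

  def blended_rate(miles, who):
      crashes = sum(m * rates[c][who] for c, m in miles.items() if rates[c][who] is not None)
      total   = sum(m for c, m in miles.items() if rates[c][who] is not None)
      return crashes / total

  print(blended_rate(human_miles, "human"))       # 3.8 per million miles
  print(blended_rate(assisted_miles, "assisted")) # 1.8 per million miles
  # The assisted fleet looks ~2x safer overall even though it is only ~10%
  # better on the easy miles it actually drives.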


I don't drink alcohol at all. It is possible that, because of that, I'm better than computer+human even while humans overall are worse.

I sometimes must drive on icy roads where the computer won't engage at all - these are more dangerous conditions, and by refusing to operate in them the computer keeps those risky miles off its record. It is possible for humans to be better than computer+human in every situation where the computer works, yet still come out worse overall because of the situations where the computer won't work.

The above are just two issues that I can think of, and I'm not even a researcher who would know all of those special cases. Which is why everyone wants transparent data: someone independent needs enough information to account for factors like the above and allow us to have a real debate.


>Why is it invalid to compare which of human vs. computer+human will save more lives?

I think it is not relevant for real self driving. I could also "invent" a computer that refuses to let drunk people start the car; it would have good statistics too. The issue is that I'm not a billionaire who can get people to install my invention in their cars.

If we as a society wanted to reduce deaths, we could already have done a lot more to prevent drunk driving, bad driving, speeding, etc.


> I totally agree with what you say. Certainly “Full Self Driving” should be illegal marketing for level 2 autonomy.

The driver was literally a Tesla Inc. employee. Do you really think that they were fooled by marketing into believing the system was more capable than it was? No, they were just drunk and made a terrible decision.

I mean, I'm tempted to just agree. Let's call it something else. Will that stop these ridiculous arguments? I really doubt it.


I'm confused by the question. If anything I think it's more likely that a Tesla employee could be tricked. Other companies like Waymo have cars that literally drive themselves, and Tesla routinely has demos which purport to show the same functionality. It doesn't sound ridiculous at all that an employee might see lots of videos like [https://www.youtube.com/watch?v=Ez0A9t9BSVg], where a Tesla car drives itself, and conclude that their Tesla car with "Self Driving" can drive itself.


Did you read the article?

> Von Ohain used Full Self-Driving nearly every time he got behind the wheel, Bass said, placing him among legions of Tesla boosters heeding Musk’s call to generate data and build the technology’s mastery. While Bass refused to use the feature herself — she said its unpredictability stressed her out — her husband was so confident in all it promised that he even used it with their baby in the car.

> “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”

Seems like he was indeed fooled by marketing.


They were idiots.

> Before enabling Autopilot, the driver first needs to agree to “keep your hands on the steering wheel at all times” and to always “maintain control and responsibility for your vehicle.” Subsequently, every time the driver engages Autopilot, they are shown a visual reminder to “keep your hands on the wheel."

They read and accepted the warnings informing them how to use the system, and then did the opposite. They willfully and knowingly used a feature in an unsupported manner.

Sorry, this is not being misled by marketing.


Ugh, the point is that this "feature" shouldn't be usable in an unsupported manner in the first place. It's not a button on a website, it's a safety-critical system. There's a reason why real self driving companies are super conservative and use a geofence.


> There's a reason why real self driving companies are super conservative and use a geofence.

What reason is that? If they're safety critical systems, why are they putting them on the road at all?


Because they are validated extensively inside that geofence and any operations inside that area are "supported". They don't let the vehicles go anywhere they want and they don't ever allow you to use it in an unsupported manner. You want a Waymo stop right in the middle of a busy intersection? The vehicle will refuse and keep going until it finds a safe place to stop.


I thought we were talking about FSD, not Autopilot. How is Autopilot relevant in this conversation?


> > “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”

That's a frightening attitude. Something I keep in mind every time I see another Tesla on the road. And I'm driving one myself.


Wouldn't "bought into the marketing" only apply to something like "I got in this Turo'd Tesla, turned on Autopilot, then 10 seconds later it crashed into another car on an unprotected left-hand turn"? You literally can't believe in the marketing after having used it for as long as it sounds like he has; the car does too much driver monitoring to leave you with the impression it works flawlessly and without human intervention. The entire Twitter/X community that talks about FSD reiterates that the current iterations feel like a 15-year-old driving.


Elon Musk: "FSD is already safer than humans, see our (misleading) safety report. If you use it and give us data, it will be a robotaxi by end of the year."

This dude: "Sounds good, let me put my baby in it. It's jerky, but I'm contributing for a bigger cause."

This counts as being misled by marketing to me.


Elon: "FSD is safer than humans."

This dude: "Neat, I'm going to use it!"

FSD, upon activation: "Full Self-Driving (Beta). Full Self-Driving is in early limited access Beta and must be used with additional caution. It may do the wrong thing at the worst time. Do not become complacent. ... Use FSD in limited Beta only if you will pay constant attention to the road, and be prepared to act immediately, specially around blind corners, crossing intersection, and in narrow driving situations. Do you want to enable FSD while it is in limited Beta?"

This dude: "Elon said it's safe, I'm going to drink and drive!"


> Elon: "FSD is safer than humans."

Well, this is false. So it's misleading right off the bat.


Is it? I'd heard Tesla was being too tight lipped for any conclusions to be made at this point, but if you've got a study or source I'd love to see it!


Their safety report is a joke and their methodology is full of holes. There are a bunch of replies to this comment that explain why: https://news.ycombinator.com/item?id=39359437

As far as proving how unsafe it is, no one can do it unless Tesla is transparent with their data. Deliberately hiding that data doesn't automatically mean it's safer than humans, so yes, their claims are indeed highly misleading.


I see a lot of folks casting doubts and asking questions like you do in the linked thread, but nobody seems willing to dispute the signal itself, just its relevance. From the top comment:

> You're essentially telling us that drivers driving Teslas with active autopilot (i.e. limited to great weather conditions on high quality roads) have fewer accidents than those without active autopilot (e.g. driving in poor weather conditions, or on bad roads). That's not much of an insight.

FSD refusing to engage in unsafe driving conditions does not compromise the system's safety. In fact, it's the safest possible option, and something I wish more humans (myself included) would do.

> As far as proving how unsafe it is, no one can do it unless Tesla is transparent with their data.

Right, so how are you confidently discounting Elon's claim without data?


> FSD refusing to engage in unsafe driving conditions does not compromise the system's safety.

That’s not the relevant part here. It’s that other humans do drive in unsafe conditions, which contributes to overall human crash rates, and those variables are not controlled for in the comparison.

You can’t just drive in areas your system deems safe, but then compare to humans who drive everywhere. You understand how that skews the comparison, right? It’s pretty basic statistics.

> Right, so how are you confidently discounting Elon's claim without data?

Elon’s claim isn’t supported by data either. That’s how I can discount it.

When you say “safer than human”, you better support it with data that holds up to scrutiny. The burden of proof is on you. Otherwise, you’re asking us to prove a negative.


You were asked for any evidence that it's unsafe, and you don't have it. The linked article is the only fatal accident you can even point to, and even that one is (1) only a suspected FSD accident based on previous driver statements and (2) really obviously a DUI anyway.

Look, these are the most popular vehicles on the road. Where are the wrecks? Where are the bodies? Surely if these were significant there would at least be a bunch of anecdata showing suspicious behavior. And there isn't. And you know that, which is why you're arguing so strongly here.

Just chill out. The system is safe. It is not perfect, but it is safe. And no amount of whatifery will make it otherwise.


No one can tell with a wreck if FSD was active or not. No shit, no one can point you to actual evidence because Tesla doesn’t release any data. You realize what a ridiculous, circular argument you’re making, right?

It’s telling that you want to rely on anecdata and news reports as proof rather than asking Tesla to be forthcoming.

To call a system safe and more specifically safer than human, you need data and you don’t have it. Plain and simple. The burden of proof is on you.


Is the burden of proof really on me? I mean, I look around and don't see any accidents. Seems safe. That's good enough. And it's good enough for most people. You're the one shouting like crazy on the internet about how everyone is wrong and making explicit general statements that the system is clearly unsafe. And that doesn't match my experience. And when asked for evidence of your own you admit it doesn't exist.

So... it's like that Money Printer Go Brrr... meme. Shouting loudly about something you believe deeply doesn't make it true. Cars aren't crashing. QED.


Precisely what I expected from you. You’re exactly Tesla’s target audience. The irony and projection in your comment are comical.

“It’s safer than humans.”

“Proof?”

“Just look around, bro. Trust me, it’s good enough. No data needed.”

Thankfully, some of us don’t lack critical thinking.


This seems out of hand. My perception is exactly the opposite: you're all over these threads claiming in decidedly certain terms that this system is unsafe. And all I'm saying is that it's clearly not, since at this scale we'd have if nothing else extensive anecdata showing AP/FSD accidents that simply don't seem to exist in the quantity needed to explain your certainty.

So, yeah. Occam tells me that these cars are safe. Which matches my 2.7 years of experience watching one of these things drive me around. So I'm comfortable with that, no matter how angry that makes you. I'm absolutely willing to change my mind with evidence, but not because you yelled at me on the internet.


I said they are not safer than humans like Tesla likes to claim. The methodology is extremely dubious and it’s very easy to debunk it for anyone with a basic understanding of statistics.

You claim it’s true because you have anecdotes and personal experiences. You also say there are no crashes. But there are crashes; we just don’t know if FSD was engaged at the time because, again, Tesla doesn’t reveal it. Go through the NHTSA public database: there are dozens and dozens of Tesla ADAS crashes, and those are just the reported ones.

You are also conspicuously silent in this whole thread when data transparency comes up. Not once have you admitted Tesla should be more forthcoming about their crashes, and that in itself is very revealing.

You want to argue data or methodology? I’m here. But you’re being intellectually dishonest and resorting to repeating the same things over and over again. That ain’t gonna convince me.

And cut the shit about being angry or yelling at you. No one’s doing that. You’re clearly projecting.


If you don't like Tesla's methodology, what data do you need to determine how much more unsafe Teslas are compared to humans? You sound like you have the data:

> Go through the NHTSA public database, there are dozens and dozens of Tesla ADAS crashes. Those are just the reported ones.

Why hasn't anyone else, or you, directly published conflicting data to show that it's less safe? "debunking" some statistics might be correct but it's not convincing, and definitely won't result in any action you might want to see from regulatory bodies.


> what data do you need to determine how much more unsafe Teslas are compared to humans?

Tesla should start by first reporting disengagement data to CA DMV like every other self driving company. It shows FSD's rate of progress.

Then they should take all their crashes, normalize for different factors I mentioned earlier and then make a comparison. See how Waymo does it: https://waymo.com/blog/2023/12/waymo-significantly-outperfor.... They have multiple white papers on their methodology. Go through it, if you're interested to see what apples-to-apples comparison looks like.

> Why hasn't anyone else, or you, directly published conflicting data to show that it's less safe?

Because Tesla redacts every single reported crash to the point that it's useless. Is FSD enabled or Autopilot? Not reported. FSD version? Redacted. Injury? Unknown. Description? Redacted. Good luck trying to glean any information from it. This is by design, to prevent independent analysis of their performance.

Be transparent like everyone else. You know it's fishy when they're actively trying to hide things.


By the way, if you want to see missing data in action, I did some legwork: https://news.ycombinator.com/item?id=39375581


Totally agree. As a driver with a car with advanced "driver assistance features", I honestly don't understand how people get value out of most of these. That is, I'd rather just use a system where I know I have to be in control at all times, vs one where "You only have to be in control 1% of the time, oh and if you miss that 1%, you're dead."

For example, I recently tried using the adaptive cruise control/lane centering features on a long road trip, and it was maddening, not to mention pretty terrifying. The exits from this highway weren't well marked, so at most of them the car tried to stay in the middle of the rightmost driving lane and the exit lane (i.e. aimed directly at the approaching divider). I get that other systems may be more advanced, but I don't see the benefit of an automation level that every now and then takes on the driving characteristics of a drunk toddler.


> Totally agree. As a driver with a car with advanced "driver assistance features", I honestly don't understand how people get value out of most of these.

I've got radar cruise control, lane keeping, and auto braking on one of my cars.

I just drive it like it's a normal car, and the computer helps me out from time to time. That's where the value is for me. I don't do a lot of driving with cruise control, so the radar-assisted version isn't very helpful, but lane keeping nudges me when I drive on the line, and auto brake helps out in some situations (and keeps me from backing over flowers, among other overreactions).

For my car, lane keeping doesn't nudge too hard, so it's not hard to push through, but it helps a bit if I lose attention.

I'm considering replacing this vehicle, and I'd get these systems again. If I replace my other vehicles, which don't get to go on long drives, it wouldn't be as big of a priority.


I found it invaluable one time driving at night through Northern Ontario, Canada (which is actually southwestern Ontario, but shrug). That is a really bad idea because of moose: if you hit one, they are tall enough that their body tends to go through your windshield and kill you.

Self driving watched the road, I watched the ditches. I didn't need to avoid a moose that time, but it may have saved a coyote's life.


I would settle for being able to see out my back window. It's one of the reasons I still have my crappy little truck from 2006. The windows are all low enough that I don't need a backup camera to back up. I can just turn my head around and see everything through the rear windshield.


>I honestly don't understand how people get value out of most of these. That is, I'd rather just use a system where I know I have to be in control at all times, vs one where "You only have to be in control 1% of the time, oh and if you miss that 1%, you're dead."

A lot of accidents happen from a split second of lack of attention. Lane keep prevents you from inadvertently drifting over the centre line into traffic. Radar cruise control prevents you from rear ending the vehicle in front of you. Both of these simple features are awesome.


I prefer my Tacoma's approach, which is just audible and visual alerts if these start to happen.


This is more than Tesla and it's claims of "full self driving" though. People publish videos about doing reckless stuff with lane centering and adaptive cruise control tech in other vehicles as well.

This is a large issue that will take more than action against any individual manufacturer to solve.


Your contention is that having FSD in the car makes accidents more likely because people will rely on it when they shouldn't be driving at all. So... does it? The statistics don't seem to bear that out. This is the first significant accident of that type we've seen, whereas there have been multiple "driver passed out and the car stopped on the road" incidents. Seems like the truth is the opposite, no? Tesla didn't invent DUIs, but it seems like Autopilot is a net win anyway.


There are no third party statistics since Tesla lawyers actively force NHTSA to redact information from any reports they do make.

Even ignoring the fact that Tesla habitually lies and acts in bad faith to consumers and investors alike, the “statistics” their marketing department publishes are worthless. They present no analysis, methodology, or even data to support it. It is literally just a conclusion. The level of scientific rigor they demonstrate is unfit to even grace the hallowed halls of a grade school science fair.

Even ignoring that, this is not the first significant incident. By Tesla’s own admission there were already ~30 known incidents a year ago at the beginning of 2023. Unfortunately, I can not tell you which ones specifically because they, you guessed it, demanded NHTSA redact which system was active from their public disclosures.

Even ignoring that, their reports are suspect in the first place. Of the over 1,000 crashes they admit to being involved in they did not report the injury severity in ~95% of cases. Of the ~5% of cases where they do report the injury severity a third party (news, official complaint, etc.) published that information which compels Tesla to report the same information under penalty of law.

Of the tens of confirmed fatalities, Tesla only discovered a single one on their own (as of 2023-09). Their telemetry systems, which are responsible for detecting 90+% of their reported incidents detected fewer than 40% of their reported fatal crashes. A full 30% of known fatalities were undetected by both telemetry and media and are only recorded due to customer complaints by surviving parties who knew the system was engaged. The amount of missing data is almost certainly staggering.

So no, the data does not bear it out since there is no credible positive safety data. And, as we all know, in a safety critical system we must assume the worst or people die. No data, no go.


There's more statistics in your comment than in the Tesla safety report :)

I'm curious where the data is about their telemetry systems' failure to detect incidents. It seems very fishy.


I'm genuinely curious what you're citing here? You're being really specific about this stuff but not actually linking to (or even mentioning) any sources. And as a Tesla owner who's been steeped in this complete shitfest of a debate here on HN for three years, I'd expect to have been exposed to some of that. Yet this is all new.

Come on, link your stuff. I mean, what's the source for telemetry detecting different fractions of fatal vs. non-fatal crashes? Are you sure that's not just confounded data (fatal crashes are more likely to involve damage to the computers and radio!) or outliers (there are very few fatal crashes known!)?

Basically, the red yarn on your bulletin board looks crazy thick to me. But I'd genuinely love a shot at debunking it.


Given that I said reports to NHTSA it should be obvious that I am talking about the NHTSA SGO database [1].

But thanks for boldly calling me a conspiracy theorist when I quote data from official Tesla reports to the government.

As to your other questions, go ask Tesla. It is not my job to speculate for Tesla’s benefit when Tesla has all the data and chooses to act in bad faith by not only refusing to disclose it, but even forcing NHTSA to suppress unfavorable information.

As to “debunking” anything I am saying, whatever. It is not my job to thoroughly analyze Tesla systems from the outside to prove they are unnecessarily dangerous. It is Tesla’s burden to present robust data and analysis to third party auditors to demonstrate they are safe.

Debunking what I am saying does not somehow magically prove Tesla’s systems safe. It just means I, a random person on the internet, could not prove they were definitely unsafe. I probably also can not find the flaw in a random perpetual motion machine someone presents on the internet, but that does not make it work. Though if you are convinced by Tesla’s brand of bad faith safety analysis, then I have a perpetual motion machine powered on dreams to sell you since you can not prove it does not work.

[1] https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...


That's just a spreadsheet. Can you link me to the analysis that pulls out the specifics you're claiming? I mean, yes, I can do it myself. But my experience is that when people point to raw data and not analysis when challenged, it's because they're simply wrong and trying to hide confusion and obfuscation.

> Debunking what I am saying does not somehow magically prove Tesla’s systems safe.

No, but it's still a blow for good faith argument and worth pursuing.


You claimed Tesla systems are safe stating: “Your contention is that having FSD the car makes accidents more likely because people will rely on it when they shouldn't be driving at all. The statistics don't seem to bear that out. This is the first significant accident of that type we've seen…”

You have presented exactly zero analysis or data supporting your claim that machines that have demonstrably killed people are in actuality safe. The burden of proof is on you to present evidence, not me.

In fact, I have even presented you a new data source filled with official data that you apparently have never seen before that can bolster your point. So how about you engage in good faith argument and support your positive claims of Tesla safety instead of demanding I prove the negative?

Note that quoting unaudited statements by the Tesla marketing department is not support, by the same token that official statements by VW about their emissions or by Philip Morris about the safety of cigarettes are invalid. You should also not point to haphazard “analysis” derived from those statements.

Also, try not to argue that there is an absence of evidence that the systems are unsafe. That argument is only applicable before somebody dies. A death is sufficient evidence to meet the burden of proof that a system is unsafe. The burden of proof then shifts to demonstrating that the rate of death is acceptable. If there is an absence of evidence demonstrating that the rate of death is acceptable, then we must conclude, based on the burden of proof already established, that the system is unsafe.

That is your real burden here. Demonstrating the available data is sufficiently unbiased, robust, and comprehensive to support your claim. Good luck, you’ll need it.


It's raw data, just put the spreadsheet into Google Sheets or Excel and have at it.

Here's what I found in my quick analysis after filtering all Tesla crashes:

  Total Tesla crashes: 1048

  Crashes with 'unknown' injury severity: 997
  Percentage of crashes with 'unknown' injury severity: 95.13%

  Total number of reported 'fatal' crashes: 27

  Number of fatal crashes detected by telemetry: 11
  Percentage of fatal crashes detected by telemetry: 40.74%

  Number of fatal crashes reported only by 'complaint/claim' source: 7
  Percentage of fatal crashes with only 'complaint/claim' as source: 25.92%
Matches up to parent comment's numbers. Incredible amount of missing data!
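
For anyone who wants to reproduce these counts, here's a minimal pandas sketch. The filename, column names, and category labels ("Make", "Highest Injury Severity Alleged", "Source - Telematics", "Source - Complaint/Claim") are assumptions about the SGO export layout; check them against the actual CSV headers before trusting the output:

  import pandas as pd

  # Assumed filename and column names; adjust to the real SGO export.
  df = pd.read_csv("SGO-2021-01_Incident_Reports_ADAS.csv")
  tesla = df[df["Make"].str.upper() == "TESLA"]

  total = len(tesla)
  unknown = (tesla["Highest Injury Severity Alleged"] == "Unknown").sum()
  fatal = tesla[tesla["Highest Injury Severity Alleged"] == "Fatality"]
  by_telemetry = (fatal["Source - Telematics"] == "Y").sum()
  complaint_only = ((fatal["Source - Complaint/Claim"] == "Y")
                    & (fatal["Source - Telematics"] != "Y")).sum()

  print(f"Total Tesla crashes: {total}")
  print(f"Unknown injury severity: {unknown} ({unknown / total:.2%})")
  print(f"Fatal crashes: {len(fatal)}")
  print(f"  detected by telemetry: {by_telemetry}")
  print(f"  reported only via complaint/claim: {complaint_only}")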

My experience is that when people repeatedly ask for analysis in the face of raw data being presented, it's because they're afraid to find out what's in it and hope to sweep it under the rug.


Good grief:

   Number of fatal crashes detected by telemetry: 11
   Number of fatal crashes reported only by 'complaint/claim' source: 7
Yeah, that's what I thought. Reasoning from outliers. Now for extra credit, compute a confidence interval from these 18 lines you cherry picked from a 1000-entry data set. I mean, really?

Yeah, I declare this debunked. This nonsense is only a tiny bit better than trying to declare a product dangerous based on one DUI accident.

(Also, I'm pretty sure you're making the argument in the wrong direction. Wasn't the contention upthread that there were too *FEW* telemetry-reported accidents, as if to claim that Tesla was suppressing them? This seems to say that Telemetry is a more reliable reporter, no? Meh. Not even interested in the specifics anymore, there's literally nothing a data set this small is going to tell us.)


Your argument is:

“Ha, Tesla actively suppressed and concealed 95% of the evidence so you do not have enough evidence to prove them wrong. Checkmate.

Tesla just hides it because it is too vindicating. So you have no choice but to believe their unsupported and unaudited claims.”

Again, you have not presented a single claim supported by any auditable data or analysis. You demand others present conclusions with a confidence interval when even the Tesla safety team is unable to do so even though Tesla is the one pushing a system conclusively known to kill people. It is their duty to collect sufficient, comprehensive, and incontrovertible evidence that their systems do not incur excess risk and subject it to unbiased audits.

So, present your comprehensive, incontrovertible claim with a confidence interval based on audited data. That is the burden of proof to support killing more people.


Cherry picking from raw data of 1000+ crashes? Yeah, you're not here for a good faith discussion after vehemently asking for sources and assuming "confusion and obfuscation". You just want to shout down "I declare this debunked" with absolutely nothing to support it. This is gaslighting at its finest.

The original claim was this:

> Their telemetry systems, which are responsible for detecting 90+% of their reported incidents detected fewer than 40% of their reported fatal crashes. A full 30% of known fatalities were undetected by both telemetry and media and are only recorded due to customer complaints by surviving parties who knew the system was engaged.

So no, I'm not making the argument in the wrong direction. Perhaps try re-reading it? The numbers match it almost exactly.

I'm done with your nonsense.


We don't have apples-to-apples data in the public.


That's not a license to believe whatever anecdotes you want, though. I'm just saying that if anecdotes are sufficient, autopilot wins the argument. If they're not, we need data and shouldn't be freaking out over the one drunk driving incident in the linked article.


We're not responsible for that data, Tesla is. You should be calling for them to be transparent and not just settle for misleading "safety reports" because it has a narrative you want to believe.


Well, there's many, many more cars without "FSD" than with it (especially if you include historical data), so the rate fold change would have to be astronomical for FSD cases to outnumber old-fashioned cases.


This is a question with intersecting margins, and those are fiendishly hard to answer.

Here's what I mean. There are people who will drive drunk. At the margin, there are people who will not drive drunk, but who will drive drunk using FSD. But how many? How much more drunk are they willing to be?

On the other side, driving drunk with FSD is safer than doing it without. Criminally irresponsible, yes, but I think reasonable people will agree that it's somewhat safer; FSD has a better track record than drunk drivers do. But how much safer?

Depending on how the margins intersect, FSD is either more dangerous or less so. I suspect the answer is less so, that is, there aren't that many people who would only drive drunk with FSD, and FSD is good enough at what it does to lead to fewer accidents. But I can't prove it, and reasonable people might come to a different conclusion. Practically speaking, it's impossible to get data which would settle this question.


> FSD has a better track record than drunk drivers.

We don't know that.


We do, in fact, know that.


You only get "recorded" as a drunk driver if you crash or cause a situation, drive erratically, cops are called, breathalyzers etc. Obviously I'm not condoning drunk driving, but you are looking at an extremely skewed set of circumstances, much like driving during daylight on California streets with no weather for FSD.


How do we know that?


It's not just Self Driving but Full Self Driving! Because today, self driving means not actually self driving at all.


I self drive every time I get behind the wheel. Who else is going to do it? Maybe they should call it "Someone else Driving" but that would just be Uber/Lyft.


Sorry but everything after "the driver who's responsible for their decisions was drunk" is moot.

But otherwise, yes, your statement about giving "toys" to "dumb" people stands no matter the technology.

And knives can be used to cut steak or stab people.


You could be perfectly sober and still not be able to intervene in time to prevent a crash. Systems like this encourage inattention, yet still expect drivers to snap back to full attention in a fraction of a second. That's the entire point. So no, it's not moot.


How about alcohol odour detection required in all vehicles?

How about attention/distraction monitoring, where perhaps the first penalty or safeguard is forced reduced speed - the measured level of distraction determines the maximum speed the vehicle can go - thereby reducing the need for as fast a reaction time?

Any other possible solutions?

I think in general coddling people does more harm than good, and it is lazy not to look for nuanced solutions just because blanket rules are "easier."


It depends on the discussion you are having. I agree that it is not “moot” in either case.

Your point is that these systems need to be safe even in the face of incapable drivers and that, despite the misleading marketing, they are not that (yet).

The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

Encouraging over-reliance is a problem but not as big a problem as having to rely on the human 100% of the time. This statement is just a fact given the statistics.

Given the above, while it is too extreme to say that operation by an impaired driver is “moot”, it is fair to suggest that the most significant source of risk is the impaired driver themselves and their decision to operate a vehicle of any kind. The biggest offset to this risk is the additional safety features of the vehicle. The degree of over-confidence caused by the vehicle type is a distraction.


There are no statistics that categorically prove cars with these features are less dangerous. Tesla's own "safety report" is extremely misleading and has no controls for geography, weather, time of day, average age of cars, demographics, etc.

If you're developing an autonomous system, you have a moral and ethical obligation not to roll it out while it's half-baked and unsafe. You can't give an unfinished safety-critical system to the general public and then blame them for misusing it.


Don't all Tesla vehicles have the highest safety ratings ever?

I guess maybe though something being less dangerous isn't the same as something being relatively more safe?


Those safety ratings don't assess FSD performance.


You're missing or avoiding my point?

Comparing similar accidents, regardless of FSD or not: a Tesla's occupants are arguably kept safer and would fare better than they would in any other vehicle, right?


You’re making an entirely irrelevant point. They’d fare better if they were in a bus too.

We’re talking about cause of the accident here, not what happens after one.


Naw, you're just dismissing my valid point because it has weight to it - you want it to be irrelevant, it's not.


Then please tell us how the crash ratings of a vehicle are helpful in assessing FSD performance. The entire discussion is about who is responsible for causing the crash. Automated-system collisions are counted regardless of severity.


> The other point though is that people already have access to cars and, according to the evidence, cars with these features are LESS dangerous.

I don’t think we can say anything of the sort for Tesla FSD (beta).


> Sorry but everything after "the driver who's responsible for their decisions was drunk" is moot.

It's not for the larger discussion about safety of these sorts of systems. Any such system _has_ to consider the dumb human that's in the loop, because to not do so is almost inevitable to lead to problems. If your driving assist system makes people more likely, and more able, to drive drunk, then it's not just a problem with those people. Sure, they shouldn't be doing that, but ignoring that some people will is irresponsible.

Any system that involves humans that doesn't consider the human element is doomed to fail eventually. We humans are far too good at making that a certainty.


So the solution is dumbing down the curriculum to appease the dumbest - which has externalized costs, of course.

But this whole thread is great evidence for requiring people of a certain ineptitude to use only self-driving vehicles in the future, once they're proven ~100% safe, right?

But really what you're arguing is that people need to be raised to be responsible; that is the root-cause problem.


Knives don't come with self-guided-propulsion-and-steering while requiring you to be ready to stop them within seconds.


Fair point. So would you argue for attention and reaction-time testing to make sure a person is fast enough? We could certainly gather data on the situations that end badly - with harm or injury - and determine whether there is a threshold everyone should have to meet. Are there currently any such thresholds? I believe drivers with only one hand/arm - who, at first thought (and maybe I'm wrong), won't be able to react as quickly or strongly in every situation - are still allowed to drive, though they may in general be far more cautious because of their condition.


I don't think we specifically need a reaction-speed test for Level 2 driving systems, but maybe we could for driving in general.

The pernicious combination that makes L2 unsafe is that it encourages the driver to let their focus wander, and then unexpectedly demands that you have full attention. When 95% of your highway time requires almost no focus, your brain will naturally wander off and pay much less attention to the situation on the road. So when the 1-in-1000 dangerous situation occurs, you're less prepared to react to it than an unassisted driver would be.

Edit to be more clear: A reaction-speed test measures your best-case reaction time, because you know it's coming. Normal driving situations test your "normal" reaction time. Situations under L2 test your worse-than-normal reaction time.


That's not how proximate cause works in a negligence case.


Can you explain?


Sure. If it's a fact that being sober wouldn't have prevented the accident from happening, then being drunk could not have been a proximate cause.


>seems less of an autonomous driving issue and more of an incredibly irresponsible operation issue.

If the car is truly autonomous, then no, it's a failure of a life-critical system.

If the car is not autonomous, then calling it "Full Self Driving" is a wilful misrepresentation of its capability. To a normal person (ignore the driver's employment history for now), full self driving means exactly that: it drives itself, safely. You and I know that's not the case. However, the vast majority of people out there don't know it.

Everyone is rightly chewing Boeing out for lax safety standards, but that door didn't kill anyone. If you are going to be annoyed at Boeing, then you need to be absolutely fucked off at Tesla.


> Everyone is rightly chewing Boeing out for lax safety standards, but that door didn't kill anyone.

The door was only part of the issue. 346 people died in a pair of crashes:

https://en.wikipedia.org/wiki/Lion_Air_Flight_610

https://en.wikipedia.org/wiki/Ethiopian_Airlines_Flight_302


You are indeed correct, I should have been more specific to exclude those disasters and scoped my comment to be about the recent near miss.


> To a normal person (no ignore the driver's employment history)

But in this case, which is what the previous comment is talking about, this doesn't apply.


No, it really does.

For example, Kinder eggs are illegal in the USA because, even though the small(ish) plastic parts are inside an inedible plastic case, too many kids ate the small plastic parts.

Now, the vast majority of people eating Kinder eggs don't eat the toy. However, because it's reasonable that an unsupervised child would eat the plastic, they were banned.

But.

The point of FSD is that it is safe. If it's safe, a drunk person behind the wheel shouldn't be able to cause it to crash while FSD is engaged. FSD should fail safe.

But it can't fail safe; it is not "full self driving" and should never have been marketed as such.

However, people's safety comes second to large corporations' feelings/profits/liability.


The driver in question worked for Tesla and would be aware to not trust the marketing name of the feature. That's why it doesn't apply.


He should have known better than to trust his life to Tesla? You betcha.

Doesn't make tesla any less culpable.


What? Why? The driver is also a target of marketing, as evidenced by the fact that they owned a Tesla. They aren't personally responsible for, or privy to, the lies of marketing. Employment at a corporation also doesn't grant you intimate knowledge of all parts of the corporate machine, in fact it might indicate that you are more susceptible to the corporate messaging.


This quote definitely indicates to me he was overly susceptible to the corporate messaging: “It was jerky, but we were like, that comes with the territory of” new technology, Bass said. “We knew the technology had to learn, and we were willing to be part of that.”


>The details make this seem less of an autonomous driving issue and more of an incredibly irresponsible operation issue.

In every single discussion with Tesla fans re: FSD, we hear that "FSD is safer NOW than human drivers", but every time there's an accident it's the driver's fault.

At what point are Tesla accountable for the mixed messaging around this product?


That's not really relevant here though as there is no claim of 100% perfection with AP/FSD, nor did I advocate for support of Tesla here.

This feature requires acknowledging multiple documents stating that it may make errors and is not to be classified as an autonomous driving system. This is presented when you purchase the feature, in the tutorial videos about what the car is capable of, in a screen you must read before enabling the feature, and it is repeated in every set of release notes that appears with each update.

I fail to see why all blame for impaired driving evaporates because the manufacturer of the car is Tesla.

Let's put on the Product Manager hat. How would you convey to a user that they should use a feature in a responsible manner?


Those two facts are not necessarily contradictory.


I’m not saying this is where we are now,

but eventually, when self-driving cars are sufficiently advanced, they should be able to drive drunk people home without needing them to provide input (and probably best to forbid them from providing input altogether, as they're impaired)


Drunk people are told not to drive themselves. Letting something labeled "Full Self-Driving" drive them would sound logical to a drunk brain.


I use a self-driving technology every time I drink. It's called Taxi or Uber.


[flagged]


Yeah. But also maybe don't call it "Full Self Driving"


Crazy that FSD is legal.


Why? You're supposed to be ready to take over. Marketing hype and a foolish name does not make the system inherently dangerous.


What does FSD stand for?


Someone should never drink and drive. Never!

Yet, I believe that even at a 0.26 blood alcohol level he would have had a better chance of living had he been driving himself.


I’m not sure what “0.26” exactly means, but if it’s what I interpret to be that’s about one beer for a typical male adult.

It’s not nothing but it’s not a lot either.


I have no idea what some of the other commenters are talking about, but

  - 0.26 is around "blackout drunk" (I am not sure I have ever drank this much)
  - 0.20 is when I know that I have made a terrible mistake, definitely "wasted" as we say (though I haven't drank like this for many *many* years).
  - I begin to feel dizzy and ill at around 0.15ish... 
  - 0.1 is a drunk feeling.
  - 0.8 is the legal driving limit in most US states.
Source: I carry various breathalyzers with me whenever I drink. I probably have a higher tolerance than most though.

And here are some other sources for you:

https://www.utoledo.edu/studentaffairs/counseling/selfhelp/s...

https://en.wikipedia.org/wiki/Blood_alcohol_content

Also, one thing to note about these charts, they are pretty conservative. One drink is rarely 0.02 unless you rarely drink, and had this one drink on an empty stomach, or are a very small person. Or maybe if you take a shot and measure it 10 minutes later.


Correction: - 0.08 is the legal driving limit in most US states.


You may be thinking of 0.026. 0.08 is the legal limit in most states. 0.26 is drunk enough that you need to worry about them choking on their own vomit.


BAC is given in percent (parts per 100, %), whereas a lot of countries use permille (parts per 1000, ‰). This would equate to 2.6 permille, which is quite a lot, especially for non-drinkers.


The US typically uses "percent" when referring to blood alcohol levels while many other countries use "per mille", sometimes leading to confusion around the rules. When somebody mentions Sweden's "zero point two" limit, it's actually incredibly strict, not incredibly lenient!


Of course they do... but this is also the first time I'm hearing that; I didn't know.


Ah yes thanks. That’s it! So in Europe it’s 2.6, which is A LOT!!


0.26, if a BAC, would be more like 5-10 beers; one drink typically is good for 0.02-0.04.


It means blood-alcohol percentage and the limit is usually 0.08%. The article correctly states that it's over three times the limit. I believe your math would mean four beers equals 1% of alcohol per 100ml of blood.


It is Blood Alcohol Content (BAC) and 0.26% is about 8-12 drinks depending on the person's tolerance and weight. Legally drunk. The legal limit to drive in most states is .08% which is typically 2-3 drinks.


0.26 is pretty drunk, in my opinion. A 140lb person drinking 8 shots would leave them close to blacking out unless they drank a lot, and very often.

https://www.healthline.com/health/alcohol/blood-alcohol-leve...
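
For a rough sanity check on those numbers, here's a sketch using the textbook Widmark formula (approximate constants only, not medical guidance):

  # Widmark estimate of peak BAC. 14 g ethanol per US standard drink and
  # r = 0.68 (typical for men) are textbook approximations.
  def widmark_bac(drinks, weight_lb, r=0.68, hours=0.0):
      grams_alcohol = drinks * 14.0
      body_weight_g = weight_lb * 453.6
      bac = grams_alcohol / (body_weight_g * r) * 100
      return max(bac - 0.015 * hours, 0.0)  # ~0.015 %/hour elimination

  print(f"{widmark_bac(8, 140):.2f}")  # ~0.26 for a 140 lb person after 8 drinks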


It's over 3 times the legal limit for the most forgiving definition of drunk driving in Colorado.



