No matter the cause of this accident or previous ones, Tesla's response highlights why I'll never buy one of their cars. When an accident occurs, the response by Tesla should be nothing more than "we will provide any and all data we have concerning this incident to the appropriate authorities when we receive a valid request for that information."
Musk tweeting about whether or not the driver purchased certain features is insane. He shouldn't have access to that data, let alone be allowed to publicize the information.
I recognise the risk — these days I assume that entirely innocuous facts about me today can become socially unacceptable in the future, having witnessed this happen to multiple people in my life already.
I also recognise that if Tesla doesn’t get out in front of every single incident, it may set back the replacement of human drivers with safer AI drivers.
I also also recognise that Musk is wildly overpromising on the self-driving tech, and I really only trust him as a rocket scientist and a salesman (whether the thing he’s selling is cars or visions of the future), not on digital privacy.
I don’t really trust any famous person for privacy, because they necessarily don’t have even close to as much of it as a normal person.
>I also recognise that if Tesla doesn’t get out in front of every single incident, it may set back the replacement of human drivers with safer AI drivers.
Tesla doesn't need to get out in front of every incident. Musk feels the need to because of the outlandish claims he continues to make about how close Tesla is to having autonomous vehicles. This is something Tesla's board should have reined him in on long ago.
>I don’t really trust any famous person for privacy, because they necessarily don’t have even close to as much of it as a normal person.
I agree that famous people have less privacy, and there's a lot of unfairness around that. But even if Musk hadn't chosen to become a public spectacle, that doesn't give him the right to treat others the same way.
I'm not convinced that 'autonomous' driving will ever be better than non-distracted, sober drivers. Suppose this is a task that, short of overhauling all our road infrastructure, ultimately requires AGI? I think we generally have way too much confidence that cars will ever drive better than people, at least on the current road system. I am not saying that they definitely won't, only that I'm not sure we should take the attitude that anything that gets in the way of autonomous driving is sacrificing human lives. We don't know that.
I also worry that mixing autonomous systems with people will result in more accidents, both in the case of drivers nodding off instead of supervising the car, and in the case of other drivers being surprised by the autonomous car's behavior. On top of this, it would not be surprising if autonomous car accidents were more psychologically devastating - they would likely commit different categories of error than human drivers.
Conversely, if the tech industry wanted to save lives on the road, easily the most impactful thing they could do is shut off any phone screen moving faster than five miles an hour. This would inconvenience passengers, but we did after all survive for many years without phones in the car.
> Suppose this is a task that, short of overhauling all our road infrastructure, ultimately requires AGI?
I would be surprised if level 5 self driving requires an AI smarter than a mouse or an eagle — freeform environment, arbitrary weather, hostile others, camouflage(!), long range navigation.
The rest of your points I broadly agree with though.
I don’t think this is a great comparison. Firstly, sheets of glass are not a common road obstruction, and even if they were, cars have the added advantage of more sensor types (it doesn’t require much more brain; it’s equivalent to just another ‘colour’). Secondly, I’ve also been hit by a car driven by a human.
The driver had stopped at a give-way (major-minor junction) [0], then failed to look in my direction and pulled out into me as I was turning into that road.
I came to the conclusion years ago that we're going to need RFID tags on everything (or something similar) to make FSD work safely enough without AGI. We'll probably also need dedicated lanes in some places. Nothing I have read or seen yet has given me any reason to think differently.
Edit: The common counter-argument I see to this is that tags could be altered maliciously. My response is that we've already been building systems which scan for and filter out malicious changes for a long time, and we keep getting better at it. It seems to me like this kind of system would be far easier to design & update than any FSD software I know of.
RFID is no solution at all. How do you ensure a toddler running into the road is properly RFID-chipped?
If you need every possible obstacle to be tagged with a locator beacon, you’ve essentially just reinvented 1970s autonomous subway or PRT technologies, which require the track to be completely fenced off so that no surprise animals, humans, or debris end up on it.
The idea is not that it would be RFID only, but to reduce the number of non-tagged objects & people so false positives happen less often. Obviously you still need a way to deal with random, unexpected obstacles, but it's a much smaller problem space when the surrounding environment is mostly properly tagged.
There's a very real risk that half-assed "AI" driving just leads to worse human drivers that stop paying enough attention to driving normally. Sensors are definitely better than humans at reacting to collisions though.
Hah, I thought you meant worse drivers outside of the Tesla. I guess if I wanted to be an ass, I could cut off most Teslas because they have the auto-brake feature. (Well, most modern cars have it.)
You know who’s making real progress on AI driving? Waymo is. I don’t want to be too dismissive of EV progress thanks to Tesla, but I feel like there’s no contest about who is pushing AI driving forward beyond the status quo.
And Waymo doesn’t need a professional troll going around garnering media support 24/7.
Waymo believes that the roads need to be fully 3D mapped, with crosswalks, street lights, and signs, before their cars can autopilot safely on them. More mapping will happen if/when the AI autopilot is ready.
I believe Waymo works better in a pre-scanned area where there are no construction sites or major changes, but in a general context you can at least attempt to use Autopilot, which you can't with Waymo. Thus I would not say that Waymo is miles better.
About Waymo: I used to think they were a leader in this space, partly based on their published safety statistics, but I am no longer so sure.
I had the great fortune of living in "Old Palo Alto" (think "mega rich tech CEO paradise") for a year while my wife was at Stanford. This is a grid of low-traffic, low-speed, sidewalked residential streets, ~10 minutes on all sides from the nearest highway, with typical mid-Peninsula weather (in other words, essentially perfect driving conditions).
I would frequently see Waymo vehicles going all through the neighborhood at 6-9 AM on Sunday mornings when I was out on long runs. They would often be essentially the only cars I saw on the road.
If there is a better way to pad your urban safety statistics, I do not know of one.
I have my mom texting me that I shouldn't drive my Tesla because of this, and that I should sell it and buy something else.
How many of those people will read the follow up paper from the NTSB?
This is literally the definition of a smear. Shout something bad about a successful X, and by the time the actual truth comes out, the damage is already done and people have moved on.
This is a problem created entirely by Musk. He repeatedly makes claims about Tesla's technology that even their engineers say don't match reality [1]. No other car company gets the scrutiny Tesla does because no other CEO persists in making outlandish claims about the capabilities of their vehicles.
Tesla needs Musk to do two things: 1) Stop wildly over-promising Tesla's FSD capabilities, and 2) stop publicly posting rebuttals to any news about accidents involving Teslas. Combine this with recreating a PR team for Tesla who can work to push out real information about what the cars can do plus respond appropriately when asked to comment on any accidents, and the public and media will move on before you know it.
You seem to complain about it being based more on a PR angle rather than facts and truth, but the whole point of what the parent is saying is that this is a game Musk himself is pushing.
When the truth is the PR angle, it should be pushed.
A lie shouldn't be made up to advance PR, but if someone is reporting an actual lie? Anyone speaking up their truth has my 1000% backing, even if I do not agree with that position.
I am not willing to join the post-truth era of American politics.
> Musk tweeting about whether or not the driver purchased certain features is insane. He shouldn't have access to that data, let alone be allowed to publicize the information.
I agree with you up to this phrase. A company knows exactly what you purchased from them, so knowing what a certain customer bought is just a call away, and not only for the CEO.
Regarding publicizing whether someone bought a certain feature or not: I think withholding that information, when the media and even the police were blaming a non-existent feature for the accident, is more than you could ask of most of us.
Choosing between risking a fine for divulging purchase data and taking a costly, unfair reputation hit while the police investigate? The decision seems obvious.
The media does not care about the accident itself (nor the NTSB findings), so the results won't make headlines. Let's see how many newspapers retract their fake news about this particular accident...
What they care about is all the rage they can spark from anything seemingly bad that can be associated with Tesla and/or Musk.
I'm sure they would change their coverage if Tesla were paying billions to them through advertising. But they don't, so here we go.
The story was widely reported as a driverless Tesla crash. In fact, there are no driverless Teslas. Furthermore the Autopilot driver assistance feature was not even used and could not have been used. It's too bad you will never buy one of their cars because of Musk trying to correct this story, because their cars are quite good.
If you actually were a potential Tesla buyer before this, you'd certainly be in the minority if this is what changed your mind. Most people see the transparency as a good thing, especially people with concerns about whether their own car might kill them. By getting in front of the story, he's doing his job. Correcting a false and derogatory narrative that threatens your differentiating feature in an extremely competitive market is absolutely the most important thing he should be doing.
It feels a bit odd for them to say Autopilot could not have been used in that area. "In a reconstruction using a similar Tesla, the agency found that Autosteer was not usable on that part of the road, meaning the Autopilot system couldn't have worked." Who's to say that Tesla didn't update something between the crash and when the NTSB tested Autopilot in that area? I don't think they did, but are there any safeguards in place to detect that happening?
It's one thing to say that a car cannot do something because of a physical issue with the car, vs. some software that can be updated or have bugs that haven't been caught yet.
This point feels a little strained. There is a question ("Was Autopilot in use?"), an experiment was performed to answer that question, it produced an informative result, and the result was reported. You seem to be attacking the linguistic structure of the statement by interpreting "couldn't" to mean some kind of existentially inviolate truth and not just... the result of the experiment.
I mean, come on. This is what people were saying within hours after the crash: autopilot as shipped simply doesn't have behavior consistent with this accident. It won't command the accelerations that would have been required on that tiny street and (as measured by the NTSB) won't even engage in that particular environment.
It is sensationalized, but it could have been a bug rather than designed behavior. It could have been a software bug, or a rare interaction with hardware or environmental conditions.
The report said the car's lithium-ion battery case was damaged, and in the ensuing fire the infotainment system's onboard storage device was destroyed.
However, the "restraint control module," which stores data such as whether seatbelts were in use, how fast the vehicle was going, acceleration information, and airbag deployment, was recovered despite fire damage and turned over to the NTSB's laboratory for evaluation.
The point is that Tesla can do over the air updates, which means (in theory!) that the experiment could be gamed because it would use a different version of the software.
My wife's truck has a lane guidance system. It will gladly keep you in a well-marked lane. When there are no lines or bad lines, it makes a loud ding and shows a big orange cancellation message on the dash display. This is a current-year F150. I can only imagine Tesla has at least as good a system as Ford. As for updates, the infotainment system has a record of the last check-in and the last update, as well as the current software version, similar to your phone or computer. With the number of Tesla hackers out there, it seems nearly impossible for the conspiracy you're suggesting to be a reality.
>it seems nearly impossible for the conspiracy you're suggesting to be a reality.
What conspiracy? majormunky is saying Tesla might push out updates every so often, that might affect whether Autopilot can be used, and the investigators might not have realized that.
> Who's to say that Tesla didn't update something between the crash and when the NTSB tested Autopilot in that area? I don't think they did, but are there any safeguards in place to detect that happening?
That one. Pushing an update for a very specific location with no visible lane markers (implied by the article) just to disable the lane guidance system seems very much like a stretch. "I don't think they would, but are there safeguards" implies the author of the comment is describing intent. I don't think it's a stretch to say that's a statement of conspiracy.
On the other hand wouldn’t Tesla be irresponsible if it didn’t manually review sections of road involved in a crash and perhaps override the settings if automated systems had made bad calls?
Can you imagine a second wreck on that same stretch of road and the accusations against Tesla for not doing the manual review envisioned above?
Have Tesla said definitively that they cannot override capabilities without updating the version number?
I think manual review might actually be the worst way to solve this problem, because you're relying on human labor to fix every single hiccup, which defeats the point of an automated system anyway. I'm sure they run the route through a bunch of simulations to tune the ML model, but I seriously doubt they would go as low-level as using telemetry to disable certain features by hand. The best solution would be updating the model so that it makes the right calls on the right type of road, so that when it encounters a similar road you don't need to wait for another fatal accident before kicking the review process into gear and making the necessary changes again.
tl;dr it doesn't scale and would be a logistical nightmare to maintain individual configurations for every road the system struggles on
Of course it would not solve the issue in a generic way. To me, the reasonable fix would be to immediately disable the feature in the specific area manually and then retrain the model or whatever it is they do.
The article says that it couldn't have been in autopilot because autopilot requires lane markings for it to be activated and that area has no lane markings. It's nothing to do with software versions.
Not a statistically meaningful possibility. If someone suffocates you don’t assume all the oxygen in the room just happened to randomly end up away from their face.
Do you have a source for those statistics? I have seen bugs in my Tesla but never had the oxygen issue happen. This is just anecdotal of course, but I suspect your comparison is off.
Sure, but the existence of individual bugs is very different from picking a very specific bug.
It’s the same as assuming that because someone regularly wins the lottery, a specific ticket must have good odds. The space of all possible bugs is so vast that if the odds of every possible bug were as high as a lottery ticket's, the software would be unusable. Some bugs are of course more likely than others, but assuming this one is more likely than average requires justification.
A man who uses his celebrity status to pump cryptocurrencies in order to enrich himself and the companies he owns has no moral standards and no credibility.
Even if he was pumping crypto, that doesn't mean he has no morals or that he would lie about technical details, particularly where he can be easily found out.
To be clear, are you referring to the guy in the company selling cars claiming that the current models sold today will be fully autonomous with only software patches?
Hah, the more evil version of billionaire dodgy car salesman would have a database script to modify the purchase order to retroactively remove that item.
"Self-drive? Our records say he never bought it!".
It has everything to do with character: since he provides no evidence for what he says, we have to trust him. Therefore, the first question we must ask is: is this person trustworthy?
Autopilot also includes the less-capable levels, not just FSD. For example, Enhanced Autopilot uses Autosteer and TACC which are both mentioned in the article.
Also, just because the driver was in the driver's seat at the time they were on camera doesn't preclude some sort of stupid behavior like leaving the driver's seat while on "autopilot".
All facts aside, people are still required to be in control of their cars and need to be legally able to intervene.
"But I used autopilot" is probably the most used excuse to avert fault away from the driver. Especially in the most broken insurance system on the planet where an accident like this leads to such a change in life that the driver is scared of the outcome.
My understanding is that whether Autopilot is in use or not has no bearing on your insurance. As far as the insurance company is concerned, it's an accident while you're responsible for the operation of the car.
I’ve heard that the presence of Autopilot is actually viewed negatively by insurers.
This goes against intuition (wouldn’t a machine that behaves very reliably be better than a human driver?), but my dad used to say that the insurance company’s calculation was only ever about how likely they thought they were to win if they sued the other party. I don’t know if that’s true, but something rings true about it.
My insurance rates are the "same" as a similarly priced Audi or such.
As Teslas are safer over time, one would expect rates to gradually go down. Some repairs are a bit more expensive, so could be a bit of a balancing act.
> As Teslas are safer over time, one would expect rates to gradually go down.
In Germany it's currently the opposite, due to the unusually high number of Teslas that are declared a total loss because the auditors who work for insurers usually don't know that their frames can be partially replaced.
Additionally, in practice there's often no way to get a Tesla repaired at that point, but Tesla's software lockout policy is a different story.
At the same time, “he/she used Autopilot when it is only a driver assist” is a similarly overused excuse made by Tesla all around. If so many people do this, maybe they did buy the car anticipating that feature, which makes it at least misleading advertising.
The ironic result of this could be that insuring an Autopilot car gets more expensive than driving a normal car due to the high cost in investigating an accident after it happened.
According to the article, the main storage aka black box was destroyed in the fire. They are trying to pull data from a secondary controller that was less damaged.
Can you provide an example of a publication that "strongly implied or outright claimed" on their own that autopilot was involved?
At least in the mainstream press, they seem to have reported what the police were saying at the time, and indicated that they were quoting the police. For example, from the NYT [1]:
> Two men were killed in Texas after a Tesla they were in crashed on Saturday and caught fire with neither of the men behind the wheel, the authorities said.
> Mark Herman, the Harris County Precinct 4 constable, said that physical evidence from the scene and interviews with witnesses led officials “to believe no one was driving the vehicle at the time of the crash.”
They also attempted to get Tesla's side of the story, but Tesla didn't respond:
> Tesla, which has disbanded its public relations team, did not respond to a request for comment.
I'd be upset with the police for making such confident statements that were contradicted by later investigations. But the media, who may not have had access to the accident scene, seems to have done a reasonable job reporting, at least from what I've seen.
In an ideal world, maybe they'd do an independent investigation instead of relying on police statements, but it's a question of how to allocate resources across all things they could be reporting. It probably makes it more difficult to do this when Tesla won't talk to them.
> They also attempted to get Tesla's side of the story, but Tesla didn't respond:
To be fair, that's a little disingenuous given that Musk was literally tweeting denials at press time. Tesla certainly didn't refuse to respond; they had lots of things to say. But the article didn't get them in time, so it went to press implying they had something to hide.
As far as the media reporting the story: it's fine. It was newsworthy. The "there was no one driving the car" business from the fire chief seems in hindsight to have been the real failing. That quote seems to have been a lot juicier than was warranted. In fact, it seems almost certain that someone was manually operating that car via some mechanism or another.
I don't think this was a case of them simply not getting a response in time. The NYT will usually add an update to the online article if they receive a response after it's published. And according to Electrek, Tesla refuses to comment on all press inquiries [1].
> Since dissolving the team at the beginning of the year, I had hoped that Tesla would try to build a new one, but it doesn’t seem to be the case.
> Instead, Tesla is leaving all press inquiries unanswered and doesn’t seem to comment on any story.
But I agree with you that the local authorities really screwed up here when communicating with the press.
> The NYT will usually add an update to the online article if they receive a response after it's published
Then why didn't they update this article? Again, it's not like Tesla didn't say anything relevant. You're making what amounts to a process argument: Tesla didn't respond in the right way and so the Times is allowed to claim that they refused to comment because it's technically true.
But "technically true" isn't what journalism is supposed to be about, and the Times is usually pretty good about that in its hard news coverage. Which is why you can tell that this is really a business/investment story being pawned off as news.
Headline: A driverless Tesla crashed and burned for four hours, police said, killing two passengers in Texas.
Lede: "Just before midnight Saturday, a Tesla drove swiftly around a curve, veered off the road, struck a tree and burst into flames in The Woodlands, Tex., a suburb north of Houston, police said."
"Driverless" is used here to relate the reader to "Driverless cars," aka "self driving car." Note that the police only said that the driver wasn't in the driver's seat at the time of the crash.
The lede puts the agency for the driving on the car: "a Tesla drove..."
Looks like NBC did it too:
> The vehicle, a 2019 Tesla Model S, was driving at high speed when it ran off the road, hit a tree and burst into flames, he said.
Your whole argument hinges on the fact that the publications are intentionally trying to smear Tesla and their AutoPilot feature. The fact of the matter is that those articles very clearly state that they're quoting what the police say about the crash.
From the Verge article you cite, which is supposedly the example of trying very hard not to divulge this fact:
> Authorities in Texas say two people were killed when a Tesla with no one in the driver’s seat crashed into a tree and burst into flames, Houston television station KPRC 2 reported.
It's the first line of the article.
> Preliminary reports suggest the car was traveling at a high rate of speed and failed to make a turn, then drove off the road into a tree. One of the men killed was in the front passenger seat of the car, the other was in the back seat, according to KHOU. Harris County Precinct 4 Constable Mark Herman told KPRC that “no one was driving” the fully-electric 2019 Tesla at the time of the crash. It’s not yet clear whether the car had its Autopilot driver assist system activated.
That's the third paragraph. I don't think they're hiding that they're parroting police officials.
>Your whole argument hinges on the fact that the publications are intentionally trying to smear Tesla and their AutoPilot feature. The fact of the matter is that those articles very clearly state that they're quoting what the police say about the crash.
I don't think it has anything to do with smearing Tesla. I think that people have anxiety about self-driving cars and publications know that, so they play to that. This isn’t about Tesla, or AutoPilot. It’s about getting the details right.
If these publications are claiming to quote the police, they're doing a shitty job of it, because the police said only that there wasn't anyone in the driver's seat at the time of the crash. The police didn't comment on Autopilot, and didn't comment on the car being "driverless" (which would imply that it never had a driver to start with, which it did. If someone takes their hands off the wheel to stop their kids in the back seat from fighting, does the car become driverless for that moment? If so, where are all the other news stories calling those crashes driverless? Car accidents happen all the time while the car is on cruise control. Is that driverless? Why not? It's essentially the same technology that Tesla has for adaptive cruise control).
I'm not going to dig up the article but you should be able to find it very easily:
A writer in the LA Times straight up called for federal regulators to step in based on this. I've seen newspapers go full activist before, but this was shocking because they were doing so based on what is obviously a false premise.
It doesn't take long to speak to a few Tesla owners and realize that Autopilot, even the non-FSD version, doesn't operate this way.
These days, the cycle with the fake news is long gone by the time anyone knows any facts. The facts arrive too late. The clicks have been clicked. The damage has been done.
The media moves on to its next money-making event. No real repercussions are to be found.
Almost every day media around the world are subject to libel lawsuits. Recently, Newsmax and Fox News were sued by Dominion Voting Systems and their employees and Meghan Markle sued Daily Mail for publishing personal letters.
And most of the respectable news organisations still abide by the core tenets of journalism e.g. multiple, vetted sources.
The ones that don't claim that no reasonable person would take them seriously when someone sues them for defamation (Fox has done this multiple times).
It would also be great if we got retractions of all the "burned uncontrollably for hours" reports where the firemen "couldn't put it out, and Tesla didn't respond to calls to help".
It was "strongly implied" because no one was in the driver's seat. Now there is evidence there was a driver, after an investigation. Why should they retract anything?
It would be good form, after publishing reactionary pieces prior to the results of the NTSB investigation, for those publications to take some responsibility for what they publish.
Better form would have been not publishing reactionary articles in the first place.
The actual reason they don't need to retract anything is that the sensational articles already paid off, and they're now off to write another unsubstantiated claim of a headline on something else.
Why would a serious publication take statements of the police at face value much less as evidence of anything?
> It was "strongly implied" because no one was in the driver's seat. Now there is evidence there was a driver, after an investigation. Why should they retract anything?
A publication which does not retract or correct misinformation is a key part of the problem.
Oh, I didn't realize AP had been a car feature for so long. By that logic, dozing and drunken drivers must also have "strongly implied" Autopilot, because no one was driving...
Another anecdote: an acquaintance let a friend borrow their Tesla Model X Plaid (or similar; I think this was before Plaid). Upon leaving their subdivision, the "friend" immediately totaled the X by launching it into a drainage ditch in Florida. I've never driven a Model X or any car with this much power. I am surprised by how someone can get into so much trouble in 550 feet, but clearly people are surprised and don't let up on the accelerator.
I just rented a similar Model X (P100D) last week. They are remarkably fast for the size of the vehicle, but I found the accelerator pedal to be the closest thing to an ideal throttle control that I've experienced. It's extremely gentle and forgiving in slow/close situations (e.g. parallel parking), has instantaneous response and backing off of it brings an aggressive regen braking effect that is very featherable (?) and useful.
I would guess they just panicked in a strange and extremely expensive car and weren't able to lift their foot off the pedal once doom was imminent. It happened to me when I was a kid: I ran over a fence while turning my dad's girlfriend's car around in a tight parking lot. I accidentally goosed the throttle while backing up, panicked, then stomped it to the floor while backing over that poor fence. Very strange experience, almost like my leg was being shocked and I couldn't control it.
It could also be the carpets. The carpets suck. They have good velcro to stick to the floor, but there's a terrible glue that keeps the carpet attached to the velcro and it softens and unsticks in the heat.
If you have those carpets, I would recommend sewing the velcro patch to the carpet so it doesn't come loose.
Yep, I’ve been there too. Mom’s new Camaro; ironically I hit my neighbor about 10 miles from our neighborhood. No one was hurt, but I have never dreaded a phone call to mom more, heh.
And he was arguing that the Plaid's 0-60 time of 1.99 seconds had an asterisk: with 1-foot rollout.
Rollout is when you put your car's tires between the two light beams at the drag strip. When the lights change and the car starts moving, the point where the front tire clears the first beam is the 1-foot rollout location, and the clock only starts there.
What I found interesting:
@ 1 foot, a Tesla will be going 5-6 miles per hour
@ ~100 feet, it will hit 60 mph
@ 550 feet... a P100D can probably be going pretty fast (a rough upper-bound estimate below).
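To put a rough number on that last line, here's a minimal back-of-the-envelope sketch (my own numbers, not from the video). It assumes constant acceleration, which a real car can't sustain once it becomes power- and traction-limited, so the 550-foot figure is an upper bound rather than an estimate:

    import math

    # Back-of-the-envelope only: constant acceleration assumed, which
    # overestimates speed once the car becomes power/traction limited.
    MPH = 0.44704   # metres per second per mph
    FT = 0.3048     # metres per foot

    v60 = 60 * MPH                 # ~26.8 m/s
    a = v60 ** 2 / (2 * 100 * FT)  # ~11.8 m/s^2 (~1.2 g), if 60 mph arrives in ~100 ft

    v550 = math.sqrt(2 * a * 550 * FT)                      # speed after 550 ft at sustained a
    print(f"upper bound at 550 ft: ~{v550 / MPH:.0f} mph")  # prints roughly 140 mph

The real number would be lower (and consistent with the 80-85 mph guess further down the thread), but either way 550 feet is plenty of room to get going frighteningly fast.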
Kind of a strange video. The crux of it is that Tesla is being misleading, but then it is pointed out that this is how the rest of the industry quotes 0-60 times. Does he really want Tesla to be the only "honest" one?
The problem is that the 0-60 time for the regular model is measured from a standing start, which is the standard for regular cars and which makes it appear worse, and there is no visual indication on the main page that the times are measured in different ways.
Only when going on a sub-page do you get an asterisk with a footnote mentioning the difference.
Possibly, though if the original articles were reporting that 'no one was in the driver's seat', then there must have been a fairly substantial impact to throw the driver into the rear of the car.
Plus, it was a 100D on a back road, I can't imagine they'd be doing a leisurely pace... Conjecture, of course.
Didn't one of the early statements from the investigation say that they knew that no one was in the driver's seat at the time of the accident? I don't know on what basis they would make that statement and of course it could be erroneous, but all the more reason to remember we all know so little we probably can't reason about it well enough for conjecture to be much better than random guessing.
Yeah, but despite his certainty the usual disclaimers about initial investigations apply. I think the officers weren't thinking about how many people would pick up on their words.
Back road? It says they made it 550 feet from his home in a cul-de-sac before leaving the road. What's the quarter-mile trap speed of a 100D and what are the power numbers? How fast could it really have been going, under its own power, before gravity took over?
Electric instant-on torque can be a dangerous thing. So assuming just under the 1/8-mile number, we're probably around what, 80-85 mph before leaving the road? That'll do it. I assume it's a TX cul-de-sac, similar to mine and not dense-packed like my old one in CA. Plenty of space and temptation to do something ill-advised.
The fleet of EVs is very different from the fleet of light-duty ICE vehicles. You can't do a 1:1 comparison and have anything useful come out of it. There is no EV equivalent of a rusted-out 1995 S10 with a gas tank held in with a ratchet strap for anyone to crash, because EVs haven't been around that long.
I suspect once you control for vehicle age the difference is marginal. Statistically nobody is burning to death in late model luxury sedans (Tesla included) because they're well enough designed that practically nobody is getting trapped in the vehicle in the first place.
In Hollywood crashes maybe, but in the real world auto fires are not common at all. As of 2018, there are 56 fires per billion miles driven. Furthermore, only 3% of all fatal automobile accidents involved fire.
>> And burning alive because they couldn't escape the car. No big deal.
>This isn't uncommon in motor vehicle accidents, however.
Don’t gaslight me. You’re playing up vehicle fires to downplay the fact that lithium ion batteries burn easily and are hard to extinguish, and Teslas in particular are harder to exit when the power is out.
The saddest thing about this whole thing is that you’re carrying water for a charlatan.
Do Teslas/electric cars burn more frequently in severe crashes than ICE vehicles? Last time I looked, a few years back, the data showed that ICE vehicles burned quite a bit more frequently.
That is a really good question, tbh. I've seen comparisons that show Teslas burning less often, but those weren't specifically post-crash. A lot of ICE car fires come from fuel leaks, often after major repairs and without a collision.
I'm not aware of a direct comparison of post-crash fires. Anecdotally, it seems post-crash fires are rare with the 3 and Y and much less rare with the S/X.
Teslas are death traps. You can't put out their fires with water, and since their doors aren't mechanical you're stuck inside once the battery dies. This fire took over 4 hours to put out, with the fire department not knowing how to extinguish it.
> since their doors aren't mechanical you're stuck inside once the battery dies
This is maliciously false. There's a manual door handle just like you'd find in every other car; to the point where people new to them almost always use it by accident instead of using the button.
Doors that don't open on a power failure sound super fucking unsafe. Looks like on the Model S they do open manually in front if you pull the handle back far enough, and in the back via a hidden release cable under the seat, which sounds less than ideal. I'm not sure if this is the case with the other models.
The Model 3 has a manual emergency release in the armrest: it's one of the more awkward parts of the car, actually, because it's right where most people expect the door handle to be and the actual "door handle" is a button a little bit up.
The thing I continue to find interesting is the fear involved with an "autopilot" crash (even if this was not one).
Every day around the world, hundreds (1000s?) of people are killed in standard car accidents, whereas, as a proportion of the market, driverless cars cause a tiny number of crashes. Yet people panic, regulators bare their teeth, and protesters speak up as soon as a single accident is caused by (or assumed to be caused by) a driverless car.
I guess it is an illusion of control in normal cars, despite the fact that electronics should be better in almost every regard for safer driving.
> despite the fact that electronics should be better in almost every regard for safer driving.
Maybe in the future. For now electronics are still handicapped by AI that is not anywhere even remotely close to humans.
"AIs" ability to reason about traffic situations and, by extension, planning are laughable and even if that properly worked it would still be handicapped by image recognition I wouldn't trust to tell a trashcan from a car.
At this point it's pretty obvious that until better AI comes along we're stuck with terrible. Certainly just pouring ever more resources into "current gen" NNs won't get us anywhere.
It's a matter of trust. Do you trust a robot to make decisions of life or death? You can't empathize with the robot, you know it can't empathize with you, and you instinctively don't feel like you can understand or predict the actions of agents that you don't empathize with and that you don't think empathize with you. Humans make mistakes, but they're understandable or comprehensible mistakes; with a robot you don't know what types of mistakes to expect. You don't have a theory of mind for a robot.
> Every day around the world, hundreds (1000s?) of people are killed in standard car accidents
Hundreds? Lol. In India alone 150,000 people die in road accidents every year. That is almost 500 a day for 1 country. So over the world it must be well into multiple 1000s.
There is a group of people who are into cars and who agree that Autopilot is generally safer. They consider themselves exceptional drivers (and to be fair, they often are pretty good drivers), and therefore they think that in their particular case it's safer to not use it.
Of course, making that determination based on individual feelings is dubious (and perhaps there is, or soon will be, no person outperforming the AP).
I really don't understand how they can make this statement:
> The report states that when the car started, security video shows the owner in the driver's seat, contradicting reports at the time of the April 17 accident that the seat was empty when the car crashed.
Those two things aren't contradictory at all. The car's journey could have started with the driver in the driver's seat, but been empty when the crash occurred. A lot can happen in the time between the start and the end.
I suspect they tried to activate Autosteer, didn't realize they had failed, and let TACC (which did activate) drive them straight into some trees. So much for automatic emergency braking.
Automatic emergency braking systems don't generally brake for stationary objects once the vehicle is going beyond parking speeds. They're really designed to reduce collision forces between vehicles in motion (or recently in motion), and some more recent systems try to reduce forces between vehicles and pedestrians.
A Tesla is happy to slam into a parked emergency vehicle at full speed, as has been demonstrated several times; it's not surprising they don't stop for trees either.
A Tesla will, yes. But even a small Toyota Yaris will absolutely slam on the brakes if you drive at something above parking speeds. It's mandatory for a top Euro NCAP rating, so every 5-star rated car from 2021 has this.
My 2018 Subaru Forester will absolutely emergency autobrake at highway speeds. I've seen it happen when it detects a car exiting out of the lane as still being in the lane (road is curved so car is still in front, but I have no intention of following it).
Hmm, true, perhaps that was a bad example. It also brakes for cars that are waiting to turn (completely stationary) that it thinks I'm approaching too fast (say at 40 mph).
Right. I somehow doubt Subaru is ahead of the game compared to Tesla on that.
I took a look at how Subaru EyeSight works. It uses a dual-camera system to identify a vehicle ahead (as well as pedestrians and other specific patterns) and applies braking if it slows down. Some of it is limited to a 19 mph closing difference (surprise).
So the statement above that “mine emergency brakes with objects ahead at any speed” is, unsurprisingly, incorrect.
It doesn’t react to any object and would probably happily hit a tree under the right conditions, or a parked car. Happy to be proven wrong with a demonstration video of the car at full speed.
Uhh, no, it's totally correct that it will slam on the brakes.
Source: I own a Subaru, have driven it at a cardboard box in the middle of the street, and have had it slam on the brakes. I've also had it slam on the brakes when someone swerved into my lane, saving me from an accident. So... worth it.
A Tesla will also slam on the brakes in general. Someone swerving into your lane likely fits the 20 mph window that the Subaru is designed for. But that's not what's in question in this comment thread.
Have you tested yours against a stationary emergency vehicle or a tree? Otherwise there isn't much to discuss here.
This is not true for Teslas, and I've tested it (in a safe environment) in my own. It uses vision, as well as radar, to detect obstacles, and will brake from highway speeds to zero when there is a car stopped ahead.
I don't think it's guaranteed though. I've noticed the same as you, but I've also had it lock the brakes after seeing a shadow (lol) and I've had it speed up to 70 mph on Topanga Canyon Blvd (speed limit 45 mph) and nearly run into a parked semi.
Except the kind of radar used in a Tesla cannot do that for you, which is why AP ignores static objects and happily crashes into parked cars, fire trucks, ...
To add even more detail, as there still appears to be a lot of confusion:
The radar systems in these vehicles send out a radio pulse in a broad approximate-cone forward. They get bounces back from everything that reflects radio in front of them. Distance from the object is calculated by time between pulse and response. Speed towards/away from the object is calculated from Doppler shift of the radio frequency.
There are two main things that these systems can't detect.
1. Speed of the object perpendicular to the direction of radio wave travel.
2. Location of the object within the approximate-cone the radio pulse travels in.
Note that thanks to the second, you can't calculate the first with higher-level object tracking, either.
So the data you get back is a list of (same-direction velocity component, distance) pairs. There's no way to distinguish between stationary objects in the road and stationary objects above the road, to the side of the road, or even the surface of the road itself.
Radar just doesn't provide the directional information necessary to handle obstacle detection safely.
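To make that concrete, here's a minimal sketch (my own illustration of a generic pulse-Doppler automotive radar, not Tesla's actual stack) of what a single return contains and what it doesn't:

    # Illustrative sketch of a single pulse-Doppler radar return.
    C = 299_792_458.0  # speed of light, m/s

    def radar_return(pulse_delay_s, doppler_shift_hz, carrier_hz):
        # Range: the pulse travels out and back, so halve the round-trip distance.
        distance_m = C * pulse_delay_s / 2.0
        # Radial speed from the two-way Doppler shift (positive = closing).
        radial_speed_mps = doppler_shift_hz * C / (2.0 * carrier_hz)
        return distance_m, radial_speed_mps

    # Example: 77 GHz automotive band (assumed), echo after 0.5 microseconds
    # with a 10 kHz Doppler shift.
    d, v = radar_return(0.5e-6, 10_000.0, 77e9)
    print(f"~{d:.0f} m away, closing at ~{v:.1f} m/s")  # ~75 m, ~19.5 m/s
    # What's missing: bearing (where in the cone the echo came from) and
    # cross-range velocity, so a stationary overhead sign and a stationary
    # car in your lane can produce identical returns.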
This varies based on exact hardware revision, software revision including OTA updates, and active settings at the time. I don’t think anyone but Tesla can make any definitive statement about the behaviour of any given car at any given point in time.
I agree, and I can't fathom why this is an unpopular opinion. It's bad enough when my phone or computer suddenly works differently (or not at all!) due to a mandatory OTA update. It'd be terrifying if my car's safety features could do the same.
It's weird that the NTSB drove another Tesla car on the same road and concluded that because autopilot wasn't available for their car it couldn't have been on in the crash.
Aren't vehicle logs a thing? What if there was a bug which mistakenly turned it on in the original car? What if there was a software update sometime between the crash and the investigation which changed the behavior? The article even says that the investigators recovered the damaged "black box", but a month later we have no details about it?
> The car's restraint control module, which can record data associated with vehicle speed, belt status, acceleration, and airbag deployment, was recovered but sustained fire damage. The restraint control module was taken to the National Transportation Safety Board (NTSB) recorder laboratory for evaluation.
Also, the accident was less than two months ago, not two years. It's likely they're still recovering and analyzing data.
Teslas log a tremendous amount of data, but my guess is that it was melted in the fire and unable to upload before it was destroyed. There is a black box with some logs and I believe 8 photos that are taken when the airbags deploy but perhaps the fire destroyed that too.
> The car's restraint control module, which can record data associated with vehicle speed, belt status, acceleration, and airbag deployment, was recovered but sustained fire damage. The restraint control module was taken to the National Transportation Safety Board (NTSB) recorder laboratory for evaluation.
The report is linked in the article. They report that the storage for the media control unit (where the video goes) was destroyed. The restraint control module (more limited data capture including speed/acceleration and airbag status) "was recovered but sustained fire damage" and has been taken to a lab for evaluation. They might get something off of it.
- The gross generalization of saying that it couldn't have been enabled because they failed to enable it in their specific situation.
- The gross extrapolation of saying that the driver must have been in the seat when it happened because the driver was in the seat when the car started.
I mean, how useless of an investigation is that?
Where are the log analysis, witness reports, and camera images from points closer to the crash site?
Nothing to see here. They were just going 30, hit a tree randomly, then got in the back seat and burned to death over 4 hours in a fire that was impossible to put out, with the doors non-operational.
Oh, and I'm sure all these brake failures in China have nothing to do with it.