Hacker News

Having taken Waymo rides multiple times in San Francisco I can attest to how awesome Waymo is. I am worried Tesla will bring a bad name to the whole robotaxi industry. Waymo has never had an at-fault injury accident. Tesla FSD has killed many people.


In the US it is officially 2 deaths involving Teslas with FSD engaged, with a couple more under investigation where it has not yet been verified that FSD was engaged.

Still way more (no pun intended!) than Waymo, which has had one vehicle involved in a 6-car crash that killed someone in one of the other cars. Besides the human fatality, a dog was also killed, and 5 other people were injured, some seriously. The Waymo was empty at the time.

Ironically this crash was due to a Tesla.

The Waymo and the other cars were all waiting at a red light when the Tesla rear ended them at 98 mph.

The driver of the Tesla was not impaired at the time of the crash. He says he tried to stop but the brakes were not responding.

The driver was from Hawaii, and it was later discovered that there is someone in Hawaii with the same full name, Jia Lin Zheng, with a record of around 20 traffic crimes over the last 20 years, including excessive speeding and running red lights.

I don't know whether it has been determined that the Jia Lin Zheng visiting from Hawaii who caused the San Francisco crash is the same person as the Jia Lin Zheng in Hawaii with the long record of unsafe driving.

I'm not familiar with the naming conventions of whatever country/culture that name comes from. Is Jia Lin Zheng the kind of name that probably many people have in Hawaii or is it one that is likely rare?


> In the US it is officially 2 deaths involving Teslas with FSD engaged, with a couple more under investigation but not yet verified that FSD was engaged.

That is not a useful metric for Tesla. They disengage FSD when they detect a potential accident.


> They disengage FSD when they detect a potential accident.

Even if that were true, any accident where FSD was disengaged up to 30 seconds prior is counted as being engaged. And 30 seconds is long enough in driving that if FSD disengaged that long ago, there's no possible way any accident at that point was related to it.


>Even if that were true, any accident where FSD was disengaged up to 30 seconds prior is counted as being engaged.

source?



Well that, and most of the incidents I've seen haven't been using FSD but instead traditional Autopilot which hasn't received updates in years.


Bullshit. And I am tired of having to call people out on it.

Autopilot shuts down when it can't handle the situation it's in. This doesn't help it "avoid blame" at all, because Tesla considers Autopilot implicated in any crash that happened within 5 seconds of Autopilot being disengaged.

> To ensure our statistics are conservative, we count any crash in which Autopilot was deactivated within 5 seconds before impact, and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed.

NHTSA's reporting requirements are even more conservative:

> Level 2 ADAS: Entities named in the General Order must report a crash if Level 2 ADAS was in use at any time within 30 seconds of the crash and the crash involved a vulnerable road user being struck or resulted in a fatality, an air bag deployment, or any individual being transported to a hospital for medical treatment.
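Put together, the two quoted reporting rules amount to a pair of simple attribution windows. A toy sketch (hypothetical function and parameter names, and a simplification of the real criteria, which have more conditions):

```python
def tesla_counts_crash(seconds_since_disengage, airbag_or_restraint_deployed):
    """Toy model of Tesla's stated rule: count a crash if Autopilot was
    active at impact or deactivated within the prior 5 seconds, or if an
    airbag/active restraint deployed."""
    return seconds_since_disengage <= 5 or airbag_or_restraint_deployed

def nhtsa_counts_crash(seconds_since_disengage, severe_outcome):
    """Toy model of the NHTSA Standing General Order rule for Level 2 ADAS:
    count if the system was in use within 30 seconds of the crash and the
    crash had a qualifying outcome (fatality, airbag deployment, hospital
    transport, or a vulnerable road user struck)."""
    return seconds_since_disengage <= 30 and severe_outcome
```

So, for example, a severe crash 20 seconds after disengagement would fall outside Tesla's own 5-second window but would still be reportable under the NHTSA order.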


It’s still counted as FSD-enabled. Would you prefer FSD to remain active and potentially cause further collateral damage after an accident, when the vehicle has taken who knows what kind of damage? Safer to shut it down while systems are still working and brace for impact. Seriously, use your brain.


Can you elaborate on how things are counted?

Is it "counted" if FSD was engaged within a certain time frame prior to a crash? If so, do you know what time frame?

Or only if it was disabled automatically due to detecting a potential crash?

The latter would still be problematic, as a human driver noticing a problem just prior to the FSD disabling itself would potentially be missed (right?).

Do you know who does the counting and who makes the rules in this regard?

Asking as you seem to have more knowledge here than me.



Thank you!


Your argument is ridiculous. Might as well attribute all accidents with any Tesla in the vicinity to FSD then?

The data is collected in all of these incidents, and most people have seen the clips of FSD avoiding otherwise potentially lethal accidents, so "They disengage FSD when they detect a potential accident" is also just patently untrue.


Teslas disengage their self driving just before a crash. That’s how the company can say FSD hardly ever causes crashes.


They don't do that, but even if they did, it wouldn't skew the statistics, since the NHTSA still counts it as engaged if it disengaged within 30 seconds of the crash.


Wrong. They disable it because it’s a safety thing. They still count it towards their metrics that FSD was enabled.


Wrong. Tesla doesn't release safety metrics:

https://www.reuters.com/legal/government/musks-tesla-seeks-g...

Waymo publishes tons of safety metrics on their website. Here's an analysis/summary:

https://www.damfirm.com/waymo-accident-statistics.html


You're immediately proven untrue: https://x.com/Tesla/status/1793561959734051273?referrer=grok...

And the links don't even touch on things that are comparable lol. Waymo might keep all the data to themselves as they own the cars, while with Teslas, the drivers can and will just grab the camera data themselves, and many post it to YouTube.


This "statistic" is absolutely horseshit.

And Tesla knows it.

You cannot compare "the subset of conditions, locations, weather, street markings where FSD is available, because if they're not suitable, you can't use it" against "all drivers, all conditions, all weather, all the time, whether suitable or not" and keep a straight face.

Also, "fun" facts:

Tesla doesn't count an incident as an accident if the airbags don't deploy. Modern airbag systems don't blindly deploy on impact at a certain speed. Sensors assess speed, intensity of impact, angle, and chassis intrusion before determining whether to trigger airbags. Sometimes it might just be seatbelt tensioners that fire. You can hammer into someone at 30mph and, because of those variables, airbags don't deploy (I've witnessed this literally hundreds of times as a firefighter/paramedic). But no airbags? That 30mph collision? "Not an accident". This also covers accidents where damage to the vehicle was so severe that airbag systems were unable to deploy. Not an accident in Tesla's "statistics".

Even more egregious - Tesla specifically does not count fatality accidents in its accident stats. Why? Who the hell knows, but they don't, and have said so themselves.

Tesla also redacts more information than any other company to the NTSB about driver assistance system incidents. Including Waymo.

So, due respect, nothing has been "immediately proven untrue". The only thing known is that Tesla is happy to pimp themselves with garbage logic and math, producing numbers they surely know are close to useless and deceiving.


[flagged]


Unsubstantiated indeed:

> But statisticians have pointed out serious analytical flaws, including the fact that the Tesla stats involve newer cars being driven on highways. The government’s general statistics include cars of all ages on highways, rural roads and neighborhood streets. In other words, the comparison is apples and oranges.

https://www.latimes.com/business/story/2022-12-27/tesla-stop...

Biased media?

Then let's try Tesla's own words:

> and we count all crashes in which the incident alert indicated an airbag or other active restraint deployed. (Our crash statistics are not based on sample data sets or estimates.)

https://www.tesla.com/en_gb/VehicleSafetyReport

You're trying too hard to cope. Tesla's own vehicle safety report says that they don't count accidents without airbag deployment.

There's plenty of points about Musk's bullshit. Just a few months ago he was telling investors that Teslas can ignore noise from dirt, dust, snow, because Tesla's cameras do photon counting.

Spoiler: they don't. They can't. Photon counting requires special cameras. It requires an enclosed lab so you can, you know, actually count the photons.

But then there's people like you, who can't seem to understand why his repeated garbage spewing might engender skepticism in others, and instead put it down to them being haters or jealous or something.


[flagged]


He's an absolute fucking dumbass with rich ass parents. He's hired a lot of smart people.


You sound like one of those fan boys that keeps a picture of Elon next to their bed.

He's a con-man that has endangered lives by tacking on the word "beta" to a dangerous "full self driving" system for almost a decade to fool customers. Meanwhile others like Waymo did it the right, safe way. And wait no... now it's labeled "supervised" and "unsupervised" full self driving or some bs like that.

He's a con-man that cheats his customers by manipulating odometers (https://www.popularmechanics.com/cars/hybrid-electric/a64555...)

He's a con-man that destroyed Twitter but did some bs financial engineering with his XAI to make X suddenly valued exactly the same as when he bought it (https://finance.yahoo.com/news/elon-musk-paid-high-price-114...).

He's the con-man that remotely controlled his autonomous robots (https://www.latimes.com/business/story/2024-10-15/tesla-opti...).

Last but not least, he's the immoral idiot that fanned hate and the political divide in America by supporting Trump's claims of a stolen election. He then manipulated a democratic election by offering million-dollar bribes (or prizes, as he called them) to people who voted for Trump. https://www.cnbc.com/amp/2024/10/20/elon-musk-offers-1-milli...


Trying to prove something regarding Tesla by linking to the personal disinformation platform of Tesla's CEO makes me laugh.


Okay, point us to the data Tesla publicly releases telling us they do or don't count it.

Why would you trust a word they say when Elon has lied through his teeth at every single investor meeting for the last decade?

I say this as an owner of a Tesla myself.


Source ?


There are 50+ deaths if you include Autopilot. Source: https://www.tesladeaths.com/

Some of them were due to the use of cameras as opposed to LiDAR, for example the May 7, 2016 crash near Williston, Florida, in which a Tesla Model S operating with Autopilot struck the side of a tractor-trailer making a left turn across the car’s path. In that incident, neither the car’s camera system nor the driver detected the white side of the tractor-trailer against a brightly lit sky, which resulted in the Tesla passing under the trailer and a fatality.


As for 'Jia Lin Zheng'—it's a typical Chinese name. I can confirm that, as I am Chinese myself.


Who would doubt that this is a typical Chinese name?

I'm a layman regarding this, but this would have been my vote in a quiz :)


The question wasn't whether or not it is typical. It is whether or not it is common.

For example, in the US John Smith and Scott Baker are both typical US names, but John Smiths are way more common than Scott Bakers.


Most likely it was the same driver. I was in Arizona and someone crashed into me, and he claimed he "lost control of the vehicle", which was an old truck. I can buy that you lose steering or braking, but not steering AND braking.

He was simply looking at his phone in reality.


Zheng Jia Lin is a common name. And there are many Chinese/Taiwanese in Hawaii. Source: wife’s family is Taiwanese.

I remember this incident. It happened a couple of blocks away. Unreasonable that they let him go.


Unreasonable? Society is breaking down all around us because of this shit. MCCA


Yes, I think that people should be prosecuted for killing people with their cars. WAWA


How many people have been saved by the tech?


Probably none. Fatalities in Teslas grossly exceed those in comparably-priced vehicles, which is the only benchmark that matters. Unless your counterfactual is that Tesla would be even more dangerous without FSD, but I don't think that is a useful counterfactual.


Only 15-20% of Teslas in the US have FSD.


How many have autopilot


Tesla doesn't release the data required to assess this.

And from that information alone, you can get the gist of what that data says!

(inb4 you post the accidents per mile chart which is very obviously useless and designed to mislead midwits, as it is not controlled for age of automobile or driving conditions)


Saying the Waymo was even involved in the crash is misleading. It was a victim along with most of the other vehicles.


The good news so far is that Tesla doesn't have a robotaxi service at all, they have a plain taxi service. We'll see what happens if they ever release a self-driving car, but for now, as in the past 7-8 years, they are way behind Waymo in the self driving car arena.


Waymo had a close call with me. There were two left turn lanes, and it migrated from the inner one to the outer one in the middle of the turn, without a blinker, while I was next to it. It got lucky that I'm young and wasn't too tired, and also that it was a relatively safe place to run me off the road.


This is tough for people to get right too. Near my house we’ve got 2 left turn lanes that merge into a 3 lane road without any lane markers, and I’ve got to maintain constant vigilance for people drifting into my lane.


Are you sure that you were in the right and there were two left turn lanes? There is an intersection by me that everyone uses wrong: you get folks using it as a double turn lane when the lanes have text painted on them with the name of the road you are technically supposed to stay on. Really the city is to blame and not the driver, because it's confusing, but still. I think the lanes are all mapped manually for Waymo (but personally I don't have any inside knowledge).

Traveling south here on Land Park Dr there are two lanes; some people from the right lane veer left and some from the left lane veer right through the middle of the intersection. There aren’t dotted lines to help.

https://maps.app.goo.gl/?link=https://www.google.com/maps/@3...


Quite certain. The two left lanes can both turn left, and the middle lane can also go straight. It's well marked (though a bit tight if you try to take it fast).

https://maps.app.goo.gl/n2kUUHmDKaSURFqx9


Is the turn you're referring to the one where lane 1 (the inner left) then merges into lane 2 after the receiving block segment starts? (John Daly Blvd. onto Park Plaza Dr. southbound)

I wonder how something like that is represented to the waymo or reasoned about.


John Daly Blvd onto Skyline southbound, where the two left turn lanes continue into two highway lanes.


Camera-based guidance systems are unreliable, and the number of edge-case failure modes grows exponentially in time.

LIDAR/LADAR based systems are not perfect, but do offer mm precision for guidance systems. SLAM based LIDAR systems can be very good, but are also not perfect when forced to guess where a platform is located.

Cheers, =3


> the number of edge-case failure modes grows exponentially in time.

How so? Honestly asking.-


That is a long explanation, but generally even human binocular disparity incorrectly guesses 3D structures and distances. Our brains automatically fill in a lot of missing information that computers just can't know a priori (example: you know a dog hidden behind a car doesn't actually vanish nor remain stationary.)

Most guidance platforms would use LIDAR/SLAM to describe the local road surface, and overlay camera vision data to extrapolate distant surfaces and objects. Note that distant objects also have lower resolution, unknown non-distinctive features (a speed bump, or an open manhole cover, etc.), and increasingly sparse data, as velocity effectively lowers world-state sampling rates.

The world-state is constantly changing at every intersection, sampling constraints add latency, and the navigation way-point goals may come into contradiction with immediate path-planning due to ambiguous/expired information.
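The velocity point can be made concrete: at a fixed sensor frame rate, the distance covered between world-state samples grows linearly with speed. A toy sketch (illustrative numbers, not any particular platform's):

```python
def meters_per_sample(speed_mps, sensor_hz):
    """Distance the vehicle travels between consecutive sensor frames:
    at a fixed frame rate, spatial sampling gets coarser with speed."""
    return speed_mps / sensor_hz

# Toy numbers for a 10 Hz scanning lidar:
city = meters_per_sample(13.4, 10.0)     # ~30 mph -> about 1.34 m between scans
highway = meters_per_sample(29.0, 10.0)  # ~65 mph -> about 2.9 m between scans
```

Double the speed and you halve the number of world-state samples per meter of road, which is one reason distant, fast-approaching objects are the hard case.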

Cheers =3


So, it doesn't "grow exponentially", you just wanted to express that there are "many".

> you know a dog hidden behind a car doesn't actually vanish nor remain stationary.

Computers can do that, too. It's not that different from... guessing missing words.

I think it was two years ago that Tesla told the public that they could do that.


I wonder if - at some point - AI might not do the "filling in" of information the brain does by generation (ie. filling in the gaps) or something.-


Maybe, but first a real "AI" will need to be invented.

I suspect a Amish horse buggy is more practical. =3


>Tesla told the public that they could do that.

Lol, really? They either developed superman x-ray vision, or just tracked object occlusion with a common re-acquisition mitigation (so worthless when physical inertia carries a vehicle into an object collision.)

>So, it doesn't "grow exponentially",

The further the object... the more possible choices the guidance system will need to make. Note, guidance and navigation are related, but different problem domains. Roughly, if I recall, the possible choices (and errors) grew by:

((m cars) * (k lanes) * (r occlusions) * (s sign laws) * (1 + world_state_delta_error(t))) ^ (n intersections + w way-points) = 1/hype_correction

...but that doesn't even cover the projected future risk(t). =3


Failures in automotive programs are surely linear in time over a year across all seasons.


Failure rates are a different area, and most engineers have proven it is not linear. =3

https://en.wikipedia.org/wiki/Bathtub_curve


[flagged]


They were conflating computational feasibility with sensor device failures.

Ask questions if you don't know, as being rude is not constructive. =3


Quite rich for you to accuse others of being rude. My comparison is apt, borne of your own behavior.


He's kind of entertaining. I usually don't do this here and feel it's normally in bad taste, but he crumbles pretty spectacularly when pushed and I was curious how much material he had for me.

You should see him ascribing nephrotoxicity to a chemical from a mushroom based on the reactions of squirrels in his yard eating an entirely different mushroom. I think I really got him at his peak though just skimming through.


Try not to thread hijack, and read the papers you were sent =3

https://en.wikipedia.org/wiki/Autism


Says the guy that demanded that I cite sources for something someone else said.

You're too funny.

Edit:

The very first one on the now locked comment said he misidentified the mushroom in question.

I'll assume from there you read none of it.

Edit:

Now he edited the post to remove the first cited paper, omg.

Let's do this:

The 4th one is the one you already admitted had bad data.

At least one of the other two clearly states that the mushroom was unidentified. You will have to read or delete both.


You should take a breath and consider why you are so emotionally triggered by criticism of Tesla.

For real man. You are wound up and have left dozens of increasingly irate comments.

This is not emotionally stable behavior. Why are you so heated over criticism of a corporation and by extension a person you don’t even know?

Worth taking a moment for some self reflection, this isn’t healthy.


1. I don't like disinformation. 2. I know this subject. 3. I might know more of the person than you give credit for, too.

Would you consider someone writing and rating Community Notes for an hour or a few hours' sitting once in a while "unhealthy" too? You should steer away from academic institutions. Or Stack Overflow. They'd be jokers' asylums for you!


Human bicameral driven cars are unreliable too, but FSD is more reliable. And it has way more data behind it than any competing technology.

Using vision for driving is something that has worked for as long as cars have existed. Trying to push some "millimeter precision" solution with an unproven feature set and prohibitively expensive hardware is just asking for no real safety improvements and more lives lost.

Cheers.


It is true most humans cannot track more than 5 objects concurrently. However, lidar data sets tend to produce sparse, unambiguous point-cloud data that is a lot more feasible to process. For example, mistaking a picture or reflection of a road on the side of a bus for the road itself is not a viable failure mode with 3D data.

Cheers =3

https://www.youtube.com/watch?v=vJG698U2Mvo


Adding some stat of x things tracked on top of your stat of "millimeters" really shows the level you're arguing at here. The real world doesn't care about exact stats on some simply enumerable attributes, but about adaptability. You might see 10 bison but will instantly react to a scorpion in the middle of them in your vision.

In any case. Trying to argue against vision even if LIDAR hypothetically was better (it isn't) would just lead to more deaths, maybe at best shielding the rich driving in cities. FSD's stats don't lie :|


My point was LIDAR data removes the ambiguity of camera derived data, is not fooled by featureless area ambiguity, and it is computationally easier to handle sparse data.

Many high-end multi-beam lidars also embed things like basic bicycle and pedestrian object detection in the sensor front end. Things have improved significantly with sensors, but the risk is never 0.

The rate of death doesn't really override an expectation of product safety, and of humans understanding other humans' intent.

Have a wonderful day =3


The idea that fusing fragmented sensor results removes rather than increases ambiguity doesn't pass the smell test, and it runs exactly counter to what the leading FSD engineers have said on the issue while they were removing other non-uniform sensors, even disabling existing ones entirely in cars.

Claiming that an entirely different stack, with some third-party LIDAR tower's own processing, is somehow "beneficial" sounds like a project manager who thinks adding engineers equates to linearly faster progress.

Just no. On a slight tangent though, I recommend reading about Tesla's vertical integration. No other company has managed to implement it so deeply in automotive, which makes Tesla incomparable in some aspects where others can't adapt even if they wanted to.


Admittedly, I only have limited, out-of-date knowledge from integrating iBeo, Hokuyo, and SICK lidar models (way better models are out now.) Yet in general, the SLAM algorithms tended to use optical odometry, pose recovery, and point-cloud data, then extrapolated the projected surface using camera data within a GPU-accelerated OpenCV layer. It does require a proper global-shutter machine vision camera (high fps / no smear), but it works fairly well under ideal conditions. The winners of the DARPA Grand Challenge documented the methods in detail, and several OpenCV books cite the work as a student project.

Let me know if you have trouble finding the projects. =3


Gluing together third-party stacks like there's no tomorrow doesn't make you better at evaluating how a vision-only, uniquely hardware-accelerated, single-task, real-time ML model running right now would integrate with foreign data, especially as the actual relevant engineers have directly spoken against such things.

It might make you headstrong in believing against something whose core sensibility would be easier to see if you weren't so invested in one specific corner/angle though.

I've avoided working on a project involving LIDAR scanning before; even back then the hellishness of the hardware was a large factor. I wouldn't mind playing around with a Jetson Nano though.


>gluing together third party stacks like there's no tomorrow

That is why I don't really like ROS. lol =3

>doesn't make you better evaluating how a vision-only ML model

In general, the monocular SLAM algorithms rely on salient feature extraction, and several calibrated assumptions about the camera platform. How you interpret that output is another set of issues, as the power budget is going to take the hit.

For machine vision, I'd skip the proprietary Jetson Nano... and get a cheap gaming "parts" laptop with a broken LCD and several USB ports (RTX4090 or RTX4080 is a trophy.)

No one wants to fork over $30k for an outdoor lidar, but using only cameras is a fool's errand. The best platforms I've seen commercially use camera + lidar + radar.

For student projects, one can get small radars and TOF sensors for under $20 off sparkfun (similar to the one in iPhone Pro 11/12/13). We live in the future... =3


> FSD is more reliable

Citation needed. And not Tesla's typical lying-with-shitty-statistics data (fun fact: most car accidents do not happen in places where FSD is commonly used).

Also, vision-only systems work great… if they’re backed by strong intelligence.


So would you rather have the actual data fed to you by a different party? Airplane goes brrrr! https://insideevs.com/news/720730/tesla-autopilot-crash-data...

>fun fact: most car accidents do not happen in places where FSD is commonly used

How is that even supposed to be in any way relevant when talking exactly about cases where FSD and similar are used? Sigh.

>Also, vision-only systems work great… if they’re backed by strong intelligence.

Yes. And I do recognize that "best, with custom in-house NN hardware" might not be "strong" on every statistic one could aspire to. But it's already well above human capability, and even if you want to claim the stats are 2x, 3x, even 4x exaggerated, they'd still blow the alternative safety standard out of the water.


Only cameras? I respectfully disagree.

https://www.youtube.com/watch?v=IQJL3htsDyQ


Your previous link was of that old BBC vision test, comparing being safe from danger to being able to see obviously irrelevant details of a video.

And now you link that debunked Mark Rober video that literally doesn't even have FSD turned on, while giving the most ridiculous free wins to LIDAR. Talk about writing the tests for the exact limits of a specific system. https://www.youtube.com/watch?v=QhX_fgekpk0

You're really running out of steam :D


Kind of sus, but it is consistent with other reported incidents including at least 1 fatality with FSD.

https://www.cnn.com/2025/01/07/business/nhtsa-tesla-smart-su...

Best of luck =3


"Consistent" comparing a video with a literal billboard of a road in the middle of a road to real life? With the entire FSD feature turned off in the video by the way :D ?

And one fatality with the most dangerous general form of transportation we have?

And an article about a fancy-pants 5-mile-an-hour park-retrieval feature bending a few posts, as if it were relevant?

Dude I don't need luck, I could roll ten D12 ones in a row and win


One of the YT videos is probably incorrect, but other data under less demanding conditions already suggests FSD camera-only options don't work as advertised within cities.

I'd rather not have drivers playing dice while driving. =3


some data suggests maybe this result in maybe some condition maybe

ok


Tesla always seems to find its telemetry data when they get sued. Some of their customers are less than honorable. lol =3


Company doesn't spread all its private data all over the internet all the time

more at 5


I didn't know Teslas come with a House of Lords


Probable, based on the cyber truck build... lol =3


> Human bicameral driven cars are unreliable too, but FSD is more reliable.

You do not have the data necessary[0] to substantiate this claim.

[0] Accidents per mile controlled for at least vintage of car and driving conditions


I do have the data necessary[0] to substantiate this claim. And no, I'm not going down the rabbit hole of listening to people trying to poke pinprick holes into a safety margin bigger than a Gigafactory.

[0] https://insideevs.com/news/720730/tesla-autopilot-crash-data...


1. Autopilot is not FSD

2. Autopilot, being a typical ADAS, is used exclusively on highways and in conditions where typical ADAS systems work reliably

Weird to brag about being unwilling to apply even first order criticality to a press release but you do you.

Thank you for reaffirming that in fact you do not have the data required to substantiate your claim.


As I said, I'm not going down the rabbit hole. You go ahead and drive right into the fallen tree in the middle of the road as you so vehemently argue it's not there.


I think you should consider reasoning about reality in real terms instead of offloading to metaphors that, by design, don’t add any more information to the discussion than your own conclusions.

By the way, it’s a “binocular” system. “Bicameral” refers to a design for institutions like legislatures.


The thing about Waymo is that I suspect they're running the same ML fraud that Tesla itself, and Silicon Valley in general, is running: overfit on the 20% of situations that occur 80% of the time.

For Waymo itself, you can overfit on 100% of the situations that will be encountered; 49 square miles isn't that large. It's the real world outside that area where I'm concerned about its efficacy. I think if you put a Waymo in a small town that no Alphabet engineer has ever even heard of, then you'll see it fail badly as well.

FSD is a reinforcement learning problem, and we have no good way of training non-simulation algos for that. And a real dynamical driving environment can't be simulated accurately enough.


Waymo works in a remarkable range of situations. I took a waymo in LA and our route came through an awkward four-way intersection at the crest of a hill on a residential street. Another driver went through the stop when we had right of way, saw us and then just stopped, completely blocking our side of the road. The waymo just backed up a couple of yards and then slowly went round the wrong side and proceeded on its route. That is, in a weird situation it did exactly what a good, cautious human driver would do. Small sample, but it makes me think they are not doing what you say, they are just actually trying to approach the problem seriously rather than Tesla’s “full speed and damn the torpedoes” approach.


> if you put a Waymo in a small town that no Alphabet engineer has ever even heard of, then you'll see it fail badly as well.

Which is why it is a non-goal for Waymo. It should be a non-goal for Tesla too, given the state of the art.


It's hilarious to see Tesla fans try to act like designing for an undefined operational domain is somehow extra brilliant and not one of the stupidest fucking ideas anyone has ever come up with.


San Francisco, Phoenix and LA represent a strong diversity of driving conditions. Certainly not all driving conditions, but no one is throwing a Waymo into a small town in the way you describe. Expanding slowly and cautiously seems like the rational thing to do; I'm not clear what you are proposing as an alternative (or specifically what the alleged fraud is).


> San Francisco, Phoenix and LA represent a strong diversity of driving conditions.

This could very well be true, but if you’re looking at it from a perspective of someone who lives in a rural area with real winters, for driving purposes, those all look like pretty much equivalent large American cities without a winter.


Waymo has done winter testing in Buffalo, Tahoe, Michigan FWIW.


> I think if you put a Waymo in a small town that no Alphabet engineer has ever even heard of, then you'll see it fail badly as well.

Waymo is not claiming to work in small towns.

Tesla is. Soon™.


FSD works in small towns today. Source: small town FSD user.


Not sure where you're getting your information from.

My FSD (v13.2) has driven unmapped roads, including gravel roads, hills, narrow roads, and switchbacks, in the backwoods of Tennessee. From watching the display, it clearly identifies the road features and navigates them.


With that logic you might as well do away with all defensive armies because they've "killed so many people". Firefighters too.

FYI, FSD is safer than human drivers on large datasets. Accidents cause thousands of deaths every year. Arguing against FSD for "safety" has the Grim Reaper cackling.


It’s that Waymo is both more widespread and better. If there were no Waymo, then sure. But there is Waymo.


You do realize that Waymo, at best, is comparable to a train with an extremely limited, and expensive, rail network on a level where Robotaxi would be a 4x4 car in a roadless world?

Waymo is SLS compared to Starship. So, not comparable, and it could never fill the shoes Robotaxi has been planned to fill since the initiation of the FSD project. I.e., SLS = a few academic missions; Starship = Mars colony. Waymo is as good for safety as doing nothing, given its inability to scale.

Waymo costs as much per ride as one with a driver. Robotaxi is technologically fundamentally close to starting its shift after you arrive home and get out of your car. Earning you part of the profit btw. And with no growing pains, with FSD working on novel, untested roads.


There is no cost pressure on Waymo to reduce rates below Uber or Lyft as they are much smaller, they are price takers not price makers in the market. Lowering prices far below your competitors when you are supply constrained for the time being would just be lighting money on fire.

Mapping is expensive, but not really in a per-mile-driven basis. There are 4M miles of public roads which get over 3.2T miles of total driving, or ~800k vehicle-miles per road-mile. You could have pretty high mapping costs per mile and still have very low per-mile-traveled costs for mapping. And there's every reason to think the cost of mapping and updating maps on a per-road-mile basis will go down over time, not up.
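Spelling out the arithmetic in the paragraph above (the $100/road-mile mapping budget is a made-up illustrative figure, not a real cost estimate):

```python
# Rough, commonly cited US totals, as used above:
road_miles = 4_000_000           # ~4M miles of public roads
annual_vehicle_miles = 3.2e12    # ~3.2T vehicle-miles traveled per year

vehicle_miles_per_road_mile = annual_vehicle_miles / road_miles

# Hypothetical mapping budget: even $100 per road-mile per year
# amortizes to a tiny cost per vehicle-mile traveled.
cost_per_road_mile = 100.0
cost_per_vehicle_mile = cost_per_road_mile / vehicle_miles_per_road_mile

print(vehicle_miles_per_road_mile)  # 800000.0
print(cost_per_vehicle_mile)        # 0.000125
```

That is ~800k vehicle-miles per road-mile, so even generous mapping costs dilute to a fraction of a cent per mile traveled.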

Waymo is scaling pretty rapidly, and the rate of expansion is accelerating. They've been proving out the technology, and are only now starting to commission special purpose vehicles as they move out of the research phase and into deployment.

Perhaps Tesla will catch up, but for now we know Waymos are at least ~6x safer than humans in diverse independent conditions, while FSD, according to publicly available information, is 50-100x less safe (with critical interventions every few hundred miles).


And with no [hypothetical] growing pains [not including the obvious practical growing pains we are all witnessing with our eyes currently], with FSD working on novel, untested roads [plus or minus a bunch of Teslas driving around with camera arrays mounted in specific, limited areas, but besides that totally novel untested roads definitely work].


FSD has simply proven that untested roads work; you cannot argue against that.

>Growing pains

Robotaxi has been in testing for less than maybe 1/50 of the time Waymo has been out, and has already surpassed Waymo's coverage in their starting city once.

You know who also had growing pains? Hulk. Growing that quick.

Elon can literally draw a dick and balls on top of Waymo's long-amassed support area. Even if they want to check these starting areas a bit better with some basic mapping setups in advance, it's obvious their stack isn't hindered by a requirement for hard, slow HD mapping and cars that look like they're growing mushrooms with the ugly LIDAR sensors on them.


Tesla is hindered by the fact that there are human drivers in every single "robo-taxi"


Yeah the appearance of the 2 ton rolling robot is the important part


Oh okay it’s like Hulk. Got it.



