And please keep in mind that these are pretty much optimal conditions for FSD: great weather, wide streets, everything as "car friendly" as it gets. I'd like to see FSD navigate an old Italian city; there's just no way. Even in car-friendly Germany you very often have streets with side parking that are too narrow for two cars to pass: you need to communicate with oncoming drivers, for instance by waving or flashing your headlights, swerve into free spaces on the side to let other cars through, watch out for cyclists overtaking you on both sides, etc. And that's not even mentioning the weather... So this robotaxi thing might eventually work in some US cities, but pretty much nowhere else.
I see a good number of bikes in the video but mostly no moving cars, and in the few instances of an oncoming car, IIRC the other car was yielding; the self-driving car just held its path, maybe slowed a bit. No cutting, no weaving, no squeezing, no nothing. Not the kind of “dense traffic” I have in mind. I would call it light-to-no traffic on poor roads.
It already fails in Manhattan. I know that by US standards, the traffic in Manhattan is already considered chaos, but it's usually pretty tame compared to other countries.
There are these videos by this HUGE Tesla fanboy, just look here
where at 8:20 it fails to properly deal with a cop car and the fanboy complains that the cop did not obey traffic laws (good luck arguing that when you make him do an emergency stop...).
FSD consistently has problems with pedestrians in NY, who of course will just walk also on a red light if you hesitate during turning. US people might consider this "chaos", but it's perfectly normal behavior in many countries, where a red light for pedestrians is more a suggestion than a rule...
> where at 8:20 it fails to properly deal with a cop car and the fanboy complains that the cop did not obey traffic laws
More specifically, the cop car had its lights on and emergency siren going, so presumably was trying to respond to a call, in which case it is the responsibility of other vehicles to get out of the cop's way.
> US people might consider this "chaos", but it's perfectly normal behavior in many countries, where a red light for pedestrians is more a suggestion than a rule...
Also worth noting that "jaywalking" is not a thing in many jurisdictions:
About other countries:
Well, the pedestrian is always bargaining with their physical integrity. The pedestrian must damn well always be in the right unless they are definitely suicidal (not rhetorical).
> the fanboy complains that the cop did not obey traffic laws
Emergency responder (fire/paramedic) here. EVIP (Emergency Vehicle Incident Prevention) is a course that in most states, responders are required to take to operate a vehicle in "emergency mode" (i.e. lights, sirens, both). In some states it also serves as a replacement/alternate for requiring a CDL for larger vehicles.
In "emergency mode", the vehicle, based on most state's laws or administrative codes, "may ignore any and all rules of the road, including but not limited to, speed limits, traffic control signals, designated lane markings, one way directionality". Quite simply, on the road, in emergency mode there is no "obeying traffic laws"...
BUT...
What the words give you, the fine print often takes away. While in emergency mode, the driver of the emergency vehicle is -presumptively liable- for ANY incident that occurs, unless it can be demonstrated otherwise (and even then there is still likely to be contributory liability). This liability is personal and organizational (i.e. I can be personally sued if I hit someone in my ambulance, even going code 3).
To mitigate this, in addition to the department/organization's own insurance coverage (often self-insured by the appropriate government entity), the department will draw up its own policy that applies intermediate limits (i.e. while the admin code may allow me to drive at any speed I like, our department policy says I may exceed the speed limit by no more than X mph, and X is reduced by 5 for every complicating factor, like night or weather). The department will then take out a separate insurance policy that says that as long as I am within department policy rules, they will cover any personal liability that may occur.
>While in emergency mode, the driver of the emergency vehicle is -presumptively liable- for ANY incident that occurs, unless it can be demonstrated otherwise (and even then there is still likely to be contributory liability). This liability is personal and organizational (i.e. I can be personally sued if I hit someone in my ambulance, even going code 3).
I think you are mistaken. Please cite your sources.
Drivers of emergency vehicles must exercise due regard, but as long as they meet that standard, AFAIK they are not liable for accidents they cause while operating with lights and siren, when lights and siren are warranted.
For example, CA Vehicle Code section 17004 provides: “A public employee is not liable for civil damages on account of personal injury to or death of any person or damage to property resulting from the operation, in the line of duty, of an authorized emergency vehicle while responding to an emergency call or when in the immediate pursuit of an actual or suspected violator of the law, or when responding to but not upon returning from a fire alarm or other emergency call.”
There is an exception to the rule: California Vehicle Code 21056 states that CVC 17004 and CVC 21055, “does not relieve the driver of a vehicle from the duty to drive with due regard for the safety of all persons using the highway, nor protect him from the consequences of an arbitrary exercise of the privileges granted in that section.”
The law may be different in other states, but I doubt that your statement of the law is true in any U.S. state. Very curious to learn if I am wrong about that.
In your state, 21056 basically takes away all of the protections, when you read it as such.
"You may operate a vehicle outside of normal traffic law"... "not relieved from a duty to drive with due regard for the safety", means that if a court rules that your driving didn't meet "due regard" standards, well, now you are "not protected from the consequences".
While I am not an expert personally, in Washington, all EVIP courses must be approved and accredited by the Washington State Patrol.
RCW 46.61.035 states much similar:
> (4) The foregoing provisions shall not relieve the driver of an authorized emergency vehicle from the duty to drive with due regard for the safety of all persons, nor shall such provisions protect the driver from the consequences of his or her reckless disregard for the safety of others.
There is also no specification in the RCW regarding "no liability".
You might think 'reckless' is a good escape clause. "Just don't drive recklessly". Problem is even if you slow from 60mph to 20mph (school zone speed) to go through a red light... going through a red light at 20mph would generally be considered a reckless act. (Most departments will typically expect that you come to a complete stop until you are able to "clear" the intersection visually.)
Coziness of cops to DAs and prosecutors, most likely:
> King County Prosecutor's Office decided against filing criminal charges against Dave. Manion stated that the criminal review did not turn up enough evidence to prove beyond a reasonable doubt that Dave had acted with recklessness or criminal negligence, as required under Washington State law. The decision drew widespread criticism from communities in India and the US.
That is the criminal side. To the earlier point on liability or protection therefrom:
> The Kandula family declared their intent to pursue civil actions against Dave and the Seattle Police Department.
Yes, the way humans negotiate with each other in a split second, just through eye contact, the way they brake, or one minor gesture of a hand, is astonishing. People on the AI hype train should literally sit down and write out how much complex problem solving through communication happens when two drivers engage non-verbally.
Not only is self-driving a mess at just the road level in a city like Delhi or Rome; once these cars reach numbers where communication with humans becomes vital, because they're no longer just a blip in the traffic, you're in for a whole other hell of problems to solve.
My neighborhood has the effectively one-way street thing going on. FSD would be a total clown show in here. Tesla would have to add some way to determine the velocity of the oncoming traffic around a 90 degree bend while dealing with extremely limited visibility due to seasonal foliage. They would also have to add a way to factor in the historical behavior of these cars over time. For example, I know the guy who drives the blue Tundra is kind of pokey, so I speed up a bit to clear the contention first. The more compelling scenario might be the one where I know the white Mercedes SUV has an extremely aggressive driver, so I'll prefer to wait on them to preserve the peace.
Yes, what is also important is that in many cities you often need to drive with a bit of... let's say "confidence", otherwise you will never leave your current spot. When you try to enter a busy street, you sometimes JUST NEED TO GO, trusting the oncoming traffic will slow down if needed. Sometimes a nice person will flash headlights to let you know it's OK, but I'm pretty sure FSD will not be able to notice that...
Also, even in rule-abiding Germany, some traffic laws are treated more loosely than others. For instance, the law says that at a stop sign you need to come to a full stop, but you will see that most drivers don't do that and instead just drive very slowly (if that...). Likewise, the law says to keep 1.5 m of distance from bikes and motorcycles, but in the above-mentioned narrow streets that would also often mean not moving at all for a long time. I would guess that FSD would need to abide by traffic laws just for regulatory purposes, and that would make you look like an idiot in many cities...
That's not even to speak of countries where traffic laws in general are more like suggestions...
> in many cities you often need to drive with a bit of... let's say "confidence"
Two things I told my step-daughter when she was learning to drive. One is more common: "it is better to be predictable than polite" (no stopping when you have the right of way to 'wave someone through'). But more germane to your point:
"You should drive assertively, but not aggressively."
FSD already understands some human gestures, and it learns this stuff by seeing how human drivers do things. Understanding a headlight flash doesn't seem like a big ask. And recently, the biggest complaint from FSD users was that it drove a little too aggressively.
Because it copies human drivers, FSD actually was doing slow rolling stops for a while. Then people freaked out and pretended that was super unsafe, and the government made them do full stops.
I am not going to trust a super expensive car, and the life and health of me and my family, to an "it's generally working OK, sort of" level of performance.
This ain't new headphones or a TV; people other than emotional early adopters have very different expectations when their lives are at stake. Namely, absolutely stellar performance compared to the average driver; being marginally better while struggling with tons of non-standard situations ain't cutting it.
Neither of those things were "learned adaptations" from human driving, and this is one of the biggest fallacies around FSD. People letting their FSD do stupid things and only intervening at the last moment (if at all) to let it learn from its mistakes.
That it might use the direction of the car in front of it as a guidance track doesn't mean it understood the human gestures of the cop telling it to do that. Or in a concert parking lot where an attendant might be doing things like alternating cars to the left and right lots.
> Because it copies human drivers, FSD actually was doing slow rolling stops for a while.
No, this was programmed behavior, with an interface/config setting, to do a "rolling stop". People "freaked out" because Tesla was literally allowing the car to perform illegal traffic infractions, and if they'd do it for stop signs, what else would they do it for?
But none of this is some Tesla "swarm" learning to do rolling stops. There is no adaptive learning happening. This is all trained from static models according to parameters from Tesla.
Version 11 had lots of hand-coded behavior. Version 12 is entirely a neural network, trained on human driving. Partly it comes from people running FSD and intervening sometimes, and partly it comes from just passive observation of people doing their own driving. When FSD runs, video feeds into the neural net and it outputs vehicle controls, that's it.
Learning to respond to gestures is just more training on video and car control data. It shouldn't be hard to believe given all the other things we're doing these days with large neural networks.
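To make "video in, controls out" concrete, here's a toy sketch of what an end-to-end imitation-learning setup looks like. This is purely illustrative, not Tesla's actual architecture; every shape, name, and number below is invented for the example:

```python
import torch
import torch.nn as nn

# Toy end-to-end driving policy: a clip of frames in, control outputs out.
class ToyDrivingPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(           # encode (batch, rgb, time, h, w)
            nn.Conv3d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 3)            # steering, throttle, brake

    def forward(self, clip):
        return self.head(self.encoder(clip))

policy = ToyDrivingPolicy()
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)

# One imitation-learning step: match the controls a human driver applied.
clip = torch.randn(4, 3, 8, 96, 96)             # 4 clips of 8 RGB frames each
human_controls = torch.randn(4, 3)              # recorded steering/throttle/brake
optimizer.zero_grad()
loss = nn.functional.mse_loss(policy(clip), human_controls)
loss.backward()
optimizer.step()
```

In a setup like this, responding to a headlight flash would just mean those flashes show up in the training clips alongside whatever the human driver did next.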
But that's exactly the point: there's no general rule for this kind of thing, it depends on the concrete situation. There are definitely stop signs where you need to do a full stop because you cannot see any oncoming traffic. Basically: it's OK to do rolling stops until it isn't.
Same goes for flashing headlights, these can mean different things, depending on the situation. It can mean: I see you, feel free to drive. It can also mean: what the hell are you doing? Or: you left your laptop on the car roof.
Learning that sort of context is what neural nets are good at. The whole point is that none of this is hardcoded; the AI just learns what humans do in similar situations. At this point they're not even hand-labeling anything. The only input is video.
Flashing headlights can be ambiguous for humans too. There have been plenty of times when someone flashed lights at me and I had no idea why.
Imagine you sit in a self-driving taxi and need to get somewhere, and the longer the ride takes, the higher the price. Then imagine the taxi not really getting anywhere fast.
The thing I've been most amazed at about FSD is that there hasn't been any sort of lawsuit around FSD refunds.
There are people who paid thousands to tens-of-thousands of dollars for a promise that their car would eventually have "full self driving".
A not-insignificant portion of those people have lost any chance at actually getting FSD, whether by their car being totaled in an accident, or having sold it, or such, without ever seeing working FSD.
As far as I know there's no way to get a refund, and people very obviously didn't get "full self driving" as advertised by Elon, so it really does seem like some people paid for a promise that turned out to be nothing... which sounds like lawsuit material to me.
There have also been multiple public promises that FSD would be delivered in a matter of years (like in 2016, the promise that "by the end of next year, FSD will take you across the country safely while you sleep").
If anyone bought FSD due to believing those promised timelines, that also seems like it would be a pretty strong case for a refund to me.
There have been lawsuits. I'm unable to find the actual documents, but this article [0] reports on it and has some unbelievable quotes in it.
> Lin rejected Tesla's argument that LoSavio should have known earlier. "Although Tesla contends that it should have been obvious to LoSavio that his car needed lidar to self-drive and that his car did not have it, LoSavio plausibly alleges that he reasonably believed Tesla's claims that it could achieve self-driving with the car's existing hardware and that, if he diligently brought his car in for the required updates, the car would soon achieve the promised results," Lin wrote.
> Coca-Cola dismissed the allegations as "ridiculous," on the grounds that "no consumer could reasonably be misled into thinking Vitaminwater was a healthy beverage"
I have FSD v12 and for my drives, in my part of the country, I have to do a mild safety intervention maybe once a week (like, getting closer to another car than I'm comfortable with)
sorry other people are having different experiences, but FSD is a significant quality of life improvement for me. Nothing is nicer than getting in the car after a long day and letting it chauffeur me home. (yes I still pay attention)
Still paying attention? With a "chauffeur" you don't have to; that's the point. Isn't that obvious??
Also you’re not personally responsible if your chauffeur has an accident.
The Elon miracle, he scams you, but you still defend him and pretend like you’re happy with the scam. "In less than a year you’ll be able to get from New York to San Francisco during your sleep". That was in 2016. 8 years later, he says "robotaxi" and people still believe him.
People who drive often find basic driving "automatic"; if you still need to be vigilant, maybe even hypervigilant because you're not personally driving the car, what's the advantage?
I don't dispute that people get used to driving, but I think arguing that "driving" requires the same amount of effort as "not driving" is a more specious claim
I have FSD and totally agree with you. I use it about 90% of the time, feel less stressed and more safe. I feel like I have a good idea and where it might have an issue and am super vigilant in those situations but way more relaxed on those same stretches of road it's driven perfectly 100 times already. It's "nice to have" in normal driving conditions and f-ing wonderful when there's traffic.
> I think arguing that "driving" requires the same amount of effort as "not driving" is a more specious claim
It's really not, the human brain is _really_ good at abstracting.
In pretty much exactly the same way you don't have to pay attention to balancing to walk, your brain learns to do the same thing for cycling and driving. It abstracts the mechanics of the car (e.g. turning the wheel to turn left/right) and sort of extends its view of your "body" to the car.
This isn't an instantaneous process of course, it takes time to build up that abstraction, but it absolutely exists. If your brain wasn't good at doing those types of abstractions then walking/running/typing/speaking/breathing/etc. would take a phenomenal amount of attention and practice.
Different people have different abilities, just like with literally everything else.
For some people it’s easier to zone out and drive on auto, for others its easier to use FSD. Some people find it easy to zone out in bumper to bumper traffic, others not so much.
> I have to do a mild safety intervention maybe once a week .... (yes I still pay attention)
Except by your own admission you're not "not driving." You're monitoring the car, except without any physical feedback loop with your hands and feet. The only difference in mental load is the physical control of your limbs to do minor speed and direction adjustments. YMMV but I just find that less taxing than vigilantly monitoring a piece of software because I know how that sausage is made.
You don’t have to be hypervigilant. And in particular you don’t have to pay attention to basic lane keeping, which frees your attention to scanning ahead for potential issues. Not sure if or when it’ll ever be truly autonomous but it’s still a huge value as is. Maybe not $12k like they tried for a while, but significant
Why do people keep bringing up this incident that happened in the very early days of the system when it wasn't even called FSD? It isn't representative.
Please stop using intentionally falsified statistics by the manufacturer to push product. NHTSA has already stated that Tesla numbers do not even make basic attempts to make the numbers comparable.
Tesla only counts crashes with pyrotechnic deployments. NHTSA has stated this only accounts for ~18% of crashes on average [1] which can be derived from publicly available datasets. No competent statistician or scientist would miss a literal 5x underestimation that is frequently mentioned by laypeople as a source of uncompensated bias and that is easily derivable from well-known public datasets. They make no attempt to account for other less easily computable or subtle forms of bias before blasting it at the top of their lungs to convince customers to risk their lives.
That is intentional falsification meant to push product and has no place in civil society.
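Roughly, the correction being argued for looks like this. The Tesla-reported rate here is a made-up placeholder; only the 18% airbag-deployment share comes from the NHTSA excerpt quoted below:

```python
# Sketch of the bias correction: Tesla's telematics mostly see crashes with
# airbag (pyrotechnic) deployment, which NHTSA says are ~18% of
# police-reported crashes.
reported_rate = 0.20   # hypothetical Tesla-reported crashes per million miles (airbag-only)
airbag_share = 0.18    # NHTSA FARS/CRSS figure quoted below
estimated_rate = reported_rate / airbag_share
print(f"correction factor ~{1 / airbag_share:.1f}x -> ~{estimated_rate:.2f} crashes per million miles")
```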
"Gaps in Tesla’s telematic data create uncertainty regarding the actual rate at which vehicles
operating with Autopilot engaged are involved in crashes. Tesla is not aware of every crash involving
Autopilot even for severe crashes because of gaps in telematic reporting. Tesla receives telematic
data from its vehicles, when appropriate cellular connectivity exists and the antenna is not damaged
during a crash, that support both crash notification and aggregation of fleet vehicle mileage. Tesla
largely receives data for crashes only with pyrotechnic deployment,2 which are a minority of police
reported crashes.3 A review of NHTSA’s 2021 FARS and Crash Report Sampling System (CRSS) finds
that only 18 percent of police-reported crashes include airbag deployments."
"ODI uses all sources of crash data, including crash telematics data, when identifying crashes that
warrant additional follow-up or investigation. ODI’s review uncovered crashes for which Autopilot
was engaged that Tesla was not notified of via telematics. Prior to the recall, Tesla vehicles with
Autopilot engaged had a pattern of frontal plane crashes that would have been avoidable by
attentive drivers, which appropriately resulted in a safety defect finding.
Peer Comparison
Data gathered from peer IR letters helped ODI document the state of the L2 market in the United
States, as well as each manufacturer’s approach to the development, design choices, deployment,
and improvement of its systems. A comparison of Tesla’s design choices to those of L2 peers identified
Tesla as an industry outlier in its approach to L2 technology by mismatching a weak driver engagement
system with Autopilot’s permissive operating capabilities. "
This pretty much jibes with my experience. Autopilot still has phantom braking, but navigate-on-autopilot feels more predictable on the freeway and more useful than FSD. The fact that v12 has “exited beta” and is now the “Supervised” mode doesn’t give me confidence right now. The iterative progress during the beta was really good — I felt that each point release was getting closer to a polished system that could be trusted. The v12 stack supposedly replaced all of the prior work with “photons in, photons out” straight neural networks, but I think this simply didn’t result in the stability the system needs. The v12 software behaves pathologically in situations where Autopilot is excellent.
As a concrete example, it routinely switches out of the current lane to “follow route” and then immediately slams on the brakes to slow down, when no turn or exit is present. Then it turns on the signal and dutifully tries to get back into the previous lane. On screen, it says it wants to change into a faster lane. This happens multiple times on the same road. This is on CA-57 northbound in SoCal, an area where I would expect there to be pretty good testing.
Personally, AP / navigate-on-autopilot is still superior. On city streets, sure, FSD can sort-of manage. But it isn’t trustworthy enough for me to use it.
FSD 12 is not used on freeways. City streets only. You’re observing behavior of the old stack if you’re seeing notices for lane change reasoning.
They’ll be unifying it later this year.
I'd have expected things like that to roll out on freeways first and later come to city streets, because freeways should be an easier case since you don't have oncoming traffic, cross traffic, pedestrians, bicycles, and traffic lights to deal with.
Ah yes, of course - we just have to wait for the next version. The response seen in the Twitter replies, YouTube comments and now HN comments to descriptions of FSD foulups for the best part of a decade now.
So far, I'm not sure what to make of it. The chart looks somewhat exponential. But since they brought more and more cars onto the road, one would expect exponential growth in this chart even if no additional usage was caused by software improvements. So one would have to un-cumulate the chart and then divide the y-axis by the number of cars on the road to get usage per car.
And then one would have to factor in price changes.
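Something like this, with made-up numbers, is the normalization I mean:

```python
# Turn a cumulative-miles chart into per-car miles per quarter.
# All numbers are hypothetical placeholders, not real Tesla figures.
cumulative_miles = [100, 180, 300, 520, 900]   # millions of cumulative FSD miles
fleet_size       = [1.0, 1.3, 1.7, 2.2, 2.8]   # millions of FSD-capable cars

miles_per_quarter = [b - a for a, b in zip(cumulative_miles, cumulative_miles[1:])]
miles_per_car = [m / f for m, f in zip(miles_per_quarter, fleet_size[1:])]
print(miles_per_car)  # this, not the cumulative curve, shows whether per-car usage grew
```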
I think the spike at the right is from the free trial they pushed out shortly after the release of FSD 12. The way I read the chart, the release of FSD 12 itself has not caused an increase in usage.
Does anybody know what caused the increase in growth in March 2023?
Around then they rolled out a change that I hate, which was to remove cruise control and replace it with self-driving. Previously “one tap” was cruise, two was self-drive. Now it’s right into self-drive. I assume that increased hours significantly.
They just changed the assignments of what tapping the right stalk once vs twice does -- before, once brought you into cruise control and twice into autopilot. After the change, that order is reversed by default, but you can change that from the settings.
So to get a real idea of how things are going, we would also need a graph of total cruise miles (for FSD-enabled Teslas). But this change was definitely done in order to get more easy miles for FSD testing, not for UX.
It’s also not an accident that the trial went live shortly before the Q2 numbers. Eager to see how the graph looks in the next report (or whether it is omitted completely).
I use FSD a lot and generally like it and feel less stressed and safer. Since I mostly drive the same roads a lot, I know which roads and intersections FSD can handle correctly and which ones it can't, so I use it where it works and don't use it where it doesn't. I've personally had a pretty big increase in use over the last year because it's just plain getting so much better. It works in places now where it didn't before.
I wonder if those Vegas Loop Teslas use FSD? Or if the Teslas being tested in parking lots count as FSD miles. Didn't Tesla game their range numbers? Didn't Musk say "we could've gamed our demos, but we actually didn't!" for their city-to-city demo? Also notable that they released a 30-day free trial, so it makes sense to see an uptick.
Musk claims that Tesla will announce fully autonomous driverless taxi mode on August 8th.
That's "announce", not "ship". If this were anywhere near close to happening, there would be test vehicles all over the place, like Waymo and Cruise. There would be press reports.
In reality, Tesla has an autonomous vehicle test license in California from the DMV and reports zero miles driven.
They should teach Tesla’s “autopilot” (and its FSD upgrade) in business schools. Turns out you can sustainably push up company valuation on vapourware. You have to wonder if Tesla’s autonomous driving technology was actually ever meant to turn into a product. Or whether it is mostly a tool to justify the lofty Tesla stock price. I very much doubt that it is technologically ahead of its competitors.
"ODI completed an extensive body of work via PE21020 and EA22002, which showed evidence that Tesla’s weak driver engagement system was not appropriate for Autopilot’s permissive operating capabilities. This mismatch resulted in a critical safety gap between drivers’ expectations of the L2 system’s operating
capabilities and the system’s true capabilities. This gap led to foreseeable misuse and avoidable crashes. During EA220002, ODI identified at least 13 crashes involving one or more fatalities and many more involving serious injuries, in which foreseeable driver misuse of the system played an apparent role. ODI’s analysis conducted during this investigation, which aligns with Tesla’s conclusion in its Defect Information Report, indicated that in certain circumstances, Autopilot’s system controls and warnings were insufficient for a driver assistance system that requires constant supervision by a human driver."
Ford’s Blue Cruise is hands-off on mapped highways. Waymo is driverless. Musk is really getting away with all the hype for what Tesla cruise control delivers.
Ford's Blue Cruise is not hands-off on mapped highways, only on portions of some highways where curves are shallow. And it can suddenly demand you take over, in a failure mode where it completely disables itself, forcing you to respond in a split second.
Tesla's most recent version of FSD (which is released to a limited number of non-employee testers so far) uses only eye tracking for driver monitoring and does not require the user to touch the steering wheel as long as they are looking forward.
For Tesla yes, because it can't be trusted. Allowing you to not touch the wheel while still expecting you to jump in at any time isn't an improvement by any measure.
No remote watcher can be expected to avoid a crash in real time.
Waymo is trusted to behave safely without supervision, but of course they monitor everything to validate and improve.
One interesting bit is that recently Nvidia's CEO said Tesla is ahead of the other companies in the space. My opinion is also that Tesla isn't, but then we have a conundrum. Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?
> “Tesla is far ahead in self-driving cars,” Huang said in an exclusive interview with Yahoo Finance.
Nvidia's CEO said that a day after Elon raised a multi-billion-dollar capital round, all spent on a 100,000-GPU Nvidia cluster for another of his companies, so it could easily just be flattery.
> Is Nvidia's CEO just saying this because Waymo doesn't buy GPUs from them?
Waymo works in limited places and relies on a ridiculous number of sensors. Have you seen one in real life with all its equipment? Surely that makes it less advanced, or at least less ambitious. Assuming something malicious about Nvidia’s CEO seems like a big leap.
Once you add sensors, you can't promise buyers that their car with fewer sensors will be just as self-driving as the car with more sensors. So adding sensors is a big deal for Tesla, because they are in the market of selling a promise of self-driving.
One is literally able to drive itself with no human. The other is advanced cruise control. This is like saying a regular plane is better than a fighter jet because it has fewer sensors and both can fly.
When the dust settles, it will certainly be taught in business schools. And Musk will be in prison (not for FSD specifically).
I watched the shareholders meeting yesterday - it was amazing. Elon repeated all the same things he's been telling us for at least the past 5 years, none of which is close to becoming a reality. And none was described in any tangible detail - all very vague promises.
As for FSD, autonomy and Robotaxis, one has to remember when it was announced and promoted - when Tesla was close to bankruptcy (per Elon himself).
As GME, that Trump social thing, et al. have already shown a couple of times, fundamentals don't matter. Tesla is held by folks who either don't care (ETFs, institutions, hedge funds, blablabla) or Elondong lovers.
I'm not necessarily disagreeing, but, given enough capital, a lot of wild sounding things can become real. Hype is a great tool to attract capital.
Clearly Musk understands this very well and plays that game expertly.
It's really not necessary for all his promises to come true, as long as he can point to a track record of having made some of those wild things come true. So far, that's working.
Sure, I mean Amazon hasn't paid dividends either, yet it's a good investment. So there are many ways to value a stock. And as Jim Simons showed, the usual traders miss quite a lot.
While I agree that Tesla is nowhere close to having an actually autonomous driving system, I think that Tesla did invest more into research and probably collected more data than anyone else on the market. This amount of research has to have some results, even if they don't have a product yet.
Yep, because if you want something bad enough, and if it’s clearly possible, enough research will get us there! Except: commercially viable fusion, quantum computers, hyper loops, AGI, interstellar space travel. Hmmm.
That’s the problem with research; much of it turns out to be a dead-end, or exponentially more difficult as you approach the goal. FSD looked extremely likely there for a time, but I think the problem was actually AGI in disguise.
Machine-learning of any kind has this uncanny ability to get you really far with very little work, which gives this illusion of rapid progress. I remember watching George Hotz' first demo of his self-driving thing, it's absolutely nuts how much he was able to do himself with so little. Sure, it drove like a drunk toddler, but it drove!
And that tricks you into thinking that the hard parts are done, and you just need to polish the thing, fill in the last few cases, and you're done!
Except, the work needed to go from 90% there to 91% there is astronomically higher than the work needed to go from 0% to 90%. And the work needed from 91% to 92% is even higher. Partly because the complexity of the corner cases increase exponentially, and partly because everyone involved doesn't actually know how the model works. It's been hilarious watching Tesla flail at this, because every new release that promises the moon always has these weird regressions in unrelated areas.
My favourite example of complexity is that drivers need to follow not only road signs and traffic lights, they also need to follow hand signals from certain people. Police officers, for example, can use hand signals to direct traffic, and it's illegal not to follow those. I can see a self-driving system recognizing hand signals and steering the car accordingly, but suddenly you get a much harder problem: How can the car know the difference between lawful hand signals, and some dude in a Halloween police uniform waving his hands?
You want to drive autonomously coast to coast? Cool, now the car needs to know how to correctly identify local police officers, highway patrol officers, state police officers, and county sheriffs, depending on the car's location.
Park rangers, all the fire departments, normal people who try temporarily route traffic around something unusual like a crash, animals, hazardous conditions.
And it has to detect when someone is doing a prank, or is just a homeless guy yelling and waving his fist at cars, etc.
One of the original overpromises from Musk was that you could definitely totally summon your car from NY to LA and it would magically drive all the way, next year, for sure.
Yeah, because if it understands hand gestures, it totally won't be used by criminals, directing it to a chop shop where they can disable it and cut it to pieces. What are you gonna do as the owner?
It already exists in Waymo. It obviously has a limited ODD but it absolutely works and easily passes “closely resembling FSD” for most real use cases (I.e. getting to work, school, and the store and back)
>... the 65-year-old divorced mother of three is a devout Christian who starts every day by reading the Bible while her coffee brews, and who relies on her faith during testing moments, such as the many market upheavals...
It's weird how people keep calling this vaporware when it actually works, is active on roads, and is used by tens of thousands of people. That's the strangest usage of the term I've ever seen. Vaporware is the term used to describe products announced that never make it to market in any form.
*: The definition of "work" includes veering into, incl. but not limited to, other vehicles, road shoulders or road dividers, sometimes impaling the car, incl. but not limited to its driver, on road railings or other roadside objects. The car might catch fire as a result of, or independent of, the event if its feelings are hurt, or just because it feels like it, and burn for days, releasing its densely packed magic smoke, sweat, blood vapor and condensed tears of its designers and builders. The fumes might be toxic. Please don't inhale them.
Most dangerous way to travel, full stop. FSD or not. I don't think a perfect safety record is possible. Only better than what people currently accomplish given the inherent unsafety of the whole system. If safety were a top priority, the cars would be on rails.
> Did someone give you the impression that cars without FSD have ever been safe?
Did I say anything resembling or implying that? I don't think so.
> Most dangerous way to travel, full stop.
I love a quote from a famous driver, paraphrasing: "Racing is some people knowing what they're doing driving in a closed circuit. Traffic is the same, but with people who don't know what they're doing".
On top of that, I've had enough incidents to know what humans can do in traffic. They make good stories though.
> I don't think a perfect safety record is possible.
Me neither.
> Only better than what people currently accomplish given the inherent unsafety of the whole system.
I think cars with driver monitoring are safer than cars with FSD or hands-free driving. I love driving cars with lane keeping, adaptive cruise and driver monitoring, because these systems improve safety and augment the human at the same time.
I don't believe that AI and/or computer vision is close to matching human perception and reasoning well enough to handle a 2-ton steel box the way humans do. Augmenting humans' capabilities is a far safer and more reliable (if less sexy) way.
> the cars would be on rails.
I love trains to death, but they're not perfect either.
Fake it til you make it is a fundamental principle of startups. We just don’t usually see it at such a vast scale.
There’s a timeline where Theranos was acquired for 9b by UnitedHealth if they could keep the grift alive juuust a bit longer and Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.
Tesla has even more and deeper financial and branding defense mechanisms. That said, the clock is ticking, now, I think
> Elizabeth Holmes ascends to the tech firmament permanently while her enablers congratulate each other.
Holmes and at least some of her supporters still ardently insist, to this day, now that everything is out of the bag, the "pulling filing cabinets in front of doors to specific labs on FDA inspection days so they only see the labs we want them to" crap, all of it, that she, and humanity, have been robbed of the truly magnificent biomedical advances that Theranos was just about to solve.
FSD is like ChatGPT: it works in many cases, it makes some mistakes, but it is certainly not “useless”. It won’t replace full-time humans yet (the same way that ChatGPT does not replace a developer) but can still work in some scenarios.
To the investor, ChatGPT is sold as “AGI is just round the corner”.
But "works in limited cases" is absolutely not enough, given what it promises. It drove into static objects a couple of times, killing people. Recent videos still show behavior like speeding through stop signs: https://www.youtube.com/watch?v=MGOo06xzCeU&t=990s
Meaning that it's really not reliable enough to take your hands off the wheel.
Waymo shows that it is possible, with today's technology, to do much much better.
It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
What they do claim is that with human supervision, it lowers the accident rate to one per 5.5 million miles, which is a lot better than the overall accident rate for all cars on the road. And unlike Waymo, it works everywhere. That's worthwhile even if it never improves from here.
Fwiw you can take your hands off the wheel now, you just have to watch the road. They got rid of the "steering wheel nag" with the latest version.
Well the recent NHTSA report [1] shows Tesla intentionally falsified those statistics, so we can assume Tesla-derived statements are intentionally deceptive until proven otherwise.
Tesla only counts pyrotechnic deployments for their own numbers, which NHTSA states cover only ~18% of all crashes, a figure derivable from publicly available datasets. Tesla chooses not to account for a literal 5x discrepancy derivable from publicly available data. They make no attempt to account for anything more complex or subtle. No competent member of the field would make errors that basic except to distort the conclusions.
The usage of falsified statistics to aggressively push product to the risk of their customers makes it clear that their numbers should not only be ignored, but assumed to be malicious.
> It's not enough for robotaxis yet, and Tesla doesn't claim that it is. They just think they'll get there.
"By 2019 it will be financially irresponsible not to own a Tesla, as you will be able to earn $30K a year by utilizing it as a robotaxi as you sleep."
This was always horseshit, and still is:
If each Tesla could earn $30K profit a year just ferrying people around (and we'd assume more, in this scenario, because it could be 24/7), why the hell is Tesla selling them to us versus printing money for themselves?
They do plan to run their own robotaxis. But there are several million Teslas on the road already. They're just leaving money on the table if they don't make them part of the network, and doing so means they have a chance to hit critical mass without a huge upfront capital expenditure.
... and then react in a split second, or what? It's simpler to say your goodbyes before the trip.
> They just think they'll get there.
Of course. I think so too. Eventually they'll hire the receptionist from Waymo, and he/she will tell them to build a fucking world model that has some object permanence.
The driving into static objects thing is horrible and unacceptable, I agree. As I understand, this occurred because Autopilot works by recognizing specific objects: vehicles, pedestrians, traffic cones - and avoiding those. So if an object isn't one of those things, or isn't recognized as one of those things, and the car thinks it's in a lane, it keeps going.
Yes, it was a stupid system and you are right to criticize it. And as a Tesla driver in a country that still only has that same Autopilot system and not FSD, I'm very aware of it.
But the current FSD is rebuilt from the ground up to be end-to-end neural, and they have the occupancy network now (which is damn impressive) giving a 3d map of occupied space, which should stop that problem occurring.
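A toy illustration of why a generic occupancy grid helps with the "unrecognized static object" failure mode; the grid, sizes, and path here are all invented for the example:

```python
import numpy as np

# The planner only needs "is this voxel occupied?", not "what class of object is it?".
occupancy = np.zeros((50, 50, 10), dtype=bool)   # x, y, z voxels of space ahead
occupancy[30:33, 24:27, 0:4] = True              # some unclassified obstacle in the lane

planned_path = [(x, 25, 1) for x in range(10, 45)]  # straight ahead, in-lane
blocked = any(occupancy[x, y, z] for x, y, z in planned_path)
print("brake" if blocked else "clear")           # brakes even though the object was never classified
```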
Oct 2014: "Five or six years from now we will be able to achieve true autonomous driving where you could literally get in the car, go to sleep and wake up at your destination."
At this point, I'd be surprised if ChatGPT has not yet given someone a response which caused them to make a mistake that resulted in a death.
We found out about the lawyers citing ChatGPT because they were called out by a judge. We find out about Google Maps errors when someone drives off a broken bridge.
For other LLMs we see mistakes bold enough that everyone can recognise them — the headlines about Google's LLM suggesting eating rocks and putting glue on your pizza (at least it said "non-toxic glue").
All it takes is some subtle mistake. The strength and the weakness of the best LLMs is their domain knowledge is part way between a normal person and a domain expert — good enough to receive trust, not enough to deserve it.
Or it produces code that compiles but is subtly wrong; that probably won't kill someone, well, until we start developing safety-critical systems with it.
One day we might have only developers who can't actually write code fluently, and we'll expect them to massage whatever LLMs produce into something workable. Oh well.
Self driving cars in general are a useless technology demo. Not just TSLA (although more so because of the repeated false claims).
It’s a way for the car industry to fight against their extinction.
In the beginning the argument was: “it’s not the cars killing people, it’s the damn ‘jay walkers’ (term invented by auto industry, btw). Get those people off the road and cram them into the sides of the streets so my fat car can ride freely!11”
That campaign worked, to some extent, and now we have a patchwork of sidewalks.
Then later…
Eisenhower (inspired by the German military's ability to mobilize easily across the country via autobahn) pushed for interstate highways subsidized by the people. The auto industry capitalized on this, and this contributed to the invention of the American suburb and the slow decay of once-walkable urban cores.
Cars were a luxury item. Now they're a necessity, along with a whole laundry list of items for a car owner:
- gas
- time spent looking for charger, and charging
- parking (less space at home for living and instead using space for car)
- time spent finding public parking
- parking fees
- time spent in traffic
- car repairs
- car maintenance
- car insurance
- yearly taxes for registration
- car sales tax
- car depreciation
- toll fees for turnpike/regional highway
In recent years, people have been realizing how car centric transportation cannot scale (ie, induced demand); and is an environmental disaster.
Now the auto industry’s answer is: “oh we have self driving cars!!1 that’s going to fix it. It’s the damn human that can’t drive!! Have aRtIfICiAl iNtElLiGeNcE hold the wheel! As for pollution, electric cars will fix that” (totally ignoring the carbon emissions to transition to an EV, increased brake dust and tire wear pollution, and rolling 10 yr contribution to e-waste in the form of batteries, and a grid that has traditionally relied on non-renewable sources)
> In recent years, people have been realizing how car centric transportation cannot scale (ie, induced demand); and is an environmental disaster.
In the US, I'm pretty sure a significant majority of people are more or less all in on a car centric lifestyle. That doesn't mean it can't change, but I sure don't think it has meaningfully started to change.
Nothing represents the idea that cars were intended to be absolutely welded to the idea of respectable American society than the "parkway" (it's a park, but you drive on it!).
- EVs hardly ever need to use their friction brakes, so no brake dust, and certainly not more than ICEs.
- Tires only wear faster if you accelerate like crazy.
- Batteries likely last 20 years and afterwards another 30 years as energy storage devices. And then they can be recycled, with 95% of the materials being reused.
- And pollution is not the same as CO2 emissions which is what is being addressed.
>Batteries likely last 20 years and afterwards another 30 years as energy storage devices.
Out of curiosity, do you have a citation for 20-year-old EV batteries being able to be repurposed for another 30 years? Assuming they are used for grid-scale storage, 30 years could very easily be 10,000 charge-discharge cycles (that's slightly less than one per day).
LFP batteries have a much longer lifespan than lithium-ion, but in a brief search I can't find any claim that they will last half a century. For example, this article [1] says LFP batteries have a "calendar aging" rate (capacity loss independent of active charge cycling loss) of "ca. 0.2 percentage points of capacity fade per month at 25°C and to ca. 0.5 percentage points per month at 50°C". So, in ideal conditions, a battery that is kept in storage would take about 20 years to reach half capacity and 40 years to reach zero capacity. Presumably daily charge-discharge cycles would reduce that lifespan significantly.
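Reproducing that arithmetic (using only the fade rates quoted from the article, calendar aging alone, no cycling losses):

```python
# Years until a given remaining capacity, from a constant monthly fade rate.
def years_to_capacity(fade_pp_per_month, target_pct):
    return (100 - target_pct) / fade_pp_per_month / 12

for label, fade in [("25°C", 0.2), ("50°C", 0.5)]:
    print(f"{label}: ~{years_to_capacity(fade, 80):.0f} years to 80%, "
          f"~{years_to_capacity(fade, 50):.0f} years to 50%")
# 25°C: ~8 years to 80%, ~21 years to 50% -- calendar aging only.
```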
Your numbers are empirically wrong. If they were right, my own LFP EV should by now have lost 5%, but it hasn't.
One discharge cycle per day is utterly insane. Nobody does that. How many people drive 350 miles EVERY day for 30 years? 350 miles - that’s about one discharge cycle.
> people have been realizing how car centric transportation cannot scale (ie, induced demand)
Car centric transportation scales just fine. That’s why most people still prefer it today, at scale. And induced demand isn’t real - it’s just unmet demand. If people want to make more trips, then it makes sense to expand roads to help them do that.
The article answers the wrong question of 'is FSD perfect?'.
A better question is whether it's 'cost-benefit positive'. That's all that matters when users decide whether or not to use something.
If FSD reduces fatigue and allows you to arrive to work fresher, it might be worth tolerating the odd wrong turn extending the drive time by 2 minutes.
> Without FSD, you pay attention to the road and everything else is within your control. With FSD, you still need to pay attention but now there’s the additional cognitive load to monitor an unpredictable system over which you don’t have direct control.
For the author, FSD is a worse experience in addition to costing a lot of money.
When you are new to it, yes. For me it's a better experience because I've been using it so long and have a good idea of what it can and can't handle. I use it extensively on roads I know it works well on, and use it sometimes, tentatively, on new roads when I'm in the mood. That I spend almost all of my driving time on the same few routes makes FSD very valuable to me. It's probably still not a net benefit if you're mostly driving new places or places where it doesn't work well, but it's getting better.
> For me it's a better experience because I've been using it so long and have a good idea of what it can and can't handle.
How would you know that, though? That's the reason why at this point I can't see myself ever using that kind of a feature. The added stress of the unpredictability would make the experience miserable.
I believe the author isn't being dishonest about how they felt. But the supporting reasoning is pretty discardable. They emphasized one change as additive but glossed over the subtractive change, implying there was none in the concurrent executive decision-making. Experienced cognitive load can partly result from a user's anxiety with the momentarily unfamiliar, and I think this is probably more at play than the author is self-aware of in the comparison.
The article answered that: the author experienced increased cognitive load while using FSD, needing to be hyper-aware not only of the road conditions but also vigilant and ready to correct mistakes.
But as the article points out, you won't arrive fresher if you have to still stay alert, but with the addition of needing to anticipate the car randomly doing something crazy. I find it more stressful than driving myself.
No technology that's actively being worked on is "done". It seems silly to decide that because it isn't perfect today, it's only a useless technology demo.
How does Tesla FSD fare against the Waymo self-driving taxis?
I'm curious whether self-driving is still an impossible task right now, or if it's just a matter of quality between companies - in which case it's possibly a fair bet being made by Tesla execs that they'll bridge the gap given time and money.
The Waymo cars drive themselves without (in person) supervision. As best as I can tell from three rides, they do it perfectly. However the range is limited to city streets in a few cities, no interstates at all (they are adding this soon.) The Tesla FSD even at v12 can go about five minutes before I have to intervene. If you’re extremely tolerant and don’t care what other drivers think (eg weird slow behavior at stop signs, sudden rapid bursts of acceleration in inappropriate places, turn signal decisions that a human driver wouldn’t make) I bet you could push it up to 10 or 15 minutes without intervention. I don’t have enough courage to genuinely let FSD loose on city streets without intervening.
More generally, Waymo’s approach is to own the hardware and heavily supervise it with remote workers who can instruct it how to deal with complicated situations (eg lane blocked by emergency vehicle.) Tesla has none of that infrastructure yet. It’s sort of hard for me to see a business model where (1) the user owns the hardware, (2) there are necessary remote human beings monitoring and advising the car in sticky situations (that costs money), and (3) a third party company takes on the liability risk. The idea that you’re going to “rent out” your personal car during the day runs into the question of who pays when someone gets killed/hurt, and that immediately runs into the question of how a remote operator deals with the problem of malfunctioning hardware it doesn’t own (and why it needs to borrow other folks’ personal hardware at all.)
Waymo operates a business that offers hundreds or thousands of rides every day under regulatory supervision, and we have public data about accidents (there have been a few.) My anecdotal observations aren’t a replacement for data, but there is data. For the Tesla I’ve had FSD on my personal car since it became available in beta and have way more than three rides. My observations are sufficient to tell me it’s not reliable enough to run unsupervised, at least as long as I’m liable or in the same city as an unsupervised one.
And Tesla is selling FSD for thousands of dollars per activation.
Waymo technology is awesome, but I'm pretty sure, right now it's a money-losing business for its owner, Google.
Currently, it's definitely less reliable than Waymo, but is available anywhere in the US or Canada, and is coming to more countries soon.
The other factor is the trajectory of each endeavour. Waymo are gradually adding more cities, and Tesla FSD is gradually getting more reliable.
Both of them are going to be perfectly fine self-driving systems at some point in the future. It's an open question as to when Waymo will be able to scale up substantially, and when Tesla FSD will be reliable enough to operate as a robotaxi service.
You can see lots of videos of both on YouTube to gauge where they're up to. If you find accounts that are focused on each, you can search by oldest videos to see the progress that's been made and extrapolate from there.
Tangentially, but I'm wondering how the calculations really go with the robotaxis. Taxi drivers here are probably getting $5/h or so, and it's a job you can get without any skills at all: you don't need to speak the language and you don't need any tests, you just need to be able to sit down, input stuff into a GPS and drive. So there's an infinity of people to employ if you're a cab company. Neither Waymo nor Tesla want to give the tech away for free. Is there a market (without corporate subsidies from Waymo etc)?
It works out pretty well in theory. Even at minimum wage, the cost of the driver is a large share of the cost of the taxi ride. It does all hinge on the cost of the vehicles + the cost of monitoring + the cost of maintenance being low enough to offset this, though. Cost of vehicle is higher in the case of Waymo, but even 3x the cost is still useful; cost of monitoring is probably still a bit high at the moment but is the main thrust of improving the tech; and maintenance is not necessarily much worse than for a normal taxi.
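As a rough back-of-envelope (every number here is an assumption for illustration, not a real Waymo or Tesla figure):

```python
# Compare the yearly cost of a human driver with the extra yearly cost of autonomy.
driver_wage_per_hour = 15.0        # assumed fully loaded driver cost
hours_per_year = 2500              # assumed single-shift taxi utilization
extra_vehicle_cost = 100_000       # assumed sensor/compute premium over a normal taxi
vehicle_lifetime_years = 5
remote_monitoring_per_hour = 3.0   # assumed share of a remote operator's time

driver_cost = driver_wage_per_hour * hours_per_year
autonomy_cost = extra_vehicle_cost / vehicle_lifetime_years + remote_monitoring_per_hour * hours_per_year
print(f"driver:   ${driver_cost:,.0f}/year")
print(f"autonomy: ${autonomy_cost:,.0f}/year")
# With these assumptions the two are in the same ballpark; the business case
# hinges on utilization, hardware cost, and how much remote monitoring is needed.
```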
Is that the old FSD stack that, since 2018, has been "ready for fully autonomous driving where you can take a nap by this time next year", according to the CEO of the company?
Why are people still hopeful for Tesla FSD when other companies are so, so far ahead already?
Geolocked driverless cars are a major advance over FSD. Mercedes "glorified lane assist" is actually safe in well defined parameters, and gives you a well defined amount of time (10s?) to re-engage, with guaranteed fail-safe if you don't. FSD is... A tech demo?
> gives you a well defined amount of time (10s?) to re-engage, with guaranteed fail-safe if you don't
I don't see it in "DRIVE PILOT Special Terms and Conditions".
> As the fallback-ready user of this system, it is the responsibility of the user of the DRIVE PILOT Subscription Services to remain receptive and take control of the vehicle as soon as DRIVE PILOT function issues a takeover request.
> Conditions may arise at any time that require the user of the DRIVE PILOT Subscription Services, as the fallback-ready user, to respond to a takeover request.
> If a user of the DRIVE PILOT Subscription Services does not respond to the takeover request within a certain period of time, the vehicle’s emergency stop procedure will begin
I think you understand that in real life the situation can go outside "well defined parameters" at any time, so the wider the acceptable set of parameters the better. And Drive Pilot seems to be pretty limited in that.
Yeah, I thought exactly the same thing. If you didn't know what kind of person Elon is after the Thai cave diver incident in 2019, you weren't paying much attention.
I just completed my trial and it was better than I expected, and yet still utterly terrifying in bits. It still feels _very_ far off from being able to safely navigate and negotiate even medium-complexity environments, but maybe the vaunted 12.4/12.5 update will change all of that.
For example:
* It still won't change in or out of a solid line HOV lane here in Arizona. Feels like an easy fix, but there it is
* I have concerns about its ability to check oncoming traffic when coming out of an occluded side street or alley. For example, my alley (where my garage leads) connects to a MAJOR road that is extremely fast. It is also fairly occluded in the side view by bushes and a light post. A human will move their head forward, crane their neck, and also be able to detect subtle changes in the light and shadows through the bush itself to determine if there's _any_ movement and interpret that movement as a potential car, even if they can't positively see a car. They can inch forward until they can see that the path is clear. The Tesla's side-facing camera is in the b-pillar, behind the driver's head, and at best, it can inch forward (and does) but gaining a high-confidence positive reading that the path is clear is... well, nearly impossible in certain cases that aren't impossible for humans, and that's concerning.
* Parking still takes one too many adjustments, and impatient drivers around you definitely notice it
* At one point, the FSD/AP engine itself crashed on me while fully engaged. Unfortunately, this happened on a freeway connector ramp with a pretty steep curve, and when it crashed, it disengaged the steering wheel and sent us careening towards the barrier: it was a single lane HOV ramp, and we were going about 70 mph, so if I hadn't been hover-handing, it would've easily resulted in a bad accident. This wasn't a case of disengagement or AP getting scared or losing confidence. The engine itself suddenly, without warning, and for no discernible reason, crashed entirely. (It immediately threw an error and said AP System Error/Take Control Immediately.) It then showed the car in a sea of black, as the visualization/FSD engine rebooted. This sort of crash is kryptonite. It's terrifying and its randomness and senselessness and opacity towards what caused it if anything is haunting. Again, a disengagement like this with no driver would result in catastrophe.
On the flip side, I was fairly surprised at how well it handled a lot of basic driving tasks. Visual-only parking still freaks me out (especially since my model HAS ultrasonics, but you disable them when you go visual-only, which is absurd), and a couple turns felt close to the curb, but overall, driving was fairly smooth and decent.
I have the added benefit of living in Phoenix, which is Waymo country. Waymos drive more confidently, and more importantly, are already fully autonomous. They navigate complex environments fairly decently (though, for example, my dad got stuck in one doing loops of a dealership parking lot that confused it a few weeks ago) and they're comfortable to ride in. They're not yet on freeways, though apparently that'll change soon, and they only go the speed limit, which in Phoenix is... a choice.
Elon keeps pushing this dream of a robotaxi fleet of Teslas, but I agree with the OP that it feels a long way off before I'd be comfortable with the idea of these things driving fully autonomously, and I say this as someone who sees a half dozen Waymos every single day. I also wonder more broadly about the core conceit here: not whether fractional car ownership makes sense (it absolutely does), but whether Tesla owners are going to be comfortable with their ~$50k-$150k vehicles roaming around and picking up strangers who... hopefully don't do things to their car, all while hoping the car comes back home. I don't believe Elon was pitching the robotaxi fleet as wholly Tesla-owned vehicles, but it seems like a big societal shift to get people comfortable with their cars having minds of their own and taking in randos.
The pre-12.5 stack uses traditional C++ conditionals together with computer vision for highway driving. 12.5 and after is a single-stack setup (at least that's what Musk claims).
The 12.3.6 (or the 12.4.1 that's being trialed right now) is pretty much the current production state of the art for Tesla street-level (not highway) single-stack self-driving, where the steering commands are issued directly by the neural network.
It's about as shit as the 300k lines of C code, now with drink-spilling acceleration added in 12.3.6 for no reason at all.
There are major improvements: taking a curve no longer scares the shit out of the driver, basic right turns at stop signs are much faster, and when it does change lanes, it's hard to describe just how smooth it is. It's a little nuts; the car just floats over, it's not human. But it really should not have changed lanes most of the time... so the decision logic is still shit. Etc.
And that's the whole problem with it - there isn't really any 'decision logic', it's a giant low level neural network trained on the outputs of human drivers. It understands driving less than GPT understands semantic concepts in language. There's no 'executive' high level control, it's just a big stupid animal brain that reacts to stimuli like 'turn left ahead' or 'car ahead slowing down'.
> At one point, the FSD/AP engine itself crashed on me while fully engaged. Unfortunately, this happened on a freeway connector ramp with a pretty steep curve, and when it crashed, it disengaged the steering wheel and sent us careening towards the barrier
It seems the lessons of Therac-25 were not only ignored, but thoroughly trampled underfoot. WTF.
I wonder why there is no open data on "no-mistakes FSD zones", e.g. map areas or routes shown by large sample sizes to be error-free for Tesla FSD without interventions.
That would clearly show progress and help people understand when they're trying a road that hasn't been proven yet.
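As a sketch of how such a dataset could be built from community-reported drives (the report format, tile size and thresholds below are all hypothetical; this is just the aggregation idea):

```python
# Hypothetical aggregation of crowd-reported FSD drives into "no-mistakes" map tiles.
from collections import defaultdict

def tile(lat, lon, size=0.01):
    """Bucket a coordinate into a roughly 1 km grid cell."""
    return (round(lat / size), round(lon / size))

def no_mistake_tiles(reports, min_passes=500, max_intervention_rate=0.0):
    """reports: iterable of (lat, lon, had_intervention) tuples, one per pass through a tile."""
    passes = defaultdict(int)
    interventions = defaultdict(int)
    for lat, lon, had_intervention in reports:
        t = tile(lat, lon)
        passes[t] += 1
        interventions[t] += int(had_intervention)
    return [t for t, n in passes.items()
            if n >= min_passes and interventions[t] / n <= max_intervention_rate]
```

The hard part isn't the aggregation, it's getting honest, high-volume reports in the first place.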
FSD is not ready. I'm a Model Y owner, tend to lean pro-Elon, and will gladly tell you that it's a fantastic car - but the self-driving is still not ready. My free month of autopilot made me trust it less. It's quite good at highway driving, but the phantom braking and haphazard lane changing has eroded my trust to the point that I am manually driving the car 100% of the time.
I prefer the previous pseudo-FSD which simply stayed in the current lane, and relied on human input for anything more than that. This whole FSD update has been a sobering realization that I really only want the safety features.
> It's quite good at highway driving, but the phantom braking and haphazard lane changing has eroded my trust to the point that I am manually driving the car 100% of the time.
Elsewhere in this discussion someone noted that the latest FSD updates don’t apply to highway driving. I don’t own a Tesla and am not a fan of them due to the various privacy issues and reliance on touchscreen controls. But for what it’s worth phantom braking is an issue I have on my non Tesla cars with their driver assistance features. I’m not sure if it is worse on a Tesla, but anecdotally in my friend’s Tesla I was blown away by how comparatively advanced a Tesla feels.
US tech has many such stories throughout the ages. but there is still a "cultural significance" to the novel ones like FSD imho. the vision marketing helped fuel the eventual development to make it a reality, whether we are there yet or not.
another place where this is in play (also in the ai space) is the openai mumbo-jumbo. while there is plenty of discussions on hn about the real efficacies of genai, we have to agree that it has motivated companies to drop bucket loads of resources into its advancements. same way, the auto industry and researchers got a reason to fund designing the cars of the future.
a smaller example where this worked out was the macbook air and ultrabooks space. while everybody was captivated by the original fitting inside a large envelope, it was an overheating, slow (even for its time) mess for many years. but by creating a clear market demand for the vision, we finally have devices that meet the original vision.
while i cannot stand behind the current driver of FSD tech (or ai for that matter), i respect their role as a catalyst in research.
I’ve been using FSD for the last 6 weeks. Paid for the monthly subscription, and running v12.3.6
It’s almost killed me once, when it was about to blow through a red light at ~70mph, while a car was about to turn left in front of me. There was no indication that it was going to stop, so I slammed on the brakes.
Then, after about three weeks, it stopped tracking the lanes properly and would drive straddling the lane divider. I'd repeatedly enable it in the far left lane of a 4-lane road and watch as it would promptly begin driving about two feet into the right lane. Disable and re-enable it, and it would repeat this engineering feat. This continued for 2 days. Yesterday it was just driving extremely right-justified. If there were any cyclists in the bike lane, my mirrors would be intruding into their space.
Of course I bought the FSD function as I knew that I would be attentive and provide feedback, kind of a service to humanity, as this feature is far from prime time, and isn’t safe. I may cancel the service as I’m concerned that my 17 year old new driver will enable it, and not be as attentive.
I just figured out what's broken. The front-facing cameras are obscured by what appears to be outgassing from the glue holding the flocked material around the front camera enclosure. This is the second time this has happened. The Tesla allows you to keep driving and gives no warning that it can't resolve properly. I reported it to the NTSB before, with no resolution. Time to do it again.
The front facing cameras were fogged. Just fixed that, but it’s still driving right justified all the way into the adjacent lane. Something is broken. NTSB time.
Disclosure: I work for a Tesla competitor, not on self-driving. This is solely my own opinion.
More and more, I feel that we need at least a couple more paradigm changes - specifically:
- More understandable models - being able to get good answers to "why did you do that"
- Better retraining - Don't do that thing! (without training a whole model)
- Better internal learning - don't wait for the next download
Non ML changes: Public understanding and acceptance of self-driving, favorable insurance and legal framework.
Public understanding includes improvements to two way communication between humans and self driving vehicles.
Legal framework includes limited liability for manufacturers. NOTE: I don't love this idea or even like it, but without it we will only get driver enhancement features, not SDC. (If you're angry at this suggestion, know that I am too)
---
My basic approach to all of this is that you need to have a solution to any problems that automation will encounter. The World is a problem generating machine. Most solutions to automation problems involve constraining the problem space. But as that guy said "Life, uh, finds a way"
Right. Scary stuff. I'm not excited to drive a cheap second-hand ICE car, but the fanciness stops at AC and 3.5 mm AUX-jack on the stereo, and that's pretty nice. If I wanted to I could do service and repairs myself.
You can also just have a dumb EV and thus do a favor to both your own safety and the survivability of the planet. EV does not automatically entail AI-assist.
I'm not very well informed in this area, but I suspect there are no serious alternatives. Ignoring price, are there EVs that can travel at least 500-600 kilometers on a charge but only weigh 1500 kilos, and hence are simple enough to lift with consumer or improvised tooling? Are there EVs without remote control and 'upgrades'? Can I change bulbs and swap tires on such EVs? Do they fit at least two child seats, or is that amount of space more of a premium feature?
The existing EVs' specs are more than enough, don't worry.
And yes most of these EVs are still pretty dumb, so you'd like them. It's just that Tesla got the hype.
By the way, once you realize that you never actually drive 500 km straight without several 30-minute pauses to rest, which can double as charging stops if you have an EV, you discover that you have many more options than you'd think when choosing a model to buy. (Most governments officially advise such pauses, since skipping them endangers the lives of everybody on the road.)
Looking at some ads, it seems rather common that they're Internet-connected. Do you have an example?
30 min pauses? Why? I stop for five-ten minutes every four to five hours when I need to pee, and I can't rely on there being charging stations along the way.
At least all the Toyota and Mitsubishi electrics prior to 2018 or so. And I did not even look up anything in particular; it's just well known that none of those had any kind of remote upgrade.
I would advise you to do longer pauses when you are on such a long trip (not only me, the road safety specialists too). It's not for the bladder, it's for the fatigue, particularly of the central nervous system.
The risk is not to be killed by a bladder shrapnel, it's to switch off the attention and be crushed by an unnoticed lorry.
About road safety: Yes. Seriously lol please put a bit of your own energy in searching this.
And it's not "to make the specialists happy", it's to increase safety according to the knowledge accumulated thanks to such specialists.
(ok I see "neo-luddite" in your profile, but usually that does not entail "not trusting scientists/specialists")
You have a flesh brain like everybody else, with biological limitations that lead to fatigue and decreased attention after a long drive without sufficient pause.
Image segmentation is almost a solved problem. There is no reason why it should get confused, even with a vision-only system. Their problem is most likely that they don't have enough compute to process a history of frames and instead process a single image at a time, leading to jumps in the segmentation results; those random jumps cause unpredictable braking.
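A minimal sketch of the "history of frames" point: if per-frame segmentation logits are exponentially smoothed over time, single-frame flicker is far less likely to flip the class map (and hence a braking decision). This is purely illustrative, not a claim about how Tesla's stack actually works:

```python
import numpy as np

class TemporalSegmenter:
    """Smooth per-frame segmentation logits so one noisy frame can't flip the output."""
    def __init__(self, alpha=0.8):
        self.alpha = alpha        # weight on the accumulated history
        self.state = None         # smoothed logits, shape (H, W, num_classes)

    def update(self, frame_logits: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = frame_logits.astype(float)
        else:
            self.state = self.alpha * self.state + (1 - self.alpha) * frame_logits
        return self.state.argmax(axis=-1)   # per-pixel class map for the planner
```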
In my experience it's almost always shadows. I can't be sure of course, but I've definitely noticed a correlation: shadows from overpasses and shadows from semis, and it happens more often when the sun is low, too.
It never happens at night though, which in my mind makes the shadows hypothesis weaker.
The author makes a mistake early on, "hoping" that when he buys a new car there will be alternatives with adaptive cruise control as "good" as Tesla's. This is the problem with Musk's hype. There are many, many better systems, including Mercedes' Level 3 self-driving, which is far beyond anything Tesla has. Simpler systems like Autopilot that perform better are in Audis, Hondas and more. People really need to understand that Musk's pitch is simply a lie, and there are independent tests proving how far behind Tesla is.
Author here: I’m well aware that there are similar autopilot systems. There’s a reason why my requirements list has “extensive charger network” before “autopilot just as good”.
(Also, the autopilot of my wife’s Audi is a huge regression compared to the one of my Tesla. Though that’s the 2019 model, so who knows…)
""If somebody doesn't believe Tesla's going to solve autonomy, I think they should not be an investor in the company," Musk said on the latest earnings call in April. He added, "We will, and we are."
Musk has been making these kinds of pronouncements for years, and the company has yet to deliver.
This is probably the most difficult problem in computer sense/vision that we have, and I am beginning to think that short of the purely hypothetical "AGI", this requires the computer known as "the human brain".
The only way out I see is separate or vastly updated infrastructure - separate roads and highways, and streets with specific signs and markings to help FSD vehicles navigate. For example, to ban FSD vehicles from taking certain risky routes.
I struggle to understand the negative reaction to him online. I don't like him because he divorced the mother of his children and throws out pie-in-the-sky projections about his businesses. But those are forgivable sins in Silicon Valley. And even though Musk writes many checks he can't cash, it's impossible to deny that he pretty much single-handedly restarted commercial space travel and EVs as American industries. The best selling car in the world in 2023 was American, not Japanese or German or Korean. That should make up for a lot of unkept promises about FSD, etc.
At the root of the Musk hate seems to be the fact that he (loudly) doesn't subscribe to a handful of political positions that some folks have adopted as axiomatic in their moral worldview. They essentially see him as a heretic, and attack him for stuff that others get a pass on. I mean, Zuckerberg made his billions spying on everyone and destroying the mental health of teenagers. His businesses have a tiny fraction of the socially redeeming value of Musk's. But Zuck doesn't get a fraction of the hate Musk does.
My car is old enough to not have adaptive cruise control and lane centering, but that's all I'd love to have for now. Well executed, it's such a joy on a highway or in traffic.
Jeep and others have it but drive like a bowling ball with guardrails up.
Has anyone here successfully retrofitted an Audi A4 with ACC? I hear Cruise started like that; it would be awesome to get a kit like that.
The car has to have electric power steering that accepts forged external commands. Cars from around 2005 and earlier tend to have incompatible hydraulic units, and cars from around 2019 onward tend to enforce message signatures on external steering input.
Gas, brake, and steering are each controlled by separate computers with their own firmware; there's no centralized keyboard controller or USB root hub to take over. The computers have to be designed and built to accept driving commands, or be replaced by ones that do, or be mechanically actuated. The latter two paths are rarely taken.
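For the curious, here is roughly what "accepting external steering commands" looks like at the CAN level on cars where it is possible: a periodic frame carrying a torque request plus a rolling counter and checksum. The arbitration ID, payload layout and checksum below are invented for illustration; a real EPS rejects anything that doesn't match its exact expected format, rate and (on newer cars) signature:

```python
import can  # python-can; requires an actual SocketCAN interface to run

def lkas_frame(torque: int, counter: int) -> can.Message:
    """Build a hypothetical steering-torque frame (ID and layout are made up)."""
    data = bytearray(8)
    data[0] = (torque >> 8) & 0xFF
    data[1] = torque & 0xFF
    data[6] = counter & 0x0F                 # rolling counter the EPS checks for liveness
    data[7] = sum(data[:7]) & 0xFF           # placeholder checksum
    return can.Message(arbitration_id=0x2E4, data=bytes(data), is_extended_id=False)

bus = can.interface.Bus(channel="can0", interface="socketcan")
bus.send(lkas_frame(torque=150, counter=3))  # ignored unless the EPS expects exactly this format
```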
Honestly stop and go is where I most want ACC / follow mode.
Regular cruise works fine on highways, and my experience with adaptive is that it makes passing more difficult because it slows down way too early (though other implementations might be better). Plus, the focus on smooth, comfortable slowdowns means it can be hard to notice while you're watching the road and your surroundings, at which point you're quite a ways below passing-lane speed and need to accelerate back up.
Also doesn’t deal well with passing trucks for lack of anticipation.
FSD is worse than autopilot, and makes autopilot worse. When you enable FSD, it turns on the cabin camera and applies the same driver attention standard to autopilot. Autopilot is really good! But if FSD is on, it becomes next to useless. You actually have to look at the road all the time!
Why would he do something Tesla should have done? Why would he risk his life for a blog post? He's a regular person trying to reach his destination, not a Tesla beta tester with Tesla-funded insurance in case the car tries to kill him.
1. I explicitly mentioned to do it only as long as it was legal and safe to do so.
2. The post is about three test rides. Sure he may have had a destination to reach at the same time, but he was testing.
3. It probably wasn't a life or death situation the way it was written. "The car would have figured itself out eventually" does not sound immediately life threatening.
4. There's no basis for "the car would have figured itself out eventually". I felt the bits I quoted came off a bit too hand-wavy for something that is labeled a test. Either don't include it at all, or see what actually happens (within reason).
The author pointed out that either way, drivers behind him would have been very confused, so even if it DID correct itself, it still made mistakes that disrupted the flow of traffic.
The first 2 and last incident seem like mapping errors. I know people working in this space, and it sounds like they all think the problem is "just giving it a better map". They want to rely primarily on a world map and use things like vision to "see" cars and other local stuff. That approach will never work (except in limited cases like Waymo mostly gets away with). Humans rely primarily on vision and understanding, and reading signs.
FSD requires AGI. These systems are all expensive science fair projects and have already killed a number of people.
Almost a decade ago I used to be a hyped up HS graduate fully spoon-fed the AI hype bubble (after 2012, the first "deep" learning breakthroughs for image classification started hyping the game up). I studied at a top 5 university for CS and specialised in deep learning. Three years ago I finished, rejected a (some would call "prestigious") PhD offer and was thoroughly let down by how "stupid" AI is.
For the last 2-ish years, companies found a way to throw supercomputers on a preprocessed internet dictionary dataset and the media gulped it up like nothing, because on the surface it looks shiny and fancy, but when you peek it open, it's utterly stupid and flawed, with very limited uses for actual products.
Anything that requires any amount of precision, accountability, reproducibility?
Yeah, good luck trusting a system that inherently just learns statistics from data and will thus always have a fundamentally unacceptable margin of error. Imagine using a function that gives you several different answers for the same input, in analytical applications that need a single correct answer. I don't know anyone in SWE who uses AI as more than a glorified autocomplete, one that needs to be proof-read and corrected so often that it's frequently counterproductive.
TL;DR: it is not at all surprising that FSD doesn't work, and it will not work on the current underlying basis (deep learning). The irony is that the people with the power to allocate billions of dollars have no technical understanding and just trust the obviously fake marketing slides. Right, Devin?
What everyone seems to be ignoring when it comes to autonomous cars is that the last 1% or 0.1% or whatever remains unsolvable for "AI" might not really matter.
We can change roads, ffs! At the bare minimum, we could fence off difficult areas and force manual driving. Once many people own an autonomous car, there will be pressure to make roads safe and convenient for them.
Phantom braking, for instance, is only a safety issue when the following car gets too close. Lanes can be annotated, etc.
The "AI" part is just marketing to get these cars on the road and set the customer's expectations. Once that is done, the hard parts will just be moved from the car manufacturer to the road builders.
Road infrastructure is crazy expensive, and even if you are willing to rebuild it for autonomous cars instead of building mass transit, you will have either segregated roads for autonomous cars (which is essentially individualistic mass transit) or mixed car infrastructure.
The US has a road network of 7 million km, and apparently it costs about 1 million to 45 million[0] to build 1 km of road, depending on the nature of the infrastructure. Obviously the higher end is for large roads built in city centres, but even at the lower end of the cost spectrum it's crazy expensive.
IMHO it doesn't make sense to rebuild the infrastructure for autonomous cars. If you're going to rebuild the infrastructure, you may be better off building continent-wide mass transit. At least you won't need parking lots.
Half of the city infrastructure is dedicated to supporting 3000-5000 pounds of highly manufactured metal to move a single 100-200 pound passenger going the same direction as everyone else.
And we want to spend billions on autonomous driving so we can stay locked into the absolute least efficient solution, which requires manufacturing another $10k or so of equipment for hundreds of millions of cars to enable FSD?
Every time I see someone start a new planned community I am wondering why they haven't designed it with autonomous vehicles in mind. You could get away with much simpler systems.
I would think the opposite... If I planned a community I would take 0 thoughts into self driving cars. I'd focus on making sure every single area can be reached via a walking/bike path.
Self driving will figure it out anyway (given enough time).
You are severely underestimating how much of regular traffic problems are unsolved and far from being solvable with current tech. It's not 1% or 0.1%, it's probably closer to 5% or more if you include all year round conditions and countries with less car-friendly roads than the USA.
"well before Elon revealed himself as the kind of person he really is"
Many of us knew that Elon was full of shit long before, but until he started spewing his politics, no one cared to listen even to respected industry experts. A sad state of affairs IMO.
If you aren't in the cult of personality, there's a lot that needs justifying at this point. I have no interest in buying a Tesla unless he divests fully. Everyone else seems to be catching up without lying about their projected capabilities, and I don't need an EV today. I don't want to give him any more money than he has.
How many CEOs & owners have you vetted when making purchases? CEOs are a pretty shady crew as a class of people; it attracts people who like power and they are the dangerous ones. Or people who are single-minded in their focus and they tend to come with a raft of unusual views and personalities. And even after vetting, you're probably just biasing your purchases in favour of CEOs who do image management rather than people you'd actually like to support.
If you're buying cars based inversely on how much money the company owner has, you're implicitly ruling out a lot of good options. It is using the language a bit loosely but in some relevant sense we'd expect people who make the most cost-effective products to make the most profit.
When I see a Tesla, I think of Elon. When I see another car, I don't think of whoever the CEO is, because I don't know who that is. For better or worse, Tesla=Elon in most people's eyes. He has made himself the figurehead, and he is an extremely divisive figure. I didn't like feeling that I was implicitly supporting him whenever I got into my car.
You could apply this same reasoning to the trolley problem.
You know one CEO but not the others. The others could be worse people overall but they choose to hide it. So you make a decision based off a known bias.
I don't think you're irrational for making this decision, it's just interesting. It's something I've never done.
I'm not boycotting it because I'm under the illusion that it will affect Elon in any way. I am avoiding it because it makes me feel icky whenever I get in it and am forced to think of him. I also don't want other people to think I support him.
I know what you mean, but there's a few other companies who I boycott because they (or their CEO) said or did something that disgusted me, and I've had no reason to go back later and see if it's been corrected.
I don't know the name of a single other car company's CEO. If they are in the news or a topic of conversation among friends for a questionable or objectionable reason, I will evaluate them. Terrible people can be unknown for long periods of time, but it's not worth my time to investigate everyone just to be fair to Musk and his fans.
Lmao, how many other companies do you buy products from that have shady business strategies or CEOs with unacceptable views? At least Elon doesn't hide it.
I suspect it's because a personality-led company implies that people buying the product tend to agree with the personality.
Tesla is an extreme example: Elon is either Jesus or Judas, with no in between. I have a Powerwall, because at the time it was the best product.
When I do talk about it, the second or third question is: "what do you think of, you know, nod, nod, wink, wink".
He's a polarising figure, which makes getting useful reviews out of their products/business practices hard. (same with Facebook, Nestle and other pariahs.)
I've not looked recently, but from a cursory look there are stackable systems out there for ~£1.1k per 5 kWh, which is about a third of the price of a Powerwall. They also have local APIs, which is really nice for integrating into home automation.
It looks like the powerwall isn't the best anymore, so I probably wouldn't
I've seen it firsthand living in a very liberal area. 3 years ago people loved the brand. People would strike up conversation about it. It was all upbeat, happy and hopeful.
Now, people rarely talk about it. And when they do it's apologetically. It often ends with some down statement about politics. Someone saying there were no other choices. Just a crappy time.
I love my Tesla. It's a great car. But it's my last. I want nothing to do with this type of politics or conversation.
I for one am sorry about its frontman. It's a good car that I don't want again... That's just stupid of him.
Sounds less like they're actually upset about their purchase, and more like they're afraid to be bullied. It's a good strategy. I'm all for making people apologize and pre-emptively justify making decisions I don't like.
I think it's because people don't want to come off as endorsing elon musk/distance themselves from his weirder fans. I'll say though, 3 years ago is well after he called that one rescue diver a pedo so I don't know if the opening statement totally holds. Not that I really care that much.
Distancing is a good point, but maybe still not needed given that this is real critique. I would understand the distancing more if this review was praising FSD.
I think people did, to some extent. "Not an Apple fanboy, but..." is a sentence I've heard more than once. But Apple's figurehead never made as much of a habit of acting publicly obnoxious, not even during the Steve Jobs days.
There is probably a reason why we hear very little about the political opinions of most Megacorp CEOs on a daily basis.
I'm sure I would agree with some of them and disagree with others but they are smart enough to not make that a center piece of their public persona.
Elon Musk decided on his own to
a) be a central marketing and PR channel of Tesla. He has everyone convinced that the company cannot thrive otherwise.
b) involve himself in hot-button political discussions all around the globe (usually in a "hot-take-no-need-for-further-research" manner).
c) buy a social media company while publicly lamenting the state of social media. At said social media he (again very publicly) instituted changes to align it with his political philosophy.
Whether one likes or dislikes his opinions, it is very clear that he wants to be seen as some kind of political influencer and that he bases some of his business decisions on this persona.
This is quite unusual, and probably the sole reason people even think about the "frontman" of the company when discussing the product.
It's funny, though, because before Elon, CEOs were hated just as much, mostly because they were all bucketed together as too rich, having too much control, and not being open in communicating their decisions.
Now a new CEO disrupts (part of) that, and everyone hates him even more, mostly because they disagree with his comments.
It's very strange tbh. Should we be encouraging the transparency in CEO outspokenness regardless if you agree or not?
> TV said he was buying a Tony Stark car. Then TV revealed that Tony Stark was two Mussolinis in a trenchcoat. Car bad now
Do you celebrate the moon landing? The US maintaining a constant lead in aerospace over the rest of the world? Those were built by the Mussolini supporters of the world. NASA and Saturn V would not have existed if not for the nice German folks who were recruited to work for Uncle Sam. Science and engineering are more apolitical than you think.
Who cares what he puts on Twitter. IMO people are too obsessed over what people put on social media. Actions matter, words much much less (In this scope).
Judge people for what they do, not what they say (10000x on twitter).
Von Braun was a Nazi party member and used slave labor IIRC. Why would anyone think he would change? I bet Von Braun didn't. The US just used him, after trying not to, because he was too useful. Nobody was going to give him any more slave labor so what evil could he do?
Musk is another kettle of fish - he's very rich and has the potential to do a great deal of evil.
Right, getting more corner cases training data won't solve an architecture problem. AI in general quickly impresses when it's mostly right but improving from there is the challenge.
More data won't help if the problem is the tools as such.
Grandparent said the hard part is getting rid of the last 1%; parent claimed Elon said the same when he said 99.99% of the training data is useless.
But it's not the same.
Elon thinks he just needs the right data to solve the problem but it could be impossible even if he gets the data because of the limitations of the used type of AI.
If you need a screwdriver but only have a hammer more nails won't help.
AI is here used by me as an umbrella term for computer decision systems.
And yet on a very simple drive I have to intervene 4-6 times over a distance of 8 miles. How is this not useful? It would have been easier by now to ask people to record how they drive each road and use video-game track logic where you race a ghost…
The only time FSD works 'OK' is on single-lane roads with 90-degree stop signs and turns.
I don’t believe that the current hardware can handle what is needed to have passable FSD for an average consumer.
No. For the easy 99.999% of driving they keep very little of the training data.
Basically you want to minimize manual interventions (aka disengagements). When the driver intervenes, they keep a few seconds before (30 seconds?) and after that intervention and add that to the training data.
So their training data is basically just the exceptional cases.
They need to just make sure they don’t overfit so that the learned model actually does have some “understanding” of why decisions are made and can generalize.
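A rough sketch of that curation step as described above; the window lengths and log format are guesses, not anything Tesla has published:

```python
# Keep only short clips around each disengagement; discard the uneventful rest of the drive.
WINDOW_BEFORE_S = 30
WINDOW_AFTER_S = 10

def intervention_clips(drive_log):
    """drive_log: dict with 'frames' (list of dicts carrying a timestamp 't' in seconds)
    and 'interventions' (list of intervention timestamps in seconds)."""
    clips = []
    for t_int in drive_log["interventions"]:
        clip = [f for f in drive_log["frames"]
                if t_int - WINDOW_BEFORE_S <= f["t"] <= t_int + WINDOW_AFTER_S]
        if clip:
            clips.append(clip)
    return clips
```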
It's not clear that a bunch of cascaded rectified linear functions will ever generalize to near 100%. The error floor is at a dangerous level regardless of training. AGI is needed to tackle the final 1%.
The universal approximation theorem disagrees. The question is how large the network should be and how much training data it needs. And for now it can only be tested experimentally.
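For reference, the classical statement (Cybenko 1989 / Hornik 1991) is an existence result for one-hidden-layer networks approximating a continuous f on a compact set K, roughly:

```latex
\forall \varepsilon > 0 \;\; \exists N,\ \{v_i, w_i, b_i\}_{i=1}^{N}:\quad
\sup_{x \in K}\ \Bigl|\, f(x) - \sum_{i=1}^{N} v_i\,\sigma\!\left(w_i^{\top} x + b_i\right) \Bigr| < \varepsilon
```

Note it only guarantees that a sufficiently large approximator exists; it says nothing about whether training on finite data will find it.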
The universal approximation theorem does not apply once you include any realistic training algorithms / stochastic gradient descent. There isn't a learnability guarantee.
You said it only depends on network size, I'm saying it more likely is impossible regardless of network size due to fundamental limits in training methods.
I don't think it's aiming to be a "full review". When you take a driving test it really doesn't matter what stuff you are good at, it only matters if you're driving safely enough to be on the road. The article shows that FSD isn't worthy of a full review because it isn't safe enough to be on the road.
This article has the same feel as that old Tesla review by the journalist who was in bed with the oil/gas industry; Tesla provided logs of him driving around a parking lot at 1% charge after he tried to slander Tesla by claiming the review car "just ran out of battery suddenly".
> I’ve had a Model Y for more than 3 years now, well before Elon revealed himself as the kind of person he really is, and I’ve been happy with it.
> When I buy my next car, my requirements will be simple: I want an EV, an extensive charger network along I-80, and an autosteer that’s at least as good as what I have today. Let’s hope there’ll be decent Tesla alternatives by then.
Gee, I wonder if this is going to be an unbiased review of the technology.
Conversely, I'd worry about the impartiality of any Tesla review that didn't prominently disclaim that yes, they are aware Musk is acting like a loon recently.
You can’t make two million automobiles a year and two stage orbital rockets while “acting like a loon”.
Either Tesla and SpaceX are not congruent and coterminous with Elon, or he’s doing fine. Either way, the ad hominem has no place in the discussion of the works of the companies he runs. Both Tesla and SpaceX are kicking ass, so either you have to give him credit for it, or you have to stop bashing the company if you don’t.
> You can’t make two million automobiles a year and two stage orbital rockets while “acting like a loon”.
If it wasn't for the counter-example of Musk demonstrating that exact combination, I'd agree.
I don't know if all the stuff with, say, Twitter (usage as well as ownership), is him blowing off steam, or if he managed to build up leadership teams in Tesla and SpaceX that can just work around him, but somehow, somehow, he's managed to be exactly what SpaceX and Tesla needed to turn from a joke into a success… while also still being a lunatic with TBC and Twitter.
Even with Tesla, even though the company needed someone with his showmanship to get anywhere, the $420 privatisation thing, the way FSD turned from a fantastic surprise into a neck-millstone of overpromising and under-delivering are bizarre own-goals.
Would it be unfair to ask whether you like the current FSD and think it's on track, or whether you don't? Or, the third option, of not wanting FSD at all. I'm trying to work out what systematic bias you expect to see here.
The author of this article doesn't understand how continuous improvement in software works ...
Saying that it will "never" work is myopic.
Never bet against Moore's Law or Elon!
As a Tesla M3 owner (in Europe where we don't have FSD yet) I cannot wait to have it for long road trips on highways where I want to relax and have a "copilot" do some of the "thinking" & driving for me.
Yes, there are still some instances of FSD Supervised doing strange things in the YouTube videos people are posting, but it's definitely no longer a "demo"! It's real and it's only a matter of time before it's better than humans in most circumstances ...
Now, as a TSLA shareholder, I'm still sceptical that the RoboTaxi/CyberCab system will be 100% foolproof... But if they can prove through data that it's safer than 99.9% of human drivers on everyday journeys, and thus get regulatory approval, it will be game-changing! If Tesla can do 90% of short city rides at a fraction of the cost of Uber/Lyft, they have a real cash cow on the horizon!
"It's only a matter of time" means nothing. It was only a matter of time in Cesar's age before we got supercomputers and planes.
Tesla is near the bottom of the pack on self-driving tech, with no real hopes of going beyond the level 2 self-driving of FSD, which is objectively worse than just driving yourself. There are already multiple companies with either actual self-driving fleets (e.g. Waymo) or limited use cases of full autonomy with company liability (Mercedes), and many others with working prototypes for one or the other, or even beyond.
Tesla is living in their marketing bubble of "FSD robotaxis next year now", where they have been living since 2018 or earlier, while the rest of the pack barely even sees them in the rear view mirror.
You're so wrong. Do more research. Driving in a geofenced area is much, much simpler than what Tesla is attempting. Eventually they could do that as well without problems, even if they can't generalise.
Sure, but "we're trying something harder" doesn't mean anything. Waymo is succeeding at a well defined, useful form of driverless tech. They will probably be able to extend this, maybe slowly, to more and more cities.
Tesla is failing at everything they're promising, and not delivering anything useful along the way (FSD as it exists today is more dangerous and more demanding than regular driving).
By even bringing up Waymo you're comparing a warehouse to a train.
FSD has no limit. Recently it has been tested around in Europe and Asia, and the existing tech basically "just works" because sensorily and thought-wise it's made to function like the common meatbag between the seat and the wheel, just better.
No, making the skyscraper 100 meters higher isn't going to make the train less relevant even though the tracks aren't all built yet.
Again, FSD is just not doing anything that interesting. It only handles the easy scenarios, and punts to a driver for everything else, at a moment's notice. And it still constantly fails, per everyone who tries it, in even very simple situations. Look at what this article author found: that's just not something that happens with Waymo or Mercedes.
You're pretty much exactly 180 degrees wrong on that one. Due to the deep-NN-based approach, FSD can react to the minutest of changes, like seeing ahead of the car in front, or even hand signals from pedestrians, to decide how to drive.
Again, even mentioning Waymo here proves you don't have the slightest idea of what you're talking about. Might as well compare FSD to trains, trains also "steer themselves" lol
The hardest parts of self driving are related to interacting with others on the road - other cars, pedestrians, cyclists, animals, debris etc. Waymo has mostly solved these problems, in a real environment, with only some simple limitations in top speed (and a major limitation for climate - they don't work in heavy rain, fog, snow, etc). Sure, the environment is fully mapped out, so it's not easy to scale their solution. But they've solved an extremely difficult problem extremely well, with some well defined constraints.
FSD has no defined constraints, sure. But it is FAR from solving the problems that Waymo has solved, and far from solving the problems that the CEO has promised to solve. If you hacked a Tesla to run with FSD without a driver in the car and let it roam in real traffic, even in Phoenix, it will crash within the first 100km. It's very, very far from being able to safely work without human intervention.
And NNs are inherently very noisy, so they are not a promising approach for a safety critical system. Combining that with pure camera vision, which is very limited even in animals that have literally had billions of years of training, is a fool's errand. FSD as it exists right now does not work, and will never work. Time will show this: Tesla will either give FSD up, or will switch to camera+LIDAR.
"Trains have solved the problems FSD is struggling with. They deal with junctions with automated signposts, and have a well established architecture of also avoiding train-to-train-crashes."
Your example just demonstrates my point. Waymo is not comparable because their "solution" is a dead end, while FSD basically just works when demonstrated in Europe and Asia, even though it likely has 0.1% or less of the kilometers driven there compared to the US.
>it will crash
Just not true. Even ignoring that FSD can often go way longer without an intervention, remember that "interventions" are extremely subjective and are just done whenever the driver becomes uncomfortable for one reason or another. Many times the intervention probably makes the car less safe in the situation, and the driver does it simply because they had a different idea of how to fit a gap and were surprised by the car's plan. But the intervention logic is there to cover a ton of angles, including legal considerations, bug reporting, and "feeling" safe rather than purely _being_ safe.
>And NNs are inherently very noisy, so they are not a promising approach for a safety critical system
We're just replacing one NN with another one that is better. Replacing our transit system with something more mechanical, akin to automatic train signaling systems (and getting rid of roads and cars!), isn't really in the cards, so blaming NNs is a moot point. Talking about LIDAR shows that you don't actually understand the history of FSD-like systems. LIDAR is easier for a beginner dev operation to start with, but it won't scale and is a fool's errand given we already have the bloody reference of the previous control system working purely with vision!
FSD is the long term future and without comparison. Elon having been over optimistic about the timeline doesn't make it any less world-changing.
You do realise that waymo doesn't have drivers in their cars anymore? How about you do research on Tesla's competitors? Having a driver intervene is several orders of magnitude less advanced. Easily more than a decade away from the goal of self driving.
You can build a geofenced area that spans every single road in the USA so the fact that something is geofenced tells you absolutely nothing.
Are you _trying_ to make people not take you seriously? No one has the kind of tech Tesla unveiled back at Autonomy Day, which is still state of the art technology-wise. 100 W for that much NN compute was a genius move, both for getting the talent in to build it and for getting there early. Now that they've had more time to develop the tech, FSD is looking better than ever, and in a few years Tesla will be licensing this around for a pretty penny. Eventually governments might even mandate it, considering how comparatively dangerous human drivers are.
No, I'm looking at actual results, not the tech stack. They are the only company that is only attempting Level 2, all the others are targeting or have achieved Level 3, or above. And Tesla isn't even doing Level 2 well, from the limited data that we have (since they of course refuse to release more). Look at any video about FSD on the internet and you'll see very frequent, inexplicable errors, in common simple scenarios.
A train can also run entirely automated, but it is as valid a competitor to FSD as anything you have mentioned, because FSD is general and works like a human: video in, driving commands out.
Trains most certainly do not run automated, not on any long-distance route anyway (there are some city trains/trams that do). And again, I'm comparing results. Waymo works autonomously quite well, even though it's geolocked to a city. It's going to be far easier for Waymo to take their existing mastery of traffic and extend it to less well-mapped areas than it is for Tesla to take their lack of mastery of any aspect of self-driving and extend it to actually working.
What exactly do you think geolocking does to simplify the problem of self-driving so much?
Your example about trains is actually incorrect, not just in that most trains are "basically" automated, but in that some long-distance ones actually have been for a long time.
> It's going to be far easier for Waymo to take their existing mastery of traffic
I'm going to stop you right there. The whole comparison is theoretical, and I just realized you don't even understand that they are in entirely different markets, yet you're trying to make a concrete point. FSD is for customers but Waymo isn't, man. Good day.
The implication is that Tesla can't make it work, in the same way that Meta is unlikely to make AR glasses a "thing".
> As a Tesla M3 owner (in Europe where we don't have FSD yet) I cannot wait to have it for long road trips on highways where I want to relax and have a "copilot" do some of the "thinking" & driving for me.
Depending on which country you are in, you'll be on the hook for any of its mistakes.
> It's real and it's only a matter of time before it's better than humans in most circumstances ...
This is the fallacy of statistics. It probably performs better than, or similarly to, humans in 80% of cases. The problem is that it performs significantly worse when it fails. It does not fail safe. That's the really hard part.
I also question the wisdom of training on unverified recordings of average drivers. Unless you know how good they are, you're going to be feeding really bad behaviours into the model.
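A toy illustration of that fallacy, with invented numbers: suppose the system matches a human in 99% of situations but does ten times the damage in the 1% where it fails badly. Then

```latex
\mathbb{E}[\text{harm}_{\text{system}}] = 0.99\,h + 0.01\,(10h) = 1.09\,h \;>\; h = \mathbb{E}[\text{harm}_{\text{human}}]
```

so average-case parity isn't enough when the tail behaviour is worse.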
If people are already using FSD (supervised) on a daily basis, it's working.
It's not "Level 5" yet, but they don't claim it is.
Totally agree that the "Full Self Driving" name is marketing BS. What should they call it instead? "Kinda Works Autonomous Driving, Please Pay Attention" ... ;-)
I really hope this will not be available in Europe for a long time.
I don't want to be part of Musk's irresponsible beta-testing plan as someone who lives around Tesla cars.
I also really hope that, before this kind of experiment is started, there will be a point-based licence for these systems, similar to a human driver's licence. If the system loses its points, the whole thing has to be disabled and go through an audit/licensing process before it's allowed on public roads again.
"With FSD, you still need to pay attention but now there’s the additional cognitive load to monitor an unpredictable system over which you don’t have direct control. Forget about just being focused, you need to be hyper-focused"
Driving with FSD on seems to be more stressful than just driving.
Driving with FSD in mild to moderate traffic is more stressful; I usually drop from FSD to AP.
Driving with FSD on empty roads is very enjoyable and whenever it decides to do something you can recover/correct with no stress.
Driving in poor conditions with FSD feels safer, it sees better than me in the dark and rain for sure.
Driving with FSD in heavy traffic is about the same as driving the car yourself; this is where I usually drop back to AP.
Is driving my Tesla easier or harder than another vehicle? It's much, much easier than driving my ICE SUV. But from a pure FSD perspective, if the car didn't have AP, I believe my opinion would not be the same.
I remember using Text OCR and Voice recognition in the 90's. It was like 90% there, it seemed like we were just a few years away from it working perfectly.
Almost 30 years later, OCR and voice recognition, now backed by machine learning, is far more impressive than it was back then, but it still keeps making mistakes that I have to fix. And those are far, far easier problems than driving.
I wonder if the rules of continuous improvement apply to ML models. It seems that regressions are very easy to introduce, and that with a complex task like FSD it is virtually impossible to use the established tools to safeguard the stack against regressions (i.e. test cases that assert some unbroken functionality).
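For what it's worth, the closest analogue to a regression test I can picture is a fixed scenario library replayed against every new model, something like the hypothetical sketch below; the obvious limitation is that a fixed library never covers the open-ended long tail, which is exactly the part that matters:

```python
# Hypothetical scenario-replay regression gate; the model/scenario interface is invented.
def evaluate(model, scenarios):
    """Return the fraction of recorded scenarios the model completes without a safety flag."""
    passed = sum(1 for s in scenarios if model.drive(s).safe)
    return passed / len(scenarios)

def test_no_regression(candidate_model, baseline_model, scenario_library):
    baseline = evaluate(baseline_model, scenario_library)
    candidate = evaluate(candidate_model, scenario_library)
    # Passing says the new model is no worse on known cases; it says nothing about unseen ones.
    assert candidate >= baseline - 0.001
```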
> In December 2015, Musk predicted that "complete autonomy" would be implemented by 2018. At the end of 2016, Tesla expected to demonstrate full autonomy by the end of 2017, and in April 2017, Musk predicted that in around two years, drivers would be able to sleep in their vehicle while it drives itself. In 2018 Tesla revised the date to demonstrate full autonomy to be by the end of 2019.
And this goes on for a bit longer up to the present day.
After close to a decade of failed promise again and again and again I think we can safely dismiss the "perhaps a bit too optimistic" good-faith defence.
But hey, maybe this time it's just around the corner, cross my heart hope to die. And maybe the rapture will be upon us next year like that guy is preaching on the street corner (for real this time – last chance to repent sinners!)
> The author of this article doesn't understand how continuous improvement in software works
And I don't think that you understand it either. In the real world, all exponentials are just the early part of an s-curve. Everything has limits, reaches diminishing returns and tops out.
If a technique is already topping out, there won't be great leaps in performance without fundamental changes. An incremental "update" won't do it at that point in the curve.
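The logistic curve makes the point compactly: the early portion is indistinguishable from an exponential, but the whole thing saturates at a ceiling L.

```latex
f(t) = \frac{L}{1 + e^{-k (t - t_0)}},\qquad
f(t) \approx L\, e^{\,k (t - t_0)} \ \text{for } t \ll t_0,\qquad
\lim_{t \to \infty} f(t) = L
```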
You can pick examples from several hyped technologies today that are going to work properly and change the world "real soon now".