Tesla pulled its latest ‘Full Self Driving’ beta after testers complained (theverge.com)
63 points by akerl_ on Oct 24, 2021 | 154 comments



Of course they did. There's only so much you can do with "feed more data in the machine learning pot, stir it, and see what comes out." We're doing a good job of proving what the limits of ML based self driving are...

Computers aren't human, and as it turns out, human brains and visual processing systems are more than the "couple crappy cameras and a bit of a neural network" thing that a lot of SV types seem to believe they are.

We're firmly into the "fusion" realm of self driving cars. A lot of money and a lot of time, effort, developers, etc, have been fed into the maw, and we've tried all the easy options - which it's clear don't work. I'll take the pessimal position of saying that the current approach Tesla is using (vision only) can't work, though I'm sure many will disagree with me. It comes down to a difference of opinion about the complexity and capabilities of the human brain.

A sensor-heavy setup like Waymo is using has the greatest chance of working, though probably only in heavily-mapped areas where it can use the sort of peripheral data humans use to identify the lanes by building locations and such when the lanes are unclear. As long as not much changes, which probably requires very regular remapping of regions (or an awful lot of data upload from cars), that's the most feasible approach, though it's not a general purpose solution given the amount of data required to do high precision mapping of the entire country, much less the world.

We'll see, but "a couple cameras and a car-scale computer" just doesn't seem like a solution that's going to work.


> Computers aren't human, and as it turns out, human brains and visual processing systems are more than the "couple crappy cameras and a bit of a neural network" thing that a lot of SV types seem to believe they are.

Honestly, I think it's something approaching a general rule that most things are far harder than "SV types" believe they are. Maybe it has something to do with all the startups.


My opinion:

How to solve travel for cars: build roads

How to solve self-driving for cars: build digitally aware roads

You can’t make self-driving work with a pure black box model. Neural nets need to be able to explain why they work. Currently they can’t do that well enough.


How will those roads be digitally "aware" of stalled vehicles, pedestrians, animals, fallen tree branches, traffic cones, etc? Lane keeping is one of the easier problems. It's detecting and classifying all the other obstacles where AI really struggles.


No one ever talks about snow.

Human drivers here have no problem navigating snow-covered roads, even when the road, the side of the road, and everything else is uniform white. When there are no visible lines on the road, when there is no tarmac to see, when the ditches on the side of the road are filled with snow.

Human drivers can tell where to stop at stop lights, without lines or other markers. Subtle clues abound, but I cannot believe current tech can navigate here.

At least dirt roads are generally brown and mostly vegetation free, although I wonder how even this would be handled in desert like areas, where it is flat, and road and surrounding land are dirt.


Let's not pretend that humans are great at snow. Lanes become imaginary in snow, and best case the two-track is roughly similar to the actual lane. The power of humans in snow is understanding that rules are more like guidelines, and that bending or breaking some is necessary at times.


They’re incredible compared to current vehicle driving assists which won’t operate at all when the road is snow covered.

They’re also pretty great considering the millions of miles traveled on such roads each year.


> How to solve self-driving for cars: build digitally aware roads

Who is going to pay for it? Are you going to rebuild all the roads? If not, then what is the point, since your system needs to work on all kinds of roads?


You don't need to rebuild the roads. This is a dead simple idea. You need sensors every x distance, and then use WiFi/mobile data to update cars on the model they should follow or the rules of the road.

At this point though, you’ve started creating mass transit.


All good questions. IMO the government; a cost-benefit analysis needs to be conducted.

My perspective is inherently Dutch. I think the roads here get replaced quite often anyway.


While I'm not sure of the value, I agree: a change at replacement time is essentially cost-free.

Paving while laying RFID tag strips would cost nothing per mile, compared to the labour of pulling up and resurfacing.


> How to solve self-driving for cars: build digitally aware roads

I can just imagine the scenes when someone hacks the digitally aware road to tell cars X and chaos (and potentially death) ensues.

IMHO, self-driving cars are solving a wrong problem, and if they work fully someday, might exacerbate a few others (congestion, pollution, parking).

Public transit, with diverse "last mile" options (buses, bikes, scooters, heck, automated shuttles/pods on dedicated lanes, etc.) is just a better way of achieving the same goal.


You’ll love the art experiments which stall Teslas by pouring a circle of sand around them.

If someone decides to prank a desolate stretch of road with tens of stop signs every 5 meters, you are going to have some very broken self-driving cars.

Plus all driving models are cultural driving models. A Tesla will break down on Indian roads.

Self driving cars will need data from sources other than their models. At which point you are creating mass transit with extra steps.


> If someone decides to prank a desolate stretch of road with tens of stop signs every 5 meters, you are going to have some very broken self-driving cars.

Then some pranksters would end up with big fines or go to jail? Just like if they drop water balloons onto cars on the freeway from a bridge?


Sure, but that's irrelevant - whether gangs hunt autonomous cars to kidnap their owners, or people play simple pranks - the point being that humans know how to handle those situations and either avoid or ignore them.

No amount of driving focused ML/AI is going to help with that.


I don't think those are very real concerns. If you have roaming gangs kidnapping car owners you have other problems to deal with. But either way, it seems like the naysayers here for some reason just don't want self-driving cars to happen, and are finding any and all excuses for why they will never work. I'm saying that if there's a will there's a way. We've made way larger sacrifices for other modes of transportation in the past. I'm guessing these are the same people that would have claimed that cars are impossible because nobody would want to build tens of thousands of gas stations to accommodate cars etc. Compared to the changes we've made for cars, even designing our entire cities around them, the changes needed for self driving are minuscule. The benefits are maybe not the same but it's certainly the biggest revolution in transport since the airplane.


You don't have to tell the car what to do, you just need signals that designate the road as being "compliant and safe" for self driving.


Yeah, and when someone changes the signals to raise the speed limit and remove the stop, no-priority, and curvy-road-ahead signs, and the car assumes all is fine and safe, it will be a fun time.


My opinion: replace all cars with self-driving only; then you completely avoid the hard problem of interpreting other human drivers. And you can also let the cars communicate with each other, be centrally controlled, and/or have assistance in the roads. Why would anyone want to have a non-self-driving car? Maybe hard/unrealistic, but it would be super cool to see this as an experiment in a smaller city center to see what happens. It would bring so much opportunity for optimisations around traffic flow, parking spaces, etc.


I'm pretty sure the videos that got Tesla to pull the beta were of Teslas almost hitting obstacles/pedestrians, not other cars. We also shouldn't pretend that pedestrians don't exist, or are second-class to cars. IMO pedestrians are first-class and cars second.


And put cyclist and pedestrian lives at risk.


Replace them with robots too, problem solved.


> parking spaces

If all cars are self-driving you don't really need any parking spaces at all, do you?


I'd still want my car, and I wouldn't want it just driving in circles for hours. Fewer spots could be needed in downtown areas, since a car can drop you off and park a few miles away. Car ownership isn't going to disappear with self-driving; it may decline, but plenty of people have use cases that a taxi model isn't great for.


Yeah, that's my point! You could calculate the number of cars needed and have them in circulation, and they could park themselves in some underground garage somewhere when usage is low. It could be just one big fleet of self-driving taxis - self-driving Ubers.


> How to solve self-driving for cars: build digitaly aware roads

Putting aside the physical aspects of replacing existing roads, how would a digitally-aware road work? A bunch of bluetooth devices? And in what schema would the road 'speak' to the car? Ideally it would be an open format, but who would decide upon that?


That's something I haven't thought about. I hand-waved it by pretending that people would be just as motivated to do this as they are with self-driving cars.

It is an interesting question though! I can think of a few things that could be done quite easily:

Communication can happen at low latency with relatively little data. I wouldn't know what's best for that.

* Roads can broadcast. Roads can listen through sensors and determine the position of every car.

* Cars can broadcast. Cars can listen.

* Things can go wrong. There'd be huge demand for any distributed system engineer :P

A use-case sketch:

A road can determine if a car almost goes out of bounds. A road can determine if two cars are about to collide, but so could two cars.

So this would work well in places where there are only roads and cars (e.g. highways and "road only" type of roads -- i.e. no bicycle lanes).

Cyclists can communicate as well. They need to chip their bike.

Pedestrians can communicate as well. They need to chip their coat. Though I'm not happy with that solution (what if the coat gets lost?).

Anyways that's my rough sketch. I'd need to research the technologies needed for this.
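
To make that sketch a bit more concrete, here's a minimal Python sketch of what a road-to-car broadcast message might look like. Every field name is hypothetical, purely to illustrate the shape of the idea:

    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class RoadBroadcast:
        # Hypothetical message a roadside sensor station broadcasts to nearby cars.
        station_id: str          # which stretch of road is speaking
        timestamp: float         # when the observation was made
        speed_limit_kmh: int     # current (possibly dynamic) speed limit
        hazards: list            # e.g. ["stalled_vehicle", "cyclist_ahead"]
        vehicle_positions: list  # (lane, offset_m) pairs for cars the road can see

    def is_fresh(msg: RoadBroadcast, max_age_s: float = 0.5) -> bool:
        # A car should sanity-check messages before trusting them,
        # e.g. discard anything stale (or unsigned, in a real system).
        return (time.time() - msg.timestamp) <= max_age_s

    msg = RoadBroadcast("A2-km14", time.time(), 100,
                        ["stalled_vehicle"], [(1, 42.0), (2, 57.5)])
    print(is_fresh(msg), json.dumps(asdict(msg)))

Authentication and replay protection would be the hard part, which is exactly the "things can go wrong" bullet above.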

This would make for a fun systems design interview.

Whoever wants to brainstorm with me on it, feel free to email me. I don't think I'm particularly talented in this area but I can be a rubber ducky.


I liked the idea of simply using mobile data services to update cars from a network. Keep a series of cat's eyes or pressure monitors on regular tarmac roads, have a collector station every few meters, and use that to relay information.

In areas without network coverage you don't use self-driving tech.

Driving data is cultural data; it's not some natural phenomenon which is clean and periodic. If a truck transporting cows breaks down and there's a herd of cattle running all over the highway, I seriously doubt the models are going to handle that situation perfectly.

Level 5 self driving is impossible, but with road data and external coordination, we can achieve the same effect.


We have wild deer crossing the roads here. Will you ask them to chip their antlers?


How else will they get groceries?


It’s funny how unpopular this argument was. There is no way level 5 self driving works, unless you are aware of anomalies beyond what the models are prepared for.


> How to solve self-driving for cars: build digitally aware roads

This is going to happen. It will be proprietary. Think roads with DRM.


> the current approach Tesla is using (vision only) can't work

Of course it can. We do it. I'd phrase it myself as:

The current approach Tesla is using (ML techniques available now, without major, new breakthroughs) can't work.


It really comes down to what you think of the human brain.

I don't think computers will ever come anywhere close to the human brain in terms of practical understanding and problem solving. Machine learning involves "Give a computer an awful lot of existing labeled data sets and try to get it to figure out, via un-debuggable magic, what they are." The human brain can figure out novel things without having to have had the huge dataset of previous examples, and I'll point to literally any small child as an example (I have two, and one solid rule of parenting is "They're more creative than you would ever expect").

The next question is then one of power efficiency, and there, again, even if we could do something similar to the human brain (which, see above, I don't think is at all likely), can we scale the required bits down to a car's power budget? A 10kW server rack in the trunk isn't really feasible.

You seem to believe that anything humans can do, we can eventually design a computer to do. And I disagree entirely.


And even with all of that, humans still get confused while driving. Think about the last time you came upon a construction zone with unclear lane markings, or a traffic pattern change in an area where you drive frequently, almost unconsciously.


Absolutely, which is why it should be obvious that self driving is impossible.

If I stuck STOP signs every 5 meters, a human would realize that they can be ignored and it's a prank. Unless autonomous cars are trained for precisely this kind of malice, they will fail.

If someone decides to hunt autonomous cars, there really is nothing in their models which will save them. They aren't designed for predator/prey dynamics.

When you open up technology to the world, you open it up to the best and worst of humanity. There is no way fully autonomous cars survive.


I can lay a spike strip across 8 lanes of highway traffic whenever I want. It's not like it takes self-driving cars to allow people to be assholes and break the law. It's why we have laws.


Suddenly you don't need a spike strip and physical damage and consequences though, just some cardboard and textas to make stop signs.

The bar for malice will be significantly lower, and it's the car that will be at fault, not the person with the stop sign.

Can I walk around with a stop sign on a pole? Why not? It's not an official street sign. As far as I know, we don't have laws against carrying around homemade stop signs, because any competent driver is able to identify that they aren't official.

Maybe I should get a backpack with a stop sign on it, for when I'm riding the bike.


I mean, you can find fault with different aspects of the example all you like, as long as we agree that AI will fall for tricks and pranks that normal people won’t.

Driving data is inherently culturally biased toward the most frequent scenarios. But this is a data set with many outliers.


A few days ago I drove through an intersection but didn't turn as usual; I was confused by old lane markings that were left there from quite a while ago. Our road environment is quite far from clean.


No, not at all. OP was implying (maybe?, could have read that wrong) that lidar was what was holding Tesla back. I was just commenting that it's obviously _possible_ without lidar because we do it every day without spinning lasers on our heads. Will it ever actually happen? I have no idea at all. Maybe not. But it won't be for lack of lidar. It will be for the lack of everything you just mentioned.


Please read "Language Models are Few-Shot Learners" (https://arxiv.org/abs/2005.14165); it's the paper behind GPT-3.

Large transformers display few-shot learning similar to children. It's a major advancement; machine learning is much more than supervised learning nowadays.


The optic nerves are really a direct part of the human brain. It's a mistake to think of them as separate organs.


Let's extend the time scale a bit. So you really think that in 10,000 years we can't figure out how to build a computer that rivals the human brain?

Assuming that we haven't destroyed ourselves and, of course, don't enter a dark age for a large percentage of that time.


> So you really think that in 10,000 years we can't figure out how to build a computer that rivals the human brain?

There might be hard limits we'll never break through. All the experts will tell you is "we know that we don't know", because when it comes to the brain we don't know shit.

There are many theories out there and materialism is only one of them. People deify tech and underestimate the brain; I'd advise doing the opposite.


Okay, but excluding the possibility that the human brain runs on pixie dust...


We might simply not be able to comprehend our own brain to a level that lets us emulate it. The toys we're playing with today aren't even remotely close to emulating any kind of intelligent animal; the best we can (almost but not even fully) do is a worm with 300 neurons [0] (vs 80+ billion for the human brain).

Technocrats told us we'd have AGI and flying cars by now; I'd stay cautious about people claiming reaching AGI is a question of time alone.

[0] https://openworm.org


OpenWorm has been stalled for like half a decade now too.

That said, there's a huge difference between "the brain may be too hard for us" and "the brain may not even be physical." That's about on the same scale as the difference between "the pyramids were built with advanced mechanical tools" and "the pyramids were built by aliens."


Cameras are much worse sensors than human eyes, and the brain has specialized parts to handle vision, mapping, planning, etc. It's not just an algorithmic issue.


I always find it odd that Hacker News has such pessimism about technology advancements. It might not work now, but give it another 5 to 10 years and this may very well become reality. The pace of advancement in ML/AI is incredible.


I'll bet right now that level 5 self-driving will never come, whatever the pace of technological advancement.

If it can work, then a Tesla trained in SF would work on streets in India or Vietnam. But it can't, since driving data is cultural data and self-driving narratives lie in a cultural blind spot.

Advanced economies have relatively more stable traffic and rule of law, which influences their driving.

If those disappear, old driving data is not going to apply.


When you look at history and tech, there is seemingly only one way: progress. The thing is, that's not the case, and before you can deem something to be progress or not, it is usually way too late (gas, asbestos, lead in gas, lead in paint, freon in fridges).

So what makes you dream now might be completely detrimental in the future. Not everyone thinks autonomous personal cars are a step forward, I personally think it's a step in the wrong direction.

Everybody asks "how" but nobody asks "why". "It's just progress, why don't you like it?!"


> It might not work now, but give it another 5 to 10 years

Yet Tesla sells it today as working


AGI has been 'just around the corner' since, IIRC, roughly the 1980s. One thing hackers are not is credulous. Sales & marketing is for optimists.


Maybe... but probably not. Care to set up a $10 wager for 10 years from now for agreed-on terms about self driving?

I'm pretty pessimistic and jaded about computer technology. I've been in the field too long, and done too much computer security.

(1) Whatever guarantees you think the hardware is supposed to be making, it isn't. The hardware is now complex enough that just about any security boundary can be violated with enough creativity, given enough fancy features trying to keep performance marching forward after hitting a process wall. Rowhammer. Intel's... everything. The various branch predictor things. Etc.

(2) Modern software, as a general rule, is dependency hell. You suck in hundreds of megabytes of code from God knows who, all of which is sucking in yet more, to do something. Nobody has any idea about all the code their project is sucking in, unless they've written it themselves. So, by that definition, nobody can actually understand their code, because some corner case, 15 layers down, can break something in absolutely unexpected and obscure ways.

(3) Murphy is an ass, and that corner case will happen, when you least expect it and are least able to debug it.

(4) Those piles of complexity, on top of piles of complexity, stacked on top of the previous complexity that exists to solve problems caused by the previous complexity, which was created to get around performance problems due to... etc, just go all the way down. It's all technical debt, and it leads to diminishing returns on investment, or, as I'm beginning to think, negative returns on investment. The more we try, the worse we make things. I'll point to Apple here: despite knowing that messaging is the literal worst case for untrusted remote input, and despite having built explicit systems to sandbox it, firewall it, safely parse it, etc... it's still been, very recently, vulnerable to zero-click, zero-user-awareness remote exploits that give the attacker root-level privileges on the phone to do whatever they want and exfil whatever they want. Yay.

(5) The "culture" around modern software development is absolutely terrifying when it comes to anything safety critical. Fine, ship whatever when it comes to the latest addictive social media app, and if it crashes, welp. But when it comes to things in the physical world, that's absolutely not acceptable. When Tesla OTA'd updates to the brake firmware of CR's Model 3 after it utterly sucked in the braking tests, and braking got better, everyone was so excited about how, see, OTAs can fix issues! Very few people asked how on earth Tesla had shipped defective braking firmware in the first place, if it was a quick fix. Personally, I'd like my brake system firmware well tested before release, and absolutely "hard" in ROM, so that it can't be updated. Look at the papers from a while back where the security people were able to remotely compromise a Jeep's braking system through remote cellular connectivity and the radio. That's just not OK in the slightest!

(6) As Tesla continues to demonstrate, the current way that ML/AI is done is absolutely rubbish for any sort of safety critical system, which automotive control systems are. "Lol, whoops, yeah... hey, thanks for reporting that your car can't take that turn that it used to take just fine!" is the sort of stuff that's funny, until it's tragic. Tesla's software practices are very clearly unsuited to anything remotely resembling the sort of safety critical systems they think they're developing.

That's all before you get into the physical limits we're approaching, the economic cost of modern fabs, and the fact that something like 40% of the world's high end silicon production is in a country that China very much appears to be planning to take back.

There's an awful lot of money chasing the hope, but I'm (obviously) very skeptical that much of anything useful will come out of it, other than making a bunch of people rich in the process of saying "Well, we're still 3-5 years out on this technology and I intend to retire before you can really call me out on the fact that in 3-5 years, it's still going to be 3-5 years out."

I work in the low level weeds, and I see what an utter disaster modern tech is. I'm increasingly using less and less of the stuff in my personal life, and my life is better for it.


Engineers tend to be good at enumerating failure conditions.


I think people are pessimistic because technology can still advance without being a public beta that puts others' lives at risk.


I'm more bullish on Tesla's approach, and it seems like many of you are misunderstanding the approach, or the technology.

First of all, it's not a "pure ML approach" as in end-to-end. It's separated into perception and planning. As you can see in the UI now, the perception part is already quite good and gradually getting better. Adding lidar at this point wouldn't help much. What I do think will happen is that they will continue with the philosophy that the car should be able to drive without HD maps in most places (as if driving the route for the first time), but that they will augment the planner with low-res maps generated from other drivers in the hardest spots (e.g. weird intersections that humans also struggle with). They already have the capability to create these maps as seen during AI day, it's just a matter of admitting they are needed in highly complex areas.

The second misconception is that the planner is somehow making instinctual decisions based on a simple neural network. Their approach is similar to AlphaZero, which includes Monte Carlo simulations with deep neural networks to predict paths of everyone on the road. I see no reason why AlphaZero cannot eventually beat humans at predicting traffic as well as it can predict moves in Go and chess.
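
To illustrate the Monte Carlo part of that idea (emphatically not Tesla's actual planner; this is a 1-D toy with made-up dynamics): sample many plausible futures for another road user and measure how often they conflict with your own plan.

    import random

    def rollout(position, velocity, steps=10, dt=0.1):
        # Sample one plausible future trajectory for an agent with noisy dynamics.
        traj = []
        for _ in range(steps):
            velocity += random.gauss(0, 0.2)   # uncertainty about what the agent does
            position += velocity * dt
            traj.append(position)
        return traj

    def collision_probability(ego_plan, other_pos, other_vel,
                              n_samples=1000, threshold=2.0):
        # Monte Carlo estimate: how often does a sampled future of the other
        # car come within `threshold` meters of our planned positions?
        hits = 0
        for _ in range(n_samples):
            other = rollout(other_pos, other_vel, steps=len(ego_plan))
            if any(abs(e - o) < threshold for e, o in zip(ego_plan, other)):
                hits += 1
        return hits / n_samples

    ego_plan = [i * 1.5 for i in range(10)]  # our intended positions along the lane
    print(collision_probability(ego_plan, other_pos=5.0, other_vel=1.0))

A real planner searches over its own candidate actions as well (the AlphaZero-like part); this only shows the sampling side.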

A third misconception I see is that there are so many disengagements, so it's clearly not safe. And that's true, it's not safe yet, but Tesla's strength is that they can have thousands of enthusiastic beta testers willing to drive around while paying careful attention, often intentionally seeking out difficult edge-cases. As the number of disengagements and the severity of them goes down over time they can roll out to even more users. Eventually they'll be safer than humans on average. I think that is likely to happen in a few years based on their rate of progress. What happens at that point is completely up to politics and human psychology.

In summary, if you actually go through and watch a couple of videos, I challenge you to find examples of disengagements that are complete show stoppers that can't be solved within 2-3 years. Also, go check out videos of earlier version so you can get a sense of the rate of improvement, which really is the only thing that matters (and it's massive). You can argue that they will plateau somewhere below human level, and that's certainly a possibility, but I wouldn't bet on it.


I'm not bullish on Tesla's or Waymo's approach. A Tesla may be able to predict other drivers, but how can the planner handle a situation like, e.g., a cop directing traffic in a particular way, or a complicated/non-standard sign that hasn't been seen before? I know these are edge cases, but even one disengagement is a big fail if your car doesn't have a steering wheel (Elon's hope).


Fair concerns; here's my take: there will be edge cases for a long time to come, so there needs to be a way to handle them. The car needs to detect these out-of-distribution events and hand over control to a person. Now, if we require a driver with a license to be behind the wheel ready to take over in 100ms, this will obviously not be worth the trouble.

The first part of the solution, I think, is to make the car fail gracefully in unknown situations, e.g. slow down curbside. This would be dangerous on the freeway, but freeway driving is also the simplest part of driving.

The second part is allowing human control of the car without "driving it". For example, if there is a police officer directing traffic, the car can alert the occupant and ask what it should do. You could even "paint" out the desired path to take on the screen. Basically you're just telling the car where to go, not controlling the steering wheel, gas, and brake directly.

I'm sure you can conjure up cases where this will also not be enough, but at some point, when the tech is good enough and there are enough cars on the road, people will naturally adapt with self-driving cars in mind. If you're doing roadside work and you know there may be self-driving cars passing by, you'd do your best to make the signage and cones as clear as possible. Perhaps laws will be passed to make it illegal to try to fool a self-driving car, or to put up new signs without first giving car companies time to retrain their models. It's a small cost compared to making all roads digitally adapted to cars using sensors and what not. It's also a small cost compared to the 1.3 million people that die in accidents every year.


A work crew was repairing potholes on my street recently. They didn't even have stop signs, just used vague and confusing hand gestures to control traffic. Good luck getting those guys to use clear signage.

Sometimes it seems like software developers are just blind to what happens in the real world and expect others to conform to their idealistic notions. Not going to happen.


You could say the same thing for cars when they were new: why would people ever adapt to not "jaywalking"? They're used to just crossing the street when they feel like it, you think they would ever change?

It's simple: 1. habits change because people can foresee the consequences of their actions 2. regulation and laws

The benefit of cars outweighs the rights of pedestrians, just like the benefits of self-driving cars outweighs the rights of work crews to do whatever they feel like.


> feed more data in the machine learning pot, stir it, and see what comes out.

Very reductive and dismissive way of describing what they are doing.


Do you have a better explanation for the fact that each release seems to fix a few things that used to not work, and break a few things that used to work fine (like "not deciding there's an object right in front of the car when there isn't")?


Automotive EE here:

Agree, except you are missing / underselling something.

It’s the car sensors that are great, and the human with the crappy cameras.

Our eyes suck. Our attention spans suck. Our reaction times suck. To paraphrase one of my favorite xkcds, “There is nothing in our evolution that could have prepared us for driving heavy boxes at high speeds”.

However… somehow, we excel at it. We drive in all conditions despite the crappy inputs and processing. The IDEA that we could be able to make computers do it is sound.

The execution is not! And I don’t think it will be for a long time.

We had self-driving vehicles 100 years ago; your horse would take you home no matter how drunk you were. No one wants their car to get spooked by a shadow or a floating bag though.

The REAL solution to this is that cars need to be trains with figurative tracks. There needs to be an infrastructure in place that reports to the vehicle. V2I (Vehicle to Infrastructure) and V2V.

Without that, this is all pointless, and I'm surprised how much time and power is being wasted on it.

If the big brains behind this were truly serious, it would be in place now for semis that do the same A-B-A route over and over and can slow down to a speed that would annoy a consumer car owner when the system isn't sure.

There are just not going to be enough sensors and machine learning to handle no road paint, other drivers, new trolley problems, blinding whiteouts, etc. There never will be.


> Our eyes suck. Our attention spans suck. Our reaction times suck. To paraphrase one of my favorite xkcds, “There is nothing in our evolution that could have prepared us for driving heavy boxes at high speeds”.

I disagree. Our brains have massive plasticity, even into adulthood, and can quite literally rewire themselves for how they're being used.

The first time you really get on the throttle on a sportbike, the effect is almost universally a "Woooooaoaaaaaaah!" sort of thing - the universe around you "goes plaid," and it feels like the hand of God has grabbed the horizon and yanked it to you. Hard. And then you get used to it, and people who ride powerful motorcycles or race them are perfectly capable of doing so for long periods of time, at high concentration. Go watch some clips of a motorcycle race, if you haven't. That's not born, that's learned.

Same goes for driving cars, for flying planes at Mach 2, for... really everything humans do. I wasn't born knowing how to use a keyboard, but I've been using one so long that I don't even think about it anymore.

The human brain is insanely adaptable, which is how it works well in all these cases.

And I'll also disagree that our eyes suck. The human visual processing system is damned good at handling a 3D world. It's got a few quirks, but overall? It's properly impressive.


Our eyes suck in some ways, but they still have better dynamic range than the cheap cameras used in Teslas. This is important in difficult lighting conditions with a mix of bright sunlight, glare, reflections, and shadows.


When we talk about cameras and sensors in this case, I'm talking about your forward-facing cameras, stereoscopic cameras, lidar, radar, etc.

Yes, our eyes have very good dynamic range, but we'll never have lidar or low-light vision.


Well Teslas don't have lidar or radar either.


Agreed. There is no way you can have full autonomy.

Driving data is cultural data, not some natural phenomenon. It's highly regulated and more orderly than nature. Drive in the developing world to know how true that is.

Plus driving behavior will adapt with the increase in autonomous cars. I wouldn’t be surprised if people purposely started shunting cars in self drive mode off the road, out of fun or spite.

There is no way full autonomy happens, unless you throw singularity level processing at it.


I'm not buying FSD. The non-FSD features in the car are way too unpolished. If Tesla can't get the easy stuff right, there is no hope for the more complex. Some issues I've noted:

1. Voice recognition fails to match the stated operation described in the manuals since 2019;

2. Cameras are unable to reliably distinguish traffic signals, namely red, yellow, and green, using 2018 camera hardware; and

3. It makes absolutely no sense for the battery to 'brick' or declare 'out-of-range' while regeneration is adding charge back to the battery -- which is how the Model S behaved in 2020. In other words, as the car nears '0 range' or low/no charge, it absolutely should not shut down while kinetic energy is being converted to charge as the car slows down.

Based on my 5 years of experience driving two Teslas, I can't imagine this 'FSD problem' being solved prior to 2026.


I don't think it's a Tesla problem, ultimately.

I look at it this way. Every day I plug in my Android phone to my car and use the navigation. I think it's the best option out there.

However, it's far from perfect. For instance, it frequently wants me to turn left on to a busy street from a side street with no traffic light. There are many other issues which aren't that big a deal because I'm in control. Sometimes it gets quite weird. One time my location was displaced like a block and a half, scrambling the directions.

Even while acknowledging I'm ignorant of 99.9% of the research and what's being done, there is a simple equation that I believe in - a nigh-on perfect navigation system must precede self-driving. If nobody knows how to walk, running isn't going to happen.

I don't have to know what people are doing in private to know that what they are doing in public isn't consistent with the aspirations.


> I don't think it's a Tesla problem, ultimately

Idk, I don't see any other companies claiming to have "full self-driving" at a time when that clearly doesn't exist and likely isn't possible right now


The only people claiming Tesla has full self driving are internet commenters who are mistakenly confusing things.

Tesla does not claim to have full self driving. They claim to be working on the technology and they have a beta of an early version of it. A beta of a thing is not the thing.


> Based on my 5 years of experience driving two Teslas, I can't imagine this 'FSD problem' being solved prior to 2026.

That's a very optimistic estimate. It's already been more than 5 years since Tesla declared FSD a solved problem AND started selling it.


To Tesla's credit, they are building a simulation environment, 'Dojo', which, theoretically, will modify driving environments to include varying visibility, temperatures, precipitation, jaywalkers, windshield occlusions (bird crap)... which was not in their toolkit 5 years ago.

5 years is a glacial age in a software development environment... so in 5 years I'll say it could happen (30-45% chance), with maybe a 10% chance earlier, and 45% in year 6 or later.


5 years isn't that long, software notwithstanding, when it comes to life-critical systems in the physical world.

5 years may seem like a glacial age if you're 25 years old. When you're 50 it seems like a month sometimes.


5 years with things like FSD seems to me like the timeframe from when it is actually ready to when it is allowed on roads after all safety checks and proofs.


> 3. It makes absolutely no sense for the battery to 'brick' or declare 'out-of-range' while regeneration is adding charge back to the battery

Wait, so how often are you in this situation? Like once every 1000 drives? Or am I missing something?


Happened [1] after 1250 (or so) drives. As seems to happen in these situations, I got very close to the supercharger. So close, that I convinced a passerby and a policeman to push my car the remaining 200 yards to the charger.

Not going to do that again... heh. I sold my S70D.

1. https://news.ycombinator.com/item?id=28629060


Well don't let your car run so low. Same thing applies to ICE engines.


Having a regression so bad that seemingly all users noticed it with the new update shouldn't be possible for this kind of technology. Whatever your level of faith in Tesla, and some of you have a lot, you have to admit that it is reckless to have a release process that fails to test internally enough that they would widely release dangerous behaviour that tons of users noticed immediately.


> bad regression

I wonder what their testing suite looks like...


We just found out


Also it's interesting that they now canary it on drivers with "high" safety rating. I wonder if that introduces bias into the data they collect and "bad" drivers and driving environments become "out of domain", in which case the system will suck with those drivers and in those environments. Glad I didn't pay $10K for this though. Super satisfied with the car otherwise.


Interesting.

How do they judge a driver's safety rating? Can a driver view their own safety rating?


I'm not sure how they do it exactly, but I imagine you could, as a first approximation, infer something like that from how hard the driver brakes and how reliably they keep distance from other cars, both of which Tesla's telemetry can and does collect. Tesla will beep at you if it thinks you're forgetting to brake. It also knows if you ran a red light or a stop sign, since it recognizes traffic lights and signs. It knows if you're speeding, too. And it knows if you're speeding near a school or in a construction area, as well. AI is a bit of a double edged sword, since if Tesla knows this, it's only a matter of time before this data makes its way into some sort of a "social score" metric, that will affect things like car and health insurance.
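
As a toy illustration of the kind of scoring that's possible from telemetry aggregates alone (the signals and weights below are invented, not Tesla's actual Safety Score formula):

    def safety_score(miles, hard_brakes, collision_warnings, pct_time_tailgating):
        # Toy driver-safety score in [0, 100]; every weight here is made up.
        per_1k = 1000.0 / max(miles, 1)
        penalty = (
            2.0 * hard_brakes * per_1k            # hard-braking events per 1000 miles
            + 5.0 * collision_warnings * per_1k   # collision warnings per 1000 miles
            + 0.5 * pct_time_tailgating           # % of driving time spent too close
        )
        return max(0.0, 100.0 - penalty)

    print(safety_score(miles=2500, hard_brakes=8, collision_warnings=2,
                       pct_time_tailgating=6.0))  # -> 86.6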



AI doesn't scale well. Problems get worse as you make your model bigger and more generalized. To make things worse, data, model architecture, precision, and hardware affect your model performance in ways that are hard or impossible to anticipate.

If you watch Tesla's AI presentation, https://www.youtube.com/watch?v=HUP6Z5voiS8, you will notice that they have multiple AIs stacked on each other, which IMO is a step back from a truly e2e multimodal AI system. So even with their custom fancy hardware, multimodal is too hard.

I wonder, wouldn't it be better to use geofencing (using H3) and have the car download a model depending on the zone where it is driving? And optimize multiple models based on "driver engagements"? This could fix the problem of zones where there are particularities in the driving, roads, or human activities, and allow model optimization to happen on a smaller vector space than the whole world. For example, why not have a model for US highways, LA, New Delhi, the UK, and so on?
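
A minimal sketch of that zone-to-model lookup, using the h3-py library (the cell IDs and model names below are placeholders, and this shows the v3 API; v4 renames geo_to_h3 to latlng_to_cell):

    import h3  # pip install h3

    # Hypothetical mapping from coarse H3 cells to regional model artifacts.
    ZONE_MODELS = {
        "87283082bffffff": "model_sf_urban_v12.onnx",  # placeholder cell IDs
        "871f1d489ffffff": "model_uk_roads_v7.onnx",
    }
    DEFAULT_MODEL = "model_global_fallback.onnx"

    def model_for_location(lat, lng, resolution=7):
        # Pick which regional driving model the car should download,
        # based on the coarse H3 cell it is currently in.
        cell = h3.geo_to_h3(lat, lng, resolution)
        return ZONE_MODELS.get(cell, DEFAULT_MODEL)

    print(model_for_location(37.7749, -122.4194))  # falls back unless the cell matches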

Tesla also knows where the cars are, and controls their expansion plans worldwide, which could inform a model prioritization roadmap.

In my mind, it would be easier to test, debug, label, optimize, and guarantee quality to users who, at the end of the day (without knowing exact statistics, I dare say), spend more than 70% of their time driving around the same county/city/area/town.


Yikes, after watching one of the videos in the thread, this actually looks dangerous to be using on roads. I'm not against public testing, but if someone gets killed, won't that be a huge setback? The video in this thread shows like 4 failures in 2 minutes: almost driving directly into a pole, almost removing the driver door of a parked vehicle, turning onto the wrong road, and missing an intersection turn. A month ago I saw similar videos of FSD turning towards pedestrians... I'm actually surprised things haven't been worse.


Correction: they do speed up some portions of the video, so "2 minutes" is inaccurate.


When do they start refunding people for this “feature” so many paid for but are not getting?


It does seem ridiculous that they're still selling "full-self driving" capability. How have they avoided legal trouble for false advertising?


Tesla and Elon are tech darlings, and nothing sticks to them. For now.


Prosecuting FSD claims, historical or present, seems like a slam-dunk case for the FTC…


I’ve never understood this either. I guess it’s technically under a “beta” label so it gets a pass for now. But I can’t imagine this continuing without consequence forever, unless Tesla actually manages to solve self-driving with a vision-only approach.


Maybe once there's a class action for it


Yeah, in all honesty I hope they figure it out. We have FSD on our Model 3, and I am really looking forward to it, but I'd be lying if I said I didn't feel a bit duped... especially since I found out it could be bought on subscription after we already paid for it.


You bought a car from PT Barnum.


Ha, yeah, I guess that might be.


Sorry to hear that. I do think they screwed a lot of people over with the marketing - I'm sure they will get their punishment at some point...


What will be the last straw for people to demand a refund for the "full self driving" that is still not delivered? Will it be when Tesla says they now need lidar after all?


At present, it's _not_ clear that lidar would solve their issues, because from the FSD beta footage that we've seen it doesn't look like the problem is with _perception_. Perception-wise, the car seems pretty solid, but its ability to shoehorn a given world state into something it knows how to react to seems shaky. We can see examples of this where the car gets confused at funky intersections (e.g., multiple sets of lines that look like crosswalks) or challenging left turns (it can resolve cars with enough fidelity it seems, but is missing a policy for turning into the median first).

If it were as simple as "their system would just work if there was lidar" I'd bet there'd be a lidar test fleet of FSD teslas that would train the lidar-blind cars in a sort of shadow-mode, similar to what was done for vision-only replacement of radar.


> from the FSD beta footage that we've seen it doesn't look like the problem is with _perception_. Perception-wise, the car seems pretty solid

Here is "full self driving" perceiving that it would be a great idea to drive directly into a pole:

https://www.youtube.com/watch?v=bbyNg9kYEq4&t=340s


James Douma explains why lidar is a waste of time for FSD: https://www.youtube.com/watch?v=urKMwhzivs8


His argument makes sense and I think he's completely right. However, his argument also includes, "If the depth prediction is arbitrarily close to lidar". So we have to ask ourselves a bunch of questions related to it. Can we get it close enough? How feasible, reasonable, and cheap is that? How can we tell if it's good enough? And I'm sure there's more.
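
One concrete way to ask "is it close enough?" is to benchmark predicted depth against sparse lidar ground truth using the standard monocular-depth metrics (AbsRel, RMSE, delta < 1.25). A sketch with synthetic data, just to show the mechanics:

    import numpy as np

    def depth_metrics(pred, lidar):
        # Compare predicted per-pixel depth to sparse lidar ground truth;
        # only pixels where lidar actually returned a point are evaluated.
        mask = lidar > 0                        # 0 = no lidar return
        p, g = pred[mask], lidar[mask]
        abs_rel = np.mean(np.abs(p - g) / g)    # absolute relative error
        rmse = np.sqrt(np.mean((p - g) ** 2))   # RMSE in meters
        delta1 = np.mean(np.maximum(p / g, g / p) < 1.25)  # fraction within 25%
        return abs_rel, rmse, delta1

    # Synthetic example: predictions within ~10% of truth, lidar on 5% of pixels.
    rng = np.random.default_rng(0)
    pred = rng.uniform(5, 50, (256, 512))
    truth = pred * rng.uniform(0.9, 1.1, pred.shape)
    lidar = np.where(rng.random(pred.shape) < 0.05, truth, 0.0)
    print(depth_metrics(pred, lidar))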


> Tesla CEO Elon Musk has himself even said that he believes the “feature complete” version of the software his company calls “Full Self-Driving” will, at best, only be “likely” to drive someone from their home to work without human intervention and will still require supervision.

Especially given this. Even once it works, “full self driving” won’t do fully autonomous driving?!


There's a big disconnect between people who do not own FSD and those who do.

Those that complain are 100% in the "don't own FSD" bucket. Prove me wrong. Find me one owner of FSD who is so dissatisfied that they demanded a refund.

Those who do pay for FSD use terms like "my mind is blown", "it's amazing". They do videos, evangelize the product, and wait until midnight on Friday (that's when Tesla releases updates) so that they can test it at 2 am.

You can make what you will out of this.

To me it shows that product is great already and will be very successful when finished.

The complainers that never used the product are noise.


> Those who do pay for FSD use terms like "my mind is blown", "it's amazing"

Absolutely. Then you tell them you heard the car does odd things like randomly coming to a full stop and they say "well, yeah it does and it sucks, but it will totally be perfect one day."

It's funny how people can change their attitudes towards something when they've sunk $10,000 into it.


They paid $10,000 for intelligent cruise control which comes on a Toyota Camry and other cars at no additional cost.


Actually, it's the opposite.

Autopilot comes for free with Tesla.

I don't know of any car that gives Autopilot-like functionality for free. I'm certain BMW doesn't, and I'm certain GM's Super Cruise and Ford's BlueCruise are paid add-ons.

FSD provides functionality that isn't available on any car you can buy.

Specifically the $10k pays for:

- Navigate on Autopilot

- Auto Lane Change

- Autopark

- Summon

- Full Self-Driving Computer

- Traffic Light and Stop Sign Control

Those are FSD features that are fully released.

The beta adds autosteer on city streets.


The functionality you mention isn't available on the Tesla either, like full self-driving computer, since it's not full self driving.

Navigate on Autopilot - I don't know what that means.

Self parking is available on other cars, Volkswagen, Audi, and more. Even the base Prius practically parks itself.

Traffic light and stop sign: what's the point of this if you have to maintain awareness anyway since it's not really full self driving?


Navigate on Autopilot = FSD for highways. It's actually pretty useful in a new city, etc., when trying to navigate highways.


And yet no one else is even close, or trying very hard.


That's absolute nonsense.

Lots of other people have various levels of automation working reasonably well. None of them have the absolute hubris to sell it as "full self driving" before it's any of those things.

Waymo is doing a decent job of navigating, slowly but reasonably safely, around an insanely well mapped and limited region.

GM's Cruise products seem reasonable enough, from what I've read. They're limited to limited access highways, have some good driver attention monitoring to make sure the human is paying attention, and seem to limit the regions to "that which they have high confidence they can actually do."

Tesla, if you listen to their own hype, makes things that resemble SAE Level 5 claims, except they deliver remarkably little of it. Having charged obscenely for it in the process.

Your claim has absolutely nothing to do with the reality outside Tesla's bubble of their own making. Except, they've chosen an approach that seems more and more unlikely to actually work.


Let me know how BlueCruise works when the highway changes, because it's all pre-mapped and will shit the bed.


Sorry what?


I bought FSD for my Model 3 and my mind is not blown. Amusingly, I prefer the known behavior/bugs of the pre-beta to the totally bananas mistakes that the beta makes by trying to be smart. And that’s a bad state for Tesla to be in, because other manufacturers are catching up.

Automatic lanekeeping on the highway and stoplight stop/go were groundbreaking when Tesla launched them, and these are the features that I get the most value from. The rest of FSD as currently demonstrated is parlor tricks.


> Automatic lanekeeping on the highway and stoplight stop/go were groundbreaking when Tesla launched them

Automatic lanekeeping was originally made by Mobileye and available on Teslas and Nissans. I don't know about stoplight stop/go, but perhaps that is one of the reasons Teslas randomly make unwarranted stops?

https://en.wikipedia.org/wiki/Lane_centering


Your link agrees with me. Tesla provided some of the first commercially available lane centering in the Model S/X.

For the stoplights, no, phantom braking sadly predates that feature. It occurs during normal driving when the car erroneously perceives a phantom object appearing in front of the car and hits the brakes. The stoplight response, in my experience, is actually much more reliable, inclusive of stopping for reds / stopsigns, and automatically proceeding if there is a green light and a car in front of you.


Ok, here's one owner.

https://www.plainsite.org/dockets/4lmsbyy8k/albuquerque-dist...

Also, just check Tesla forums if you want high volume evidence. You are hereby proven wrong.


And the evangelists who never used the product are also noise.


So Tesla HQ issues a "revert" and everyone's car rolls back? Is it not optional?


Lol, as if anything's optional.


So how do these systems rank?

1. Tesla
2. Super Cruise
3. Ford? BMW?
4. Anyone else?


Car journalists seem enamored with Super Cruise, though I've never tried it. I compared a few, including Autopilot, before I bought my last car and actually liked Volvo's the best. It doesn't try to be more than it is. It's just a very strong bias of the wheel for the middle of the lane. You can change your lane positioning by turning the wheel, and you'll get pulled back to the middle if you let off. But it won't disengage unless you really torque it.

I wish it was less strong, actually, but then you wouldn't be able to rest your hand on the wheel for the presence detection.


It's a difficult problem! And I hope that they are able to figure out their issues and push forward the era of FSD.


Is it just me, or do people tend to ignore that traffic data is cultural data, not some observation of the periodicity of a pendulum?

Self driving level 5 will not work and cannot work.


> It is impossible to test all hardware configs in all conditions with internal QA, hence public beta.

$10k to be a beta tester


Depends when you bought.. 6k here :P


Hopefully it's not 6k + your personal safety.


Correction: $10k to be a beta tester on a technology that's worth $100k or more per car when perfected.

I'd also like to point out that you can get it through a subscription for $99/month if you already have Enhanced Autopilot ($199/month if not).


I bought FSD, but I’m really confused by the idea that it would be worth $100k when perfected. What’s the math look like on that?


Especially when, in most areas, you could literally pay a full-time driver $100k a year. Granted, I suppose your Tesla could drive 24 hours a day, 7 days a week, minus charging time, but I don't think it would hold up very well in those conditions.

I really want to like Tesla, but the wacko things coming from the mouths of Musk and Tesla fanboys make it so difficult.


Yea. I <3 my Model 3, I have a reservation in for a CyberTruck, but the idea that a completed FSD is valued at $100k is almost as crazy as the idea that we’ll see a completed FSD within the next decade.


Probably some optimistic estimates involving being able to send your car out to make money by doing self-driving Uber while you aren't using it.


because all Uber passengers are good honest people and will surely not take advantage of an unmanned vehicle


There's a bunch of if/then statements that you have to fall into.


This is from 2019, and they say a robotaxi could make an owner around $30k in a year.

https://medium.com/swlh/30-000-from-a-tesla-robotaxi-not-as-...


What, those robotaxis that would "for sure" happen by the end of 2020 but somehow, mysteriously, didn't:

https://www.thedrive.com/news/38129/elon-musk-promised-1-mil...

How many units of belief do you have left in the true believer tank?


As a rule, I don’t trust vendor estimates for amount of savings/revenue their product will cause :D


There are multiple ways to arrive at that.

Here's one version of the napkin math.

According to AAA average cost of owning a car in US is $8k per year.

Average driving in US is 12k miles per year.

A $25k robotaxi with 1 million mile battery can provide the service for 10 cents per mile.

That's $8k of value delivered for $1.2k. Let's round it up: $8k - $2k is $6k of profit per year.

This is to replace a single car, which is, on average, 2 hours of driving per day.

So assume 8 hours of "paid" driving gives us $6k × 4 = $24k of profit per year.

You can plug your own assumptions. I think those are pretty conservative but let's round it down to $10k a year profit.

At 10 years per car that's $100k profit from FSD software.


So I think the confusion to my mind is the following:

1. I didn’t buy my car to have it drive around on its own working a day job

2. I don’t want to absorb the insurance / maintenance hit of having my car be a road warrior while I’m working

3. My Tesla cost closer to $50k than $25k

4. I do not live in an area where taxis are a major mode of transport

I’m sure there are people in the world that could get $100k of value out of a robotaxi. But I don’t think they’re the median or mean purchaser of a self-driving car.


The market value would still go up under this fantasy scenario that was actually sold to people, whether you value it personally or not.


Missing from the calculation: increased maintenance cost from doing the additional miles, damage to the car from luggage and extra (ab)use, repairs after accidents (300-500 per million miles), personal costs from the car not coming back to you when you need it. Reminds me of people getting into Uber driving underestimating the extra costs incurred.


> Reminds me of people getting into Uber driving underestimating the extra costs incurred.

I thought that was the bulk of their target market - people who don't really do the math on vehicle depreciation and such.

I know people who make money doing it, but they're driving very cheap to run cars, and are pretty careful about how they do it, and are spreadsheet-heavy on proving it.


Does any other company exist in this world?


This is a per-car math.

It doesn't have to be a Tesla car.

The math works just as well for Ford robotaxi or Waymo robotaxi.


This is like joining an MLM. Let’s assume the math holds: anybody can buy a car and make $100k in profit over 10 years. So you buy one… except it turns out that since anybody can do it, enough other people do it to drive up supply. So prices drop and you’re getting fewer rides for less money than your original equation.

The only way this doesn’t happen is if robotaxis cost more to buy or operate than it’s worth for the average person to do so.


The obvious future is that robotaxi network will be operated by the company that makes the software, not individuals.

Be it Tesla or Waymo or GM.

So unless you can develop your own FSD software and build millions of cars cheaply, you won't be competing with a robotaxi network.

And they won't be selling you robotaxi software at any price.

And if someone does develop FSD software and sells it to individuals, there's still the issue of competing with a network. You won't compete with Uber even if you build similar iPhone and Android apps. The size of the network matters.

There might (or might not) be a transitory period when this software is sold to individuals and there's a rev share agreement.

Musk mused about such a scheme but he also hinted that this is not going to last forever.

This is simply a financing and cash flow decision of the CFO.

If Tesla goes unprofitable and cash-flow negative, Wall Street might lose its mind even if this money is simply an investment into robotaxis that might return 10x in the future.

A simple way to fund a robotaxi network without going cash-flow negative is to sell the car and the software for, say, $20k, and then do a 30-70% revshare on the $100k of profit (i.e. $30k for Tesla, $70k to the car owner, which would end up with $50k profit to Tesla and $50k profit to the owner while keeping Tesla profitable on a quarterly basis).

But eventually Tesla will want all of the $100k profit and will be making more than enough to pay for the cars upfront.

At such time they'll stop selling FSD and cars altogether and go exclusively robotaxi.

The logic doesn't change for GM or Waymo.


It seems we’ve gone back to where we started. I said I didn’t see how my car would be worth $100k with complete FSD, you gave an equation claiming it would be, and now we’ve looped to you agreeing that it won’t be?


SpaceX, I assume.


I doubt there will ever be much profit in this from individual car owners, even if it comes to pass. As soon as robotaxis can exist en masse the market will be flooded by sub-$30k dedicated platforms; boxes with wheels and batteries.


> worth $100k or more per car when perfected

I think the burden of proof would be on you for this to be a "when" rather than an "if"



