I bought the FSD version of a Model 3 in the spring of 2020. I'm still waiting for what I purchased to be turned on. Frankly, I'd be psyched if they would expand on the little things instead of the big things. "Park the car in that spot." "Exit the parking spot." These would be worth the price I paid.
> I bought FSD version of Model 3 in the spring of 2020
I don't understand how people keep falling for this. Sure, it seemed realistic enough at first but how many cracks in the facade are too many to ignore? In 2015 Musk said two years. In 2016 he said two years. In 2017 he said two years. Tesla did a 180 on the fundamental requirements of FSD and decided it doesn't need lidar just because they had a falling-out with their lidar supplier. That level of ego-driven horseshit is dangerous.
• In January 2021 he said end of 2021 [2].
• In May 2022 he said May 2023 [1].
So he moved from around 2 years to 1 year.
Looking at the rate of progress from Tesla FSD videos on YouTube, I wouldn't bet on 2023.
Whenever people talk about Tesla FSD, I always like to point to Cruise, who just got a permit to carry paying riders in California [3]. Honestly, their software looks significantly smarter than Tesla's. I highly recommend their 'Cruise Under the Hood 2021' video [4] for anyone interested in self-driving!
It's still not actually available to people who paid for it, and it doesn't actually work* (at least not to a degree where it can be trusted not to crash into street signs or parked cars). I have no idea why anyone pays a $10k premium for vaporware.
$10k is insane. That’s 1/4 the price of my nicely loaded Honda Ridgeline (or pretty much any well-loaded mainstream sedan or crossover). Yah, I don’t have an AI trying to drive (and crash into fire engines) for me, but I have basic lane-keeping, auto braking, and assisted cruise. I still have to drive the car. The horror.
A non-transferable $10K premium, at that. Unless something has changed. I've always wondered what is going to happen when the early adopters start to move on to their next car without ever having received any value from the FSD license.
I suspect the Venn diagram of people paying $10k for Tesla vaporware and people who've realized >$10k in gains trading TSLA stock is pretty close to a circle.
I've made enough off TSLA to afford a Tesla, but I'm actually now considering the Ioniq 5 over the Model Y. Specs are very similar and it's like 2/3 the price.
I've been a believer in Tesla for a decade now but it is starting to seem like the competition is catching up.
Gotta love the apologists as well, because hey, if I admit I was conned out of money for some buggy-ass deadly software, then who's the idiot, and I know I'm not an idiot, so this software works, yeah it has a few kinks, but it's great, and soon it'll be chauffeuring me all over the place! Soon! He tweeted!
I paid for it and have been using the FSD beta for over 6 months. The most recent 10.12 is a big improvement in smoothness and further improves my confidence in the system.
> I don't understand how people keep falling for this.
Every successful fraud has people it's tuned for. For example, consider how terribly written most spam is. That selects for people who are not fussy about writing. Conversely, a lot of high-end financial fraud is done by people who are very polished, very good at presenting the impression of success. Or some years back I knew of a US gang running the Pigeon Drop [1] on young East Asian women in a way that was tuned to take advantage of how they are often raised.
Tesla only has ~3% of the US car market, so they're definitely in the "fool some of the people all of the time" bucket. Musk's fan base seems to be early adopters and starry-eyed techno-utopians [2]. He's not selling transportation. He's selling a dream. They don't care that experts can spot him as a liar [3] because listening to experts would, like, totally harsh their mellow.
Although it's much closer to legal fraud, I don't think that's otherwise hugely different from how many cars are marketed. E.g., all of the people who are buying associations of wealth when they sign up for a BMW they can't afford. Or the ocean of people buying rugged, cowboy-associated vehicles who never use them for anything more challenging than suburban cul-de-sacs.
Interesting combo of very impressive, and also clearly not ready for general availability. It’s doing something noticeably wrong every few minutes. It’s so hard to guess how close this really is, but I’d guess … a few years or so? I’d imagine all those edge cases where it’s stopping too early, stopping too late, getting stuck making certain decisions, etc. will take quite a while to iron out.
Yes, you are right. Not quite ready for GA just yet, at least not for the insanity of San Francisco driving. I would settle for having to intervene every now and then though. I have a base Tesla Model 3 and just the free "self-steer" mode is very helpful. I definitely miss it when driving one of my kids' cars.
For me, for a “fully self driving” feature to be ready for wide market sales, it needs to be as good as or better than a human. A more “drive assist” feature, like current Tesla Autopilot, where there has to be a human driver ready to take over at any moment, that’s different. But for full self driving, where there doesn’t even need to be a human driver at all, the standards are a lot higher.
Also, by “as good or better than a human”, for me the biggest things are:
- Involved in the same number, or fewer, accidents
- Does not piss off other drivers any more than a human (like stopping way back from stop signs, getting “stuck” on a decision and not making progress, etc.)
In this vid the Tesla was nowhere close to the above standard. Still really impressive, but lots of work to do. Hard to say how close it is - maybe a few quarters away, but could also be a decade or more.
Google/Waymo is closer in terms of safety and pissing off other drivers, but it’s also much more conservative from the rider’s point of view. Like it will do 3 right turns to avoid a tricky left, will take side streets over the highway, etc. Waymo vids seem much safer and more predictable, fewer clear bugs (e.g. all the times the Tesla FSD makes the wrong decision on where to stop at an intersection are bad, “hard dealbreaker” bugs, that Google/Waymo don’t have), but I think it would just get you to your destination too much slower than a human driver, so buyers wouldn’t like it.
Insanity?
Driving anywhere in the US is an order of magnitude easier than in Europe, so if it struggles there, I would like to see how it drives in Paris or Rome.
While Rome and Istanbul are really bad, most of Europe is good. Consistent use of traffic circles is a dream (instead of America's infatuation with signalled t-bone collision intersections).
No basic mistakes. No shutdowns in the middle of the road. And it is approved for use as a taxi service.
And you clearly see the benefits of LiDAR by the stability and consistency with which it identifies other objects e.g. cars, pedestrians on the road. The inability of Tesla FSD to accurately identify the bounding box of the truck at 6:06 is extremely concerning for example.
Damn, that's wild and really bad. If I had a dollar for every time people post "it only needs to be better than a human" on these forums.
But that misses that the kinds of errors an AI can make are so wild that other drivers and pedestrians can't even imagine or anticipate them. Imagine all the different responses the AI might trigger on the sudden appearance of a car where a dog should be.
Human drivers could be drunk, but they don't go from perfect driving to batshit crazy in a split second.
There's got to be a sizeable number of people who have paid for this feature and never received it before selling the car. And of course Tesla is willing to re-sell the feature to the next driver. Are they willing to pay back customers for a function they haven't shipped?
They had a falling out with Mobileye, who provided the AP 1.0 hardware. It never used LIDAR. Tesla doesn’t use LIDAR because you need to solve vision anyway. And once you solve that, LIDAR makes no more sense.
(You need to solve vision anyway, because for that object, which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump onto the road or walking away from the road. LIDAR does not tell you these things. It can only tell you how far away they are. It is not some magical sensor which Tesla is just too cheap to employ.)
If you can accurately determine the 3D geometry of the scene (e.g. with LiDAR), the 3D object detection task becomes much easier.
That being said, most tasks for self-driving such as object detection can be robustly “solved” by LiDAR-only (to the extent that the important actors will be recognized) but adding in cameras obviously helps to distinguish between some classes.
Trying to do camera only (specifically monocular) runs into the issue of having no 3D ground truth, meaning it’s a lot more likely to accidentally not detect something in frame (say, a white truck).
That’s why you can have LiDAR and partially-“solved” vision but need fully solved vision if it’s the only input.
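The geometry point above can be sketched in code. This is a toy illustration with a made-up point cloud, not any real sensor API: with LiDAR you get (x, y, z) points directly, so "is something there, and how far?" reduces to simple geometry, with no object classifier needed just to detect an obstacle.

```python
# Toy sketch: detect obstacles in a synthetic LiDAR point cloud by
# dropping ground returns and single-link clustering what remains.
import math

def detect_obstacles(points, ground_z=0.2, link_dist=1.0):
    """Group above-ground (x, y, z) points into obstacle clusters.
    Any two points within link_dist end up in the same cluster."""
    above = [p for p in points if p[2] > ground_z]  # drop road surface
    clusters = []
    for p in above:
        # clusters that this point touches get merged together
        near = [c for c in clusters
                if any(math.dist(p, q) <= link_dist for q in c)]
        if near:
            merged = [p] + [q for c in near for q in c]
            clusters = [c for c in clusters if c not in near] + [merged]
        else:
            clusters.append([p])
    return clusters

# Two obstacles (~5 m and ~12 m ahead) plus a couple of ground returns.
cloud = [(5.0, 0.0, 0.5), (5.2, 0.1, 0.9),    # obstacle A
         (12.0, 2.0, 0.6), (12.1, 2.1, 1.2),  # obstacle B
         (3.0, 0.0, 0.0), (8.0, 1.0, 0.05)]   # ground returns
obstacles = detect_obstacles(cloud)
print(len(obstacles))  # → 2
for c in obstacles:
    # horizontal range to the nearest point of each cluster
    print(round(min(math.hypot(x, y) for x, y, z in c), 1))  # → 5.0, 12.2
```

Deciding whether each cluster is a trashcan or a child, of course, is exactly the classification problem the comment above says still needs vision (or heavier processing of the point cloud).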
This claim is just as false as the one about Tesla having had a falling out with their “LIDAR supplier”. Elon has explained many times why he considers LIDAR, while useful for things like docking to the Space Station, to be a fool’s errand for self driving cars.
> you still need to figure out whether it is a trashcan or a child
No you don't. You just need to avoid hitting it.
The problem with a vision-only system is that you need to know what an object is to determine the bounding box and thus how to avoid hitting it. Which is the problem we've seen with FSD where if you combine two objects, e.g. a boat on a truck, it gets confused because it hasn't been trained on that yet.
You think a vision system cannot detect that something is there at all unless it can also correctly categorize the object? In other words, you think if the object just happens to be one which the system wasn’t trained on, a vision based system will necessarily report “nothing there”?
The big problem for vision systems (in particular if they don't use a lot of cameras) is that it's very difficult for them to determine movement and distance. This becomes exaggerated when objects move perpendicular to the camera; one reason is that the distinctive features of cars, trucks, and buses are no longer so clear. There are quite a few examples of hilarious mischaracterisations in these sorts of cases.
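The distance problem for a single camera is the classic scale/depth ambiguity, which a toy pinhole-camera model (made-up numbers, hypothetical focal length) makes concrete: a small object nearby and a large object far away can project to exactly the same pixel size, so one image alone cannot separate size from distance, while LiDAR measures the distance directly.

```python
# Toy pinhole projection: pixel size = focal_length * real_size / distance.
FOCAL_PX = 1000.0  # assumed focal length in pixels (hypothetical camera)

def pixel_height(real_height_m, distance_m):
    """Apparent height in pixels of an object under the pinhole model."""
    return FOCAL_PX * real_height_m / distance_m

near_dog = pixel_height(0.5, 5.0)    # 0.5 m object, 5 m away
far_truck = pixel_height(4.0, 40.0)  # 4 m object, 40 m away
print(near_dog, far_truck)           # → 100.0 100.0 (indistinguishable)
```

Real systems break the ambiguity with stereo pairs, motion parallax, or learned priors about object sizes, which is exactly where the "white truck against a white sky" failure mode comes from.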
I'm not sure you know what lidar is. It gives you 3D images while telling you exactly how far away things are. In fact you can much more easily determine what an object is from Lidar data.
That is not to say Lidar doesn't have its issues (and there are quite a few), we likely will need a combination of sensors including cameras, lidar and radar.
A better illustration of his point: how do you determine the color of lights (stoplights, blinkers, brake lights, cop car lights, and so on) with LIDAR alone, so as to make legally correct driving decisions? Some other things LIDAR won't give you: reading signs (parking legality, stop signs, speed limits, construction crews with flashing detour directions, painted information on the road like lane markers and speed limits, wrong-way signs, and so on). In general you can't, because the sensor doesn't give you enough information - like color - to solve the problem. If you have LIDAR but not vision, you literally can't make a legally correct driving decision in the general case, because you lack the relevant data.
LIDAR certainly can see painted information on the road (possibly better than cameras in some situations); see e.g. this [1]. It can also read some road signs, and there are proposals for how to make signs more readable for LIDAR. That said, I'm not saying LIDAR is sufficient for autonomous driving; we will need a suite of sensors.
cycomanic is claiming that leobg is ignorant. They do this despite leobg displaying accurate historical knowledge and describing how the sensor works in a way that does not fundamentally disagree with the correction cycomanic implied leobg needed. As a reader of the comment chain, I have to ask why cycomanic thinks leobg is ignorant, because he failed to articulate why. The most contentious and debatable claim leobg made seems to be that full self driving requires solving vision regardless of whether or not you have LiDAR. If this was the reason (maybe it isn't, but if it was), the fact that everyone uses vision isn't evidence for cycomanic's position; it is evidence for leobg's.
You've retreated from this as the reason that leobg is frightfully ignorant on cycomanic's behalf. That makes the next most contentious claim the one that builds on the first: that vision being required makes LiDAR irrelevant. The problem for you, though, is that once you concede vision is necessary, you run into trouble. The situations in which LiDAR is much better than vision tend to involve a vision failure through lack of light or heavy occlusion. There is necessarily a cutoff point at which leobg's claim becomes somewhat true. That denies the right to call him ignorant, because the principle of charity demands reading his point in the way that maximizes the truth of his comments. So the claim of ignorance, which amounts to a character attack, is unjustified.
>of which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump on the road or walking away from the road. LIDAR does not tell you these things.
That is ignorant, because LIDAR together with processing obviously can tell you whether the thing is a trashcan or a child. To my understanding, the post implies that LIDAR does not provide enough information to make that determination, which is untrue and not how LIDAR works.
Now if they mean we still need some way to process this information and decide what the different things are, that's a bit disingenuous, because that's completely orthogonal to LIDAR vs cameras vs RADAR. By that argument we could dismiss any of the other technologies, ignoring the fact that more (and different) data typically lets you make better decisions.
Thanks for the response. I agree that LiDAR can make that determination. I think he was confused about what it was possible to learn from the LiDAR sensors rather than what LiDAR provides. His ability to distinguish between radar in former Tesla vehicles and LiDAR in former Tesla vehicles wouldn't be present if he thought they were the same sensor. I figured you would be responding to his argument, which was outside the () rather than his fallacious support for a premise that was true which was inside the ().
> using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.
Bellman gave us the Bellman equations, but also the term "curse of dimensionality." The equations he shared and the modeling problems he encountered are fundamentally related to modern reinforcement learning. More data doesn't come without cost. I often hear people treat the introduction of lower resolution as equivalent to the introduction of error, but this is a false equivalence. Because of latency in decision making, lowering resolution can increase the resolution error yet still decrease the solution error. This result is so fundamental that it applies even to games with no latency in their definition. Consider poker: the game tree is too big, and operating with respect to it as an agent is a mistake. You need to create a blueprint abstraction. That abstraction introduces an error; it is a lower-resolution view of the game, and in some ways it is wrong. Yet if two programs compete, the one that calculated with respect to the lower-resolution version of the game will beat the one that did its calculations with respect to the higher-resolution view. High resolution was worse. The resolution without error was worse. And poker is orders of magnitude simpler than the game a self-driving car plays.
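The resolution trade-off described above can be simulated with a toy example (all numbers invented for the demo): with a fixed sampling budget, a coarse discretization of the action space gets many samples per action, while a fine discretization spreads the budget so thin that noise dominates and it picks a worse action despite its higher resolution.

```python
# Toy demo: under a fixed budget, a coarser action grid can yield
# lower *solution* error than a finer grid, even though the finer
# grid has lower *resolution* error.
import random

def payoff(a):                 # true (unknown) payoff, peaked at a = 0.3
    return -(a - 0.3) ** 2

def regret_of_choice(n_actions, budget, rng, noise=0.5):
    """Discretize [0, 1] into n_actions, spend the budget evenly on
    noisy samples, and return the regret of the best-looking action."""
    actions = [(i + 0.5) / n_actions for i in range(n_actions)]
    pulls = budget // n_actions
    means = [sum(payoff(a) + rng.gauss(0, noise) for _ in range(pulls)) / pulls
             for a in actions]
    chosen = actions[means.index(max(means))]
    return payoff(0.3) - payoff(chosen)  # >= 0; 0 means optimal

rng = random.Random(0)
trials = 200
coarse = sum(regret_of_choice(5, 200, rng) for _ in range(trials)) / trials
fine = sum(regret_of_choice(100, 200, rng) for _ in range(trials)) / trials
print(f"avg regret: coarse grid {coarse:.3f}, fine grid {fine:.3f}")
# The coarse grid (40 samples per action) reliably beats the fine grid
# (2 samples per action) on average.
```

This is the same structural argument as the poker blueprint abstraction: the lower-resolution model is "wrong" everywhere, but the agent computing against it makes better decisions within its budget.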
I've been paying some attention to this debate and I'm not convinced yet that the situations under which LiDAR is superior are sufficient. I think we agree on that already. For me, this reduces the set of situations under which LiDAR can be considered superior: if vision is bad but you need vision, then it's better to avoid the situation than to use the wrong tool [1]. So the situations under which LiDAR is decisive become a subset of the situations in which it is actually superior. That subset doesn't seem very large to me, because both LiDAR + vision and vision alone necessarily reduce the dimensionality of the data so that the computation becomes tractable.
[1]: This isn't exactly uncommon as an optimization choice. It'll get dark later and you'll stop operating for a time. Then light will come. You'll resume operation. This is true across most of the species on this planet. If you are trying to avoid death by car accident you could do worse than striving to operate in situations where your sensors will serve you well.
Just a note: Lidar can read traffic signs. There are plenty of examples of this. It works off the different reflectivity of the different colors on the sign.
The same change, in the other direction, turns LIDAR into a source of lethal ionising radiation that goes straight through a dozen cars without attenuation.
The ability to use the phone or remote to move the car forward or back in a straight line is super useful and a cool, novel feature by itself. It’s also a buggy piece of shit that a few engineers could probably greatly improve in a month. Doesn’t seem like Tesla cares, it’s been stagnant for years.
Meanwhile Tesla is still charging people $10,000 for an FSD function that doesn’t exist.
> The ability to use the phone or remote to move the car forward or back in a straight line is super useful and a cool, novel feature by itself.
Is it? It’s hard to think of a situation where moving a car I’m at most a couple of hundred feet from backward or forward in a straight line by fiddling with my phone is superior to just getting into the car and moving it myself. Maybe I’m not finding myself and my car on opposite sides of a gorge often enough?
Someone parked too close to your driver's door? Just back the car out remotely and get in.
Parallel parked, then walked away from your car and noticed you didn't leave the car behind enough space to exit without scraping your bumper? Pull it forward a little without getting back in.
Parked in your driveway and need to move it three feet so you have space to get the lawnmower out of the garage? Use the remote.
All of these use cases depend on the functionality being fast and hassle-free to use. It works that well about 60% of the time - the other 40% the phone fails to connect to the car, or seems to connect but the car inexplicably doesn't move, or gives a useless "Something went wrong" error message, etc.
Of all the scenarios you describe, they’ve happened to me maybe once in 5 years. Even then, I can’t help but feel it’s just being lazy, using a remote instead of just moving the car.
It's not even lazy, it's just plain gimmicky. Aside from the blocked-door scenario, it would take longer to get your phone out, open the app, and tap through the interface than it would to just get in the car and move it. It's a toy feature with very few genuinely valuable uses.
Being physically lazy is the point? Your examples all describe things that help with daily routines by doing things faster, more efficiently (from the user's perspective), and often performing the task better.
Some people (me) have tandem parking spaces, where you have to move one car to get the other out. Super useful feature for me on a daily basis. But as stated earlier, it's buggy and frustrating to use and could be a whole lot better without much work. Tesla doesn't care about it, they are focused solely on FSD.
That doesn’t appear to be true; on Tesla’s site, the delivery estimate doesn’t change if you add the FSD option. Unless you’re saying it’s an undocumented policy?
They're claiming that the FSD-equipped Tesla waitlist is resolved before the non-FSD waitlist. It seems trivial to verify, but it is possible that FSD cars are shipped faster without it being noticed. Maybe by shipping FSD cars at something like twice the rate they are ordered.
By default you have to continuously press a button to move the car. Also the car has cameras and ultrasonic sensors. You can set thresholds for how close you want the car to get to obstacles. I think the default is 2 feet. The car will refuse to move if it thinks it's too close to an obstacle. I think the top speed is 1 or 2 mph, and the car emits a warning sound as it moves.
I mostly use the summon feature for annoyances such as someone parking too close to me or rain causing a puddle to form around the car.
I made the same mistake in 2019. $6000 for FSD. The car is an enigma. It is simultaneously wonderful and a total scam. My next one will be some other EV from another manufacturer. I would never buy another Tesla or anything else from Elon. Like the car, he is an enigma. I oscillate between believing he is brilliant and seeing him as a lying snake oil salesman. I can’t exactly say I’m disappointed with my purchase overall because it is a damn good EV, but I can’t help but feel like I was conned.
I got it this week. It's not really worth using in its current state. It makes minor mistakes every minute or two, major ones every five minutes or so.
I bought it years ago mostly for the guaranteed computer upgrade and the novelty of testing it as it develops. It's great as a novelty, really cool! But it is dangerous to use right now, and I don't care what anyone says, it's not going to be truly ready for years. In fact I think it needs another computer upgrade and probably a camera upgrade too.
I agree that they should have nailed autopark and summon before moving on to FSD. As it stands they are both useless. But if I could record and play back summon paths in known locations, that would be actually useful.
> I bought it years ago mostly for the guaranteed computer upgrade and the novelty of testing it as it develops.
I did the same, and it was significantly less expensive. I wouldn't buy it for $1 today.
Also, your idea of novelty is my idea of a nightmare. There has never been a time when I used it where it didn't do something completely insane. The last time I used it (which will truly be the last time), it waited patiently to make a left-hand turn. It waited far longer than I would have, and it was clear of oncoming cars for ages. When it did decide to turn, it did it when there were several cars coming, though it was still safe. Except then, halfway through its turn, it literally stopped, then turned a bit to the right and centered itself in a lane going the opposite direction of traffic flow, right into oncoming cars. Thankfully I was able to take over, and two of the three oncoming cars stopped. Had I done nothing, we would have all been in a head-on collision.
Having been in a t-bone collision (the other driver's fault, and he got his parole revoked for it), I find US traffic engineers' love affair with unsafe intersection design horrific. Traffic circles fix most turning issues, and do it without a big control box and expensive poles for mounting signal lights.
In other words, many collision chances could be removed with safer intersection design, with benefits to both computer control and human control.
Doesn’t the car have multiple video cams built in? Take the footage and upload it online, if you have it.
I see Elon constantly interacting with fanboys hyping up autopilot features. One would get the impression that all features were perfect with the hype videos online.
Not really. There is no shortage of YouTube videos of it doing stupid and dangerous things, mixed with boring regular driving. If you watch the popular YouTube channels it will give you a very accurate picture of how it works and fails. Based on that I had a very good idea of what to expect when I tried it out for the first time and my expectations turned out to be entirely accurate.
It's really quite surprising how open Tesla is about it. They could have added all sorts of rules to the beta legal terms about publicity and posting videos. But they did nothing and have not tried to take down any of the many unflattering videos, or kick people out of the beta for posting them, AFAIK.
Customer and employee are different situations. He was a data labeler and test operator working on Autopilot, and he had a YouTube channel where he criticized Autopilot using the free FSD beta access he was granted as a Tesla employee, and sold "Full self-driving beta" branded merchandise. Tesla fired him for the conflict of interest and for dangerous use of Autopilot, and he lost his free FSD beta access. I think it is fair for a company to not want to further employ someone with a conflict of interest like that. But you'll note that they didn't make him take down the videos. And he's still making new ones.
The fact that we don't have reliable, fully automated parking yet is bizarre. I'd love a solution that automatically parallel parks for me, and a computer with sensors around the vehicle should be able to do a better job. Plus, the problem is of limited scope, and low speed, so you don't have to deal with most of the potential issues and dangers with full self driving on a road.
BMWs have had that for quite a while. My 2016 5 series does it perfectly 100% of the time. It even gets into some spots that I consider way too short, which surprises me.
My 2015 Model S does parallel parking between two other cars quite well. It will also reverse into the space between two cars parked perpendicular to the road.
Now that I see your list of manufacturers, I can only think: “of course, manufacturers don’t do squat in a car; Bosch or some other equipment company did it and sold it to all the manufacturers.”
That's pretty typical. And sometimes when the car manufacturer gets particularly good at parts subsystems, they spin that off into a separate company. E.g. Delphi. It's nice, because then we see awesome things like magnetic ride show up on more than just GM performance cars.
Tesla has autopark on all their models. It is terrible. I can barely ever activate it because it doesn't recognize parking spaces. It will curb your wheels. It's very slow. I'm certain that if I used it on a daily basis it would have collided with something by now.
If you have been following their updates, it looks like that they have finally found the correct approach for FSD and it is improving very quickly. At this rate, it seems that it'll be close to level 5 this year or next year.
> If you have been following their updates, it looks like that they have finally found the correct approach for FSD and it is improving very quickly. At this rate, it seems that it’ll be close to level 5 this year or next year.
Yeah, their perpetual 2-year estimate dropped to 1 year around a year and a half ago, after being at 2 years for at least 6 years. So we’re probably four and a half years from it either being ready…or dropping to 6 months off for the next several years.
Like, trying to change lanes into a "flush median" (aka yellow stripes across) to turn onto a one way in the wrong direction. Or randomly swerving back and forth (hard) when a lane splits into 2. Or trying to take 90deg turns at 45mph.
There are times it works amazingly, I've had it slow down and swerve out of the way of someone barreling out of a parking lot without stopping. It's also great at finding a gap to fit into between cars to turn, but it also got itself into that situation by not getting over sooner and instead trying to merge ~500ft before the turn.
Also for some reason on a 2 lane road (in each direction) it would constantly try to be in the "passing" lane.