I bought the FSD version of a Model 3 in the spring of 2020.
I don't understand how people keep falling for this. Sure, it seemed realistic enough at first but how many cracks in the facade are too many to ignore? In 2015 Musk said two years. In 2016 he said two years. In 2017 he said two years. Tesla did a 180 on the fundamental requirements of FSD and decided it doesn't need lidar just because they had a falling-out with their lidar supplier. That level of ego-driven horseshit is dangerous.
• In January 2021 he said end of 2021 [2].
• In May 2022 he said May 2023 [1].
So he moved from around 2 years to 1 year.
Looking at the rate of progress from Tesla FSD videos on YouTube, I wouldn't bet on 2023.
Whenever people talk about Tesla FSD, I always like to point to Cruise, who just got a permit to carry paying riders in California [3]. Honestly, their software looks significantly smarter than Tesla's -- highly recommend their 'Cruise Under the Hood 2021' video for anyone interested in self-driving! [4]
It's still not actually available to people who paid for it, and it doesn't actually work* (at least not to a degree where it can be trusted not to crash into street signs or parked cars). I have no idea why anyone pays a $10k premium for vaporware.
$10k is insane. That’s 1/4 the price of my nicely loaded Honda Ridgeline (or pretty much any well-loaded mainstream sedan or crossover). Yah, I don’t have an AI trying to drive (and crash into fire engines) for me, but I have basic lane-keeping, auto braking, and assisted cruise. I still have to drive the car. The horror.
A non-transferable $10K premium, at that. Unless something has changed. I've always wondered what is going to happen when the early adopters start to move on to their next car without ever having received any value from the FSD license.
I suspect the venn diagram of people paying $10k for Tesla vaporware and people who've realized >$10k in gains trading TSLA stock is pretty close to a circle.
I've made enough off TSLA to afford a Tesla, but I'm actually now considering the Ioniq 5 over the Model Y. Specs are very similar and it's like 2/3 the price.
I've been a believer in Tesla for a decade now but it is starting to seem like the competition is catching up.
Gotta love the apologists as well, because hey, if I admit I was conned out of money for some buggy-ass deadly software, then who's the idiot, and I know I'm not an idiot, so this software works, yeah it has a few kinks, but it's great, and soon it'll be chauffeuring me all over the place! Soon! He tweeted!
I paid for it and have been using the FSD beta for over 6 months; the most recent 10.12 is a big improvement in smoothness and further improves my confidence in the system.
> I don't understand how people keep falling for this.
Every successful fraud has people it's tuned for. For example, consider how terribly written most spam is. That selects for people who are not fussy about writing. Conversely, a lot of high-end financial fraud is done by people who are very polished, very good at presenting the impression of success. Or some years back I knew of a US gang running the Pigeon Drop [1] on young East Asian women in a way that was tuned to take advantage of how they are often raised.
Tesla only has ~3% of the US car market, so they're definitely in the "fool some of the people all of the time" bucket. Musk's fan base seems to be early adopters and starry-eyed techno-utopians [2]. He's not selling transportation. He's selling a dream. They don't care that experts can spot him as a liar [3] because listening to experts would, like, totally harsh their mellow.
Although it's much closer to legal fraud, I don't think that's otherwise hugely different than how many cars are marketed. E.g., all of the people who are buying associations of wealth when they sign up for a BMW they can't afford. Or the ocean of people buying rugged, cowboy-associated vehicles that never use them for anything more challenging than suburban cul de sacs.
Interesting combo of very impressive, and also clearly not ready for general availability. It’s doing something noticeably wrong every few minutes. It’s so hard to guess how close this really is, but I’d guess … a few years or so? I’d imagine all those edge cases where it’s stopping too early, stopping too late, getting stuck making certain decisions, etc. will take quite a while to iron out.
Yes, you are right. Not quite ready for GA just yet, at least not for the insanity of San Francisco driving. I would settle for having to intervene every now and then though. I have a base Tesla Model 3 and just the free "self-steer" mode is very helpful. I definitely miss it when driving one of my kids' cars.
For me, for a “fully self driving” feature to be ready for wide market sales, it needs to be as good or better than a human. A more “drive assist” feature, like current Tesla autopilot, where there has to be a human driver ready to take over at any moment, that’s different, but full self driving, where there doesn’t even need to be a human driver at all, the standards are a lot higher.
Also, by “as good or better than a human”, for me the biggest things are:
- Involved in the same number, or fewer, accidents
- Does not piss off other drivers any more than a human (like stopping way back from stop signs, getting “stuck” on a decision and not making progress, etc.)
In this vid the Tesla was nowhere close to the above standard. Still really impressive, but lots of work to do. Hard to say how close it is - maybe a few quarters away, but could also be a decade or more.
Google/Waymo is closer in terms of safety and pissing off other drivers, but it’s also much more conservative from the rider’s point of view. Like it will do 3 right turns to avoid a tricky left, will take side streets over the highway, etc. Waymo vids seem much safer and more predictable, fewer clear bugs (e.g. all the times the Tesla FSD makes the wrong decision on where to stop at an intersection are bad, “hard dealbreaker” bugs, that Google/Waymo don’t have), but I think it would just get you to your destination too much slower than a human driver, so buyers wouldn’t like it.
Insanity?
Driving anywhere in the US is an order of magnitude easier than in Europe, so if it struggles there, I would like to see how it drives in Paris or Rome.
While Rome and Istanbul are really bad, most of Europe is good. Consistent use of traffic circles is a dream (instead of America's infatuation with signalled t-bone-collision intersections).
No basic mistakes. No shut downs in the middle of the road. And is approved for use as a taxi service.
And you clearly see the benefits of LiDAR by the stability and consistency with which it identifies other objects e.g. cars, pedestrians on the road. The inability of Tesla FSD to accurately identify the bounding box of the truck at 6:06 is extremely concerning for example.
Damn, that's wild and really bad. If I had a dollar for every time people post on these forums: "it only needs to be better than a human".
But that misses that the kinds of errors an AI can make are so wild that other drivers and pedestrians can't even imagine or anticipate them. Imagine all the different responses the AI might trigger at the sudden appearance of a car where a dog should be.
Human drivers could be drunk, but they don't go from perfect driving to batshit crazy in a split second.
There's got to be a sizeable number of people who have paid for this feature and never received it before selling the car. And of course Tesla is willing to re-sell the feature to the next driver. Are they willing to pay back customers for a function they haven't shipped?
They had a falling out with Mobileye, who provided the AP 1.0 hardware. It never used LIDAR. Tesla doesn’t use LIDAR because you need to solve vision anyway. And once you solve that, LIDAR makes no more sense.
(You need to solve vision anyway, because for that object, which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump onto the road or walking away from it. LIDAR does not tell you these things. It can only tell you how far away things are. It is not some magical sensor which Tesla is just too cheap to employ.)
If you can accurately determine the 3D geometry of the scene (e.g. with LiDAR), the 3D object detection task becomes much easier.
That being said, most tasks for self-driving such as object detection can be robustly “solved” by LiDAR-only (to the extent that the important actors will be recognized) but adding in cameras obviously helps to distinguish between some classes.
Trying to do camera only (specifically monocular) runs into the issue of no 3D ground truth meaning it’s a lot more likely to accidentally not detect something in frame (say a white truck).
That’s why you can have LiDAR and partially-“solved” vision but need fully solved vision if it’s the only input.
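To make the geometry point above concrete, here is a minimal sketch (hypothetical point values, NumPy only, not modeled on any real AV stack) of how a LiDAR return gives range and 3D extent directly from the measurements, whereas a camera pixel alone gives neither:

```python
import numpy as np

# Hypothetical LiDAR returns from one object, in the vehicle frame
# (x forward, y left, z up), in meters.
points = np.array([
    [12.3, -0.4, 0.2],
    [12.5, -0.2, 0.6],
    [12.4,  0.1, 1.1],
])

# Range and extent fall straight out of the geometry; no learned model needed.
ranges = np.linalg.norm(points, axis=1)
extent = points.max(axis=0) - points.min(axis=0)

print(f"nearest return: {ranges.min():.1f} m")  # ~12.3 m, measured directly
print(f"extent (m): {extent}")                  # ~[0.2, 0.5, 0.9]

# A monocular camera gives only a bearing per pixel; recovering that same range
# requires an extra assumption (known object size, flat ground, or a learned
# depth net), and a wrong assumption is how a white truck gets missed.
```

The point of the sketch is the asymmetry: the LiDAR branch is pure arithmetic on measured coordinates, while the camera branch has no line of code to write until you pick a depth assumption.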
This claim is just as false as the one about Tesla having had a falling out with their “LIDAR supplier”. Elon has explained many times why he considers LIDAR, while useful for things like docking to the Space Station, to be a fool’s errand for self driving cars.
> you still need to figure out whether it is a trashcan or a child
No you don't. You just need to avoid hitting it.
The problem with a vision-only system is that you need to know what an object is to determine the bounding box and thus how to avoid hitting it. This is the problem we've seen with FSD: if you combine two objects, e.g. a boat on a truck, it gets confused because it hasn't been trained on that yet.
You think a vision system cannot detect that something is there at all unless it can also correctly categorize the object? In other words, you think if the object just happens to be one which the system wasn’t trained on, a vision based system will necessarily report “nothing there”?
The big problem for vision systems (in particular if they don't use a lot of cameras) is that it's very difficult for them to determine movement and distance. This is exacerbated when objects move perpendicular to the camera, partly because the distinctive features of cars, trucks, and buses are no longer so clear. There are quite a few examples of hilarious mischaracterisations in these sorts of cases.
I'm not sure you know what lidar is. It gives you 3D images while telling you exactly how far away things are. In fact you can much more easily determine what an object is from Lidar data.
That is not to say Lidar doesn't have its issues (and there are quite a few), we likely will need a combination of sensors including cameras, lidar and radar.
Some better examples of his point: how do you determine the color of lights (stoplights, blinkers, brake lights, cop car lights, and so on) with LIDAR, so as to make legally correct driving decisions? Some other things LIDAR won't give you: the ability to read signs (parking legality, stop signs, speed limits, construction crews with flashing detour directions, painted information on the road like lane markers and speed limits, wrong-way signs, and so on). In general, you can't, because the sensor doesn't give you enough information -- like color -- to solve the problem. If you have LIDAR but not vision, you literally can't make a legally correct driving decision in the general case, because you lack the relevant data to make that decision.
LIDAR certainly can see painted information on the road (possibly better than cameras in some situations) see e.g. this [1]. They can also read some road signs and there are proposals how to make them more readable for LIDAR. That said I don't say LIDAR is sufficient for autonomous driving, we will need a suite of sensors.
cycomanic is claiming that leobg is ignorant. They do this despite leobg displaying accurate historical knowledge and stating the way that the sensor works in a way which does not fundamentally disagree with the correction that cycomanic implied leobg needed. As a reader of the comment chain, I have to ask why cycomanic thinks leobg is ignorant - he failed to articulate why. It seems to me that the most contentious and debatable claim that leobg made was the claim that full self driving requires solving vision regardless of whether or not you have LiDAR. If this was the reason - maybe it isn't, but if it was, the fact that everyone uses vision isn't evidence for cycomanic's position - it is evidence for leobg's position.
You've retreated, on cycomanic's behalf, from this as the reason that leobg is frightfully ignorant. That means the next most contentious claim is the one that builds on the foundation of the first: that vision being required makes LiDAR irrelevant. The problem for you, though, is that once you concede vision is necessary, you run into a problem. The sensor situations under which LiDAR is much better than vision tend to involve a vision failure through a lack of light or heavy occlusion. There is definitely and necessarily a cutoff point at which leobg's claim becomes somewhat true. This denies the right to call him ignorant, because the principle of charity demands we read his point as the thing that maximizes the truth of his comments. So the claim of ignorance -- which amounts to a character attack -- becomes unjustified.
>of which LIDAR tells you is exactly 12.327 ft away, you still need to figure out whether it is a trashcan or a child. And if it is a child, whether it is about to jump on the road or walking away from the road. LIDAR does not tell you these things.
That is ignorant, because LIDAR together with processing obviously can tell you whether the thing is a trashcan or a child. The post is ignorant because, to my understanding, it implies that LIDAR does not provide enough information to make that determination, which is untrue and not how LIDAR works.
Now if they mean we still need some way to process this information and make decisions of what the different things are, that's a bit disingenuous because that's completely orthogonal to LIDAR vs cameras vs RADAR and using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.
Thanks for the response. I agree that LiDAR can make that determination. I think he was confused about what it was possible to learn from the LiDAR sensors rather than what LiDAR provides. His ability to distinguish between radar in former Tesla vehicles and LiDAR in former Tesla vehicles wouldn't be present if he thought they were the same sensor. I figured you would be responding to his argument, which was outside the () rather than his fallacious support for a premise that was true which was inside the ().
> using that argument we could dismiss any of the other technologies ignoring the fact that more (and different) data typically allows you to make better decisions.
Bellman gave us the Bellman equations, but also the term "curse of dimensionality". The equations he shared and the modeling problems he encountered are fundamentally related to modern reinforcement learning. More data doesn't come without cost. So often I hear people speak of introducing lower resolution as equivalent to introducing error, but this is a false equivalence. Latency in decision making means that introducing lower resolution can increase the resolution error but still decrease the solution error. This is so fundamental a result that it applies even to games which don't have latency in their definition. Consider poker. The game tree is too big. Operating with respect to it as an agent is a mistake. You need to create a blueprint abstraction. That abstraction, applied to the game, introduces an error. It is a lower resolution view of the game, and in some ways it is wrong. Yet if two programs compete, the one that calculated with respect to the lower resolution version of the game will do better than the one that did its calculations with respect to the higher resolution view. High resolution was worse. The resolution without error was worse. Yet the game under consideration was orders of magnitude simpler than the game of a self-driving car.
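A toy illustration of that cost (feature counts and bin sizes entirely made up, not drawn from any real AV or poker system): the number of states an agent must reason over grows exponentially with the dimensions it keeps, so a coarse abstraction can be evaluated exhaustively while the "full resolution" view can only ever be sampled.

```python
def grid_cells(bins_per_dim: int, dims: int) -> int:
    """Number of cells in a uniform discretization of a state space."""
    return bins_per_dim ** dims

# Hypothetical numbers: a coarse blueprint abstraction vs. the raw view.
coarse = grid_cells(10, 4)   # 4 abstracted features, 10 bins each
fine = grid_cells(100, 8)    # 8 raw features, 100 bins each

print(coarse)  # 10000 -> small enough to solve exhaustively
print(fine)    # 10**16 -> a fixed compute budget only ever touches a sliver
```

With a fixed budget, the agent working on the coarse grid gets a complete (if blurry) answer; the agent working on the fine grid gets an incomplete one, which is the sense in which lower resolution can mean lower solution error.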
I've been paying some attention to this debate and I'm not convinced yet that the situations under which LiDAR is superior are sufficient. I think we agree on that already. For me, this reduces the set of situations in which LiDAR can be considered superior: if vision is bad but you need vision, then it's better to avoid the situation than to use the wrong thing [1]. So the situations under which LiDAR becomes superior become a subset of the situations in which it is actually superior. That subset doesn't seem very large to me, because both LiDAR + vision and vision alone are necessarily going to be reducing the dimensionality of the data so that the computation becomes more tractable.
[1]: This isn't exactly uncommon as an optimization choice. It'll get dark later and you'll stop operating for a time. Then light will come. You'll resume operation. This is true across most of the species on this planet. If you are trying to avoid death by car accident you could do worse than striving to operate in situations where your sensors will serve you well.
Just a note: LiDAR can read traffic signs. There are plenty of examples of this, based on the different reflectivity of the different colors on the sign.
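A minimal sketch of that idea (intensity values and the threshold are made up for illustration): retroreflective sign backing returns much stronger than the dark painted legend, so a simple intensity threshold can separate the lettering from the background.

```python
import numpy as np

# Hypothetical normalized intensities for LiDAR returns landing on a sign face.
intensities = np.array([0.92, 0.95, 0.15, 0.12, 0.91, 0.10])

LEGEND_CUTOFF = 0.5  # assumed split between dark paint and reflective backing
legend_mask = intensities < LEGEND_CUTOFF

print(legend_mask)             # True where a return hit the painted legend
print(int(legend_mask.sum()))  # 3 returns landed on the lettering
```

Real systems would of course do this per-point on a registered point cloud and then run character recognition on the segmented pattern, but the signal being exploited is just the intensity channel.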
The same change, in the other direction, turns LIDAR into a source of lethal ionising radiation that goes straight through a dozen cars without attenuation.