The only real difference between level 2 and the levels above it is whether the system needs the driver's attention in case it makes a mistake, and at the moment Tesla must have a fully attentive driver at all times. The problem with the SAE levels is that level 2 covers such a broad range of autonomous capabilities that calling a car "level 2" is almost meaningless. There are level 2 cars manufactured a decade ago, and Tesla FSD is also level 2, yet the latter may as well be a spaceship in comparison. Knowing the SAE scale runs from 0 to 5 and Tesla is at level 2 does not in any way give one a notion of how autonomous the car is, only that it may make mistakes x% of the time.
An analogy with SaaS: imagine having a full-featured service that's available 99% of the time and dismissing it as just another toy service, only because it's not available 99.99% of the time. That's more or less the situation with Tesla. Granted, the march toward more 9s is going to take some time.
EDIT: I looked up the "Tesla admitting level 2" and it's in regard to their *current* Autopilot software that's been around for years, not FSD Beta, which is currently only rolled out to a few thousand people in the US. So it seems you're not understanding the difference between Autopilot and FSD Beta. I recommend again that you watch "FSD Beta 10" videos on Youtube, since you evidently have not.
> Knowing there are 5 levels and Tesla is at level 2 does not in any way give one a notion of how autonomous the car is
It literally does. SAE levels [1] are very clear in terms of who is in control, human supervision/takeover expectations, and Operational Design Domain. It's not at all about mistake percentage, so the SaaS analogy is irrelevant.
At the end of the day, if you're still at level 2, you're not fully autonomous, because the car can't take passengers from point A to point B and handle its own mistakes, including safely pulling over when necessary. Tesla is nowhere close to doing this.
Yes it does, and the SaaS analogy is very relevant. The very page you've linked lists all sorts of "features" up to level 2 but level 3 and up is all about whether the driver should take over if the car asks them to. In fact, re-reading the levels, FSD Beta is at level 3 because:
- You are NOT driving when the features are active (the car accelerates, stops, turns, gives way, negotiates with traffic, waits for pedestrians, changes lanes, follows navigation, etc.).
- When the feature requests it, you must drive. Check. FSD Beta will prompt the user to take over at times when it can't figure out the way forward.
- These features drive under limited conditions and won't operate unless all required conditions are met. Check too. FSD Beta only becomes available when it has a good sense of the environment.
Well, thanks for the refresher. So FSD Beta is clearly at level 3 now.
> The very page you've linked lists all sorts of "features" up to level 2 but level 3 and up is all about whether the driver should take over if the car asks them to.
Thanks for making my point for me and conceding autonomy is, in fact, determined by who's responsible for safe operations and not just mistake percentage improvement.
> In fact, re-reading the levels, FSD Beta is at level 3 because:
> - You are NOT driving when the features are active (the car accelerates, stops, turns, gives way, negotiates with traffic, waits for pedestrians, changes lanes, follows navigation, etc.).
Clearly, you either didn't read the chart fully or didn't comprehend it well, because you are contradicting Tesla's own claim that FSD is level 2. You also spectacularly missed the biggest reason it's still level 2.
SAE level 2 in the chart says:
> You are driving whenever these driver support features are engaged — even if your feet are off the pedals and you are not steering
> You must constantly supervise these support features; you must steer, brake or accelerate as needed to maintain safety.
Last I checked, FSD requires constant driver supervision and hands on the wheel. Tesla is very clear about this. So it is, in fact, level 2 by definition.
The key difference with level 3 is that it doesn't require constant supervision — meaning a driver can read a book or watch a movie — and the system will alert the driver far enough in advance to take over within a reasonable amount of time. Of course, nobody can really pin down what a "reasonable amount of time" is for a safety alert, which is why level 3 remains the most ambiguous of all the SAE levels.
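The chart's questions boil down to a short decision tree. Here's a rough sketch of that logic in Python — the function name and boolean inputs are my own simplification of the J3016 chart, not SAE terminology:

```python
# Illustrative only: a rough mapping of the SAE chart's supervision
# questions to levels 2-5. The names below are my own simplification.

def sae_level(driver_must_supervise: bool,
              driver_must_take_over_on_request: bool,
              limited_odd: bool) -> int:
    """Classify a driving-automation feature by the chart's questions."""
    if driver_must_supervise:
        # Constant hands-on/eyes-on supervision -> level 2 (or below)
        return 2
    if driver_must_take_over_on_request:
        # No constant supervision, but the human is the fallback -> level 3
        return 3
    # The system is its own fallback: level 4 if restricted to an
    # Operational Design Domain, level 5 if it can drive everywhere
    return 4 if limited_odd else 5

# Today's FSD Beta requires constant supervision, so:
print(sae_level(True, True, True))  # -> 2
```

Note the first branch dominates: as long as constant supervision is required, the other capabilities don't matter for the classification.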
You should really inform yourself of the SAE level basics before spreading misinformation about a safety-critical system.
You seem to be hung up on SAE definitions and missing my point, which I'm going to reiterate, maybe from a different angle.
Moving from supervising the car to not supervising it isn't a binary flip where one software revision suddenly lets you take a nap when yesterday you couldn't. It's a spectrum. And that's why the SAE levels between 2 and 3 describe this poorly (not to mention how poorly level 2 itself is defined, as it covers a huge range of functionality, from what a bunch of cheap sensors can achieve with graduate-level CS knowledge to what Tesla has achieved with FSD Beta, which required custom computers, millions of miles of driving data, and some of the biggest brains in the AI world).
Since it's a spectrum, the only variable changing is how likely it is that you, as the supervisor, need to take over control *because* your car either made a mistake or was about to. That's all it reduces to — how often your car makes a mistake. That's all level 3 and up is. All the descriptions and charts do nothing but fog this up, which is unfortunate. Once you have a car that makes very few mistakes, you don't need to supervise it, because its probability of making a mistake is lower than yours as a human driver, at which point it's a better driver than you anyway.
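To make the "spectrum" point concrete: if you treat takeovers as random events with some average rate, the chance of completing a trip with no intervention is a smooth function of that rate, not a step. A back-of-envelope sketch, using a simple exponential model and made-up illustrative rates (not real Tesla data):

```python
# Sketch of the "spectrum" argument: model interventions as a Poisson
# process and compute the chance a trip needs no takeover at all.
# The miles-per-intervention figures are invented for illustration.
import math

def p_no_takeover(trip_miles: float, miles_per_intervention: float) -> float:
    """Probability a trip completes with zero interventions."""
    return math.exp(-trip_miles / miles_per_intervention)

for mpi in (10, 100, 10_000, 1_000_000):
    print(f"{mpi:>9} mi/intervention -> "
          f"{p_no_takeover(20, mpi):.4f} chance of a clean 20-mile trip")
```

Under this toy model, going from 10 to 10,000 miles per intervention takes a 20-mile trip from about a 14% chance of needing no takeover to about 99.8% — there's no single point where "supervised" flips to "unsupervised", only more 9s.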
You can of course argue that reduction in mistakes is itself functionality. Well, I draw a distinction between continuous, minor refinements and major enabling technology, like vector reconstruction of 3D space from camera images, or AI-based route planning that, given more data, can plan better.
As far as I understand from Tesla's progress, they need to merely cover ever more corner cases to go up the levels.
And on the topic of supervision: whether you have to keep your hands on the wheel or otherwise supervise the car has a lot to do with policy and regulation. You could have a car today that is safe enough to drive while you're asleep, and good luck trying to sell it without telling customers they must stay alert. This makes level 3 as defined by SAE subject not only to the actual capabilities of a car but also to the regulatory environment in which it's sold.
> As far as I understand from Tesla's progress, they need to merely cover ever more corner cases to go up the levels.
I think this is the crux of the disagreement here. You say they need to merely cover more corner cases, while many (including myself) think that this endless list of corner cases is the primary, almost insurmountable problem.
From what I've seen of Tesla FSD (and competitors), these systems do pretty well in highly structured, orderly environments in clear desert weather. To deal with chaos in a blizzard, etc., we're going to need far more than a few tweaks. At this point, none of these companies are even testing in extreme environments. They're still trying to stop their cars from hitting pedestrians in well-lit areas at known crossing points. [1]