Tesla’s Chief Executive Elon Musk said this month the electric car manufacturer was close to making its cars capable of automated driving without any need for driver input, so-called Level 5 autonomy.
Yeah, right. Tesla has only Level 2 self-driving. They've never demonstrated Level 3 outside a video of a demo in Palo Alto. They wouldn't even let the press take a ride. Google/Waymo has hundreds of self-driving vehicles running around in test. Go to Mountain View and you'll probably see some. If Tesla had anything that really worked, they'd have enough test vehicles running around that it would be visible to the industry.
Waymo One is offering driverless rides in Phoenix AZ. Interestingly, they are now offering only autonomous rides - the "safety driver" is out, to prevent epidemic spread.
Exactly. He said this in 2014. 2016. 2018. Even at the end of 2019:
> "You'll be able to FSD, coast to coast, by the end of this year".
Meanwhile, Teslas are still having a multitude of Autopilot accidents that would not have happened had a human been paying attention (their fault, I understand): plowing at speed into fire engines with emergency lighting on, plowing into overturned semi trailers blocking multiple freeway lanes, steering into barriers.
Humans are horrendous at maintaining constant attention to a task that only requires sporadic attention. If there is a human in the control loop, then the system needs to take human limitations into account. Therefore, I'd be very hesitant to assign fault to the passenger who happens to be sitting in the driver's seat.
I am honestly not sure how to interpret your comment but I'll try to respond. The fault is not with the AP. That's just a computer doing what it was programmed to do. It's with how the system is presented by the people selling it. The marketing is intentionally deceptive for profit and people died for believing it. And Germany didn't ban the system, they banned the deception.
If Wells Fargo or Comcast did the same, the pitchforks would be out. But while Comcast can only hurt you, misusing AP can hurt others too. So having the government intervene is a matter of public safety, and of a government doing what it should do: look out for people's interests, not the corporations'.
When you build a system you have to compensate for its inherent weaknesses. Tesla does the opposite and exploits them for profit (as many other companies do) by capitalizing on well-known human weaknesses: lack of attention, and gullibility, especially with complex topics. They use easy-to-fool attention systems that assume drivers are always engaged, because these allow the AP to disconnect less often and look more capable. And they capitalize on the gullibility by branding the AP as something it clearly is not:
> full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat
> All you will need to do is get in and tell your car where to go.
> manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed.
> use of these features without supervision is dependent [...] regulatory approval, which may take longer in some jurisdictions.
I think the parent poster is trying to make the point that it's unreasonable to ask people to pay attention.
In any case the fault is most certainly not merely marketing, it's a fundamental design flaw. You cannot expect people to maintain constant attention when that attention almost never leads to anything; people simply aren't capable of that. That's also why similarly "boring" tasks are so failure prone (e.g. https://www.forbes.com/sites/michaelgoldstein/2017/11/09/tsa...) and why all kinds of creative measures are needed if you really need people to stare at stuff looking for black swans until their eyes glaze over.
Better marketing is not a solution to that problem. That's merely a solution to the fraudulent advertising (also worth solving - sure!)
> In any case the fault is most certainly not merely marketing, it's a fundamental design flaw.
I was actually referring to the ruling. The court did not take issue with the autopilot system, just with the way it's marketed.
For the rest of your comment I agree and I implied something along the same lines. Tesla should have built a system that works to compensate for human failings, not take advantage of them to sell more.
New roads are sometimes built with gentle curves to keep drivers engaged; traffic light systems synchronized as a "green wave" are sometimes deliberately broken up so drivers don't get into the habit of rolling through lights (especially true of public transport vehicles, where lights sync to the vehicle except when an emergency vehicle is coming); and so on.
Tesla should have installed an advanced driver attention monitoring system even before turning on the AP, and they should have set very, very clear expectations for the system. In reality they did the opposite: people ended up believing that the AP can handle driving by itself so they can "disconnect", while the attention monitoring is completely unfit for the purpose of making sure the human stays "connected".
> > In any case the fault is most certainly not merely marketing, it's a fundamental design flaw.
> I was actually referring to the ruling. The court did not take issue with the autopilot system, just with the way it's marketed.
I meant the parent post you were replying to, i.e. what I think MereInterest meant when he said "If there is a human in the control loop, then it needs to take into account human limitations", to which you replied "I am honestly not sure how to interpret your comment but I'll try to respond. The fault is not with the AP."
I did not mean to imply you thought it was a fundamental design flaw - sorry!
Where I live, yes. On some lines, especially in the city center and at more crowded intersections, public transport gets priority. Where they have stops before an intersection, no matter how long they wait there for passengers to get on or off, the moment they are ready to move the light turns red for everyone else. There are also places where a special light turns red (there is no green) when they pass, in order to guarantee the priority; the rest of the time traffic simply follows the normal right of way.
This makes sense: the light sync system is already in place, and they carry dozens or more passengers at once. This kind of advantage over a car carrying one person gives people more incentive to use public transport.
(And I can also confirm the effect that at intersections where full public transport priority usually guarantees a green some drivers do react a bit... surprised on the rare occasion when it doesn't work for whatever reasons)
> the rare occasion when it doesn't work for whatever reasons
That's actually intentional (or at least I know for a fact that in some places it's by design). It keeps drivers alert and breaks the habit of just going ahead. Sometimes, for whatever reason, the light has to be red, and in some cases the public transport drivers were so used to just going through the intersection that they completely missed the light. Also, many of them get into their personal cars at the end of the day, to the same effect. It's a known, studied effect, and the solution is to "mix it up" a bit.
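To make that concrete, here's a minimal sketch (in Python) of a priority controller that "mixes it up". The names and the 5% denial rate are purely my own illustrative assumptions, not how any real system is configured:

```python
import random

# Hypothetical sketch of the behavior described above: a transit signal
# priority controller that usually gives a ready-to-move bus/tram a green,
# but occasionally withholds it on purpose so drivers never learn to roll
# through on habit. The 5% rate is an assumption for illustration.

PRIORITY_DENIAL_RATE = 0.05  # fraction of requests deliberately refused

def transit_signal(vehicle_ready_to_move: bool) -> str:
    """Return the aspect shown to the transit vehicle."""
    if not vehicle_ready_to_move:
        return "red"  # still boarding passengers; cross traffic keeps flowing
    if random.random() < PRIORITY_DENIAL_RATE:
        return "red"  # intentional denial: the driver must actually look
    return "green"    # normal case: cross traffic is stopped, vehicle proceeds

# Over many requests, roughly 1 in 20 grants is withheld:
grants = [transit_signal(True) for _ in range(1000)]
print(grants.count("green"), "greens,", grants.count("red"), "reds")
```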
Wow. If you don't mind me asking, where is this? I live in a city that is considered relatively bike-friendly, but the timing of the lights is maximally bad for bikers, requiring us to stop at every light.
You are correct. There is a concept in more traditional engineering called “human factors”, meaning you have to include humans (and their limitations) in the design.
Expecting a human to remain attentive all the time is like using a sensor outside its linear range to detect and mitigate a hazard. Good initiative, but bad implementation, and possibly a gap in the design culture.
I find it easier to pay attention when Autopilot is on, for a few reasons. The first will fade eventually: it's fascinating to see it in action and to observe how it reacts to situations. Another is that I am a bit freer to vary my attention safely, checking mirrors more often and seeing more of the scenery (don't worry, with fleeting glances, not long stares that would lead me to miss a truck crossing the road). Finally, it eases the cognitive load of keeping a safe following distance and makes bumper-to-bumper traffic dreamily easy, to the point where I no longer dread it, which is amazing! So I would not call it a design flaw. Checking the mirrors more often is good for safety too, btw.
>the fraudulent advertising
Purported / alleged, rather. The claims of this have been debunked.
That’s your interpretation of the article’s interpretation of the court’s interpretation I guess. Making all the foregoing my interpretation of your interpretation.
Having read the Tesla website myself and understood its claims, they are not false.
I believe the court was concerned that it was theoretically possible that some people might get confused.
Or to put it in a way that is harsh to Tesla: the court feared some consumers could be misled (I would add, misled only due to the consumer's own failure to read and listen to the information Tesla offers at several different opportunities, in varied and timely settings).
So with this in mind, the court took action based on those fears.
The numbers are hard to trivially analyze, because there is no source that tracks "deaths caused by Tesla cars". NHTSA tracks deaths in cars and pedestrian deaths, but e.g. not deaths in other cars involved in an accident, even if the car without the fatality was potentially a contributing factor (fault is kind of irrelevant).
But the numbers we do have make Teslas look pretty dangerous. Tesla's overall fatality rate may be low, but that's true of luxury cars in general (perhaps because they're driven in safer areas, by safer drivers, have fewer safety-related faults, or protect their occupants better). Compared to similarly priced cars, Tesla does poorly (https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...).
Finally, Autopilot specifically is often compared incorrectly: Autopilot isn't used on busy city streets or in chaotic traffic situations (it's not capable of that yet), but those are where most traffic fatalities occur. On the highway, fatalities are comparatively rare, even in "normal" cars. On interstates, in 2012 (I can't find more recent numbers), the fatality rate was 3.38 per billion miles travelled (source: http://www.bast.de/EN/Publications/Media/Unfallkarten-intern...), and in some countries much lower even than that (e.g. 0.72 in Denmark). Since that includes old and cheap cars, you'd expect luxury cars to do better than that; that's the baseline Autopilot should be compared against.
Comparing https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal... with https://cleantechnica.com/2019/10/14/lex-fridmans-tesla-auto... I'd say a lower bound on Autopilot's record is around 2 per billion miles. That's OK, but not very good. We may be missing Autopilot-related fatalities; if we missed even a single one, that would make these numbers look outright poor. That's a very rough estimate; if we want any kind of reliable estimate of Autopilot safety, we'd need much more data than we have (and more data on competitors, too!). Of course, the very fact that we can't tell because data is so sparse is itself good: it means there are few deaths in this category. While driving in general may be dangerous, driving a luxury sedan on an interstate is fairly safe; exactly how safe doesn't really matter.
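To make the arithmetic explicit, here's the back-of-envelope in Python. The inputs are rough readings of the two links above (a handful of known fatalities, on the order of 2 billion Autopilot miles); they are assumptions for illustration, not authoritative figures:

```python
# Back-of-envelope version of the estimate above. Both inputs are assumed,
# not authoritative: a handful of known Autopilot fatalities from the
# Wikipedia list, and roughly 2 billion cumulative Autopilot miles.

autopilot_fatalities = 4     # assumed: known cases from the Wikipedia list
autopilot_miles_bn = 2.0     # assumed: cumulative Autopilot miles, in billions

rate = autopilot_fatalities / autopilot_miles_bn
print(f"Autopilot estimate: ~{rate:.1f} fatalities per billion miles")

# Baselines quoted in the comment (all cars, interstates, 2012):
print("Interstate baseline: 3.38 per billion miles (0.72 in Denmark)")

# Sensitivity: with so few events, a single missed fatality shifts the
# estimate a lot, which is why the comparison is so unreliable.
missed_one = (autopilot_fatalities + 1) / autopilot_miles_bn
print(f"With one missed fatality: ~{missed_one:.1f} per billion miles")
```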
----
All in all, Tesla's safety record looks good compared to the average car, but not very convincing compared to similarly aged, similarly priced cars - i.e. Tesla's competition. If you were to base your buying choice on safety, you probably should not buy a Tesla; and you definitely should not buy the marketing nonsense. On the upside, it simply doesn't matter much: driving on autopilot-enabled roads is fairly safe anyhow, regardless of whether Autopilot helps or harms safety.
As of a recent update a month or two back, it is now. It is ever improving as they roll out new features.
Some of which are safety features. It’s a moving target. Whatever stats or safety records or flaws or concerns you think you know about today can be invalidated tomorrow.
In fact we just got an update tonight. I can’t tell you what’s in it since I haven’t gone downstairs to the car to read the release notes yet.
Oh definitely. The flaws may well be fixed, and aren't huge anyhow. But they're real enough to make Tesla's past claims of being super-duper safe misleadingly cherry-picked (effectively they were lying), so it's a little hard to take new claims at face value.
Overall, I agree with the decision that Tesla did falsely advertise by overstating what their Autopilot could do. The point I was attempting to make was that that was not the only thing wrong. Specifically, I took the "their fault" in the GP's parenthetical to refer to the safety driver, implying that the crashes were the fault of the human in the car not paying sufficient attention. I disagree with that statement, as paying high levels of attention to tasks that require zero attention the majority of the time is fundamentally not something that humans can do.
That's a bit cynical; a "single-digit number" of people died simply because they believed the marketing. Ultimately it is their fault for taking the marketing over common sense and legal requirements, but you can't handwave it away because "not enough people died". You can't blame a government for intervening to make sure more people don't fall for this kind of deceptive advertising tactic.
No car manufacturer today offers anything resembling true self driving. Not even close. But Tesla is the only manufacturer who insists they do, and that any issues are of a legal and regulatory nature. Wink wink nudge nudge. That is simply not true, and if it weren't for drivers repeatedly intervening, the AP would cause far more than a single-digit number of accidents or deaths.
And this is not an effort against Tesla; it's an effort for consumers and safety, and for the right to be informed about what you're actually buying.
That's really not true. Those are people who consciously chose to ignore the numerous warnings presented, both in the marketing (where the exact phrase "requires conscious human attention" is used) and in the car itself, which constantly says things like "keep the eyes on the road" and "need to stay alert", warns every 10 seconds, plays audio signals, turns off when it knows the human is not attentive, etc.
It is very unfortunate when people die, but you have to take into account how many people die in other cars in general. If we want to get 0 traffic deaths, we need to ban all cars, period. As a society, we do not want this.
> But Tesla is the only manufacturer who insists they do
This is just blatantly false. No, they don't. Show me any marketing material where Tesla claims to have operational Level 5 self-driving in cars currently available to consumers. Show me where it says that. Not some article in some mainstream media outlet where a journalist tries to get attention, but Tesla-distributed materials, marketing or otherwise, where they claim to have Level 5. Where is it? Why would you even make such a claim when it is so easy to check...
Yes: "All new Tesla cars come standard with advanced hardware capable of providing Autopilot features today..."
No: "...and full self-driving capabilities in the future—through software updates designed to improve functionality over time."
Yes: "The person in the driver's seat is only there for legal reasons. He is not doing anything. The car is driving itself."
Yes: Video showing self driving car on real roads with no hands on wheel or feet on pedals.
No "Autopilot advanced safety and convenience features are designed to assist you with the most burdensome parts of driving."
Yes: "The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat."
Yes: "All you will need to do is get in and tell your car where to go."
No: "The future use of these features without supervision is dependent on achieving reliability far in excess of human drivers as demonstrated by billions of miles of experience, as well as regulatory approval, which may take longer in some jurisdictions."
This is not what I would describe as clear. It's a contrived dance around the truth, designed to help you arrive at a conclusion without ever quite saying it outright - a deliberate marketing strategy. To be honest, combined with the use of Autopilot with a capital A, we're not that far off subliminal messaging at this point.
A more honest marketing message would improve safety.
My point was that the page as a whole is not clear, but even the sample you have selected is problematic.
"will need" also implies certainty. This feature may never be available within the lifetime of the product being sold today, or indeed the lifetime of the customer or the company. This is misleading.
Tesla is making unverifiable claims about the future of a product which is on sale now.
It is a fact of the digital distribution age that traditional marketing doesn't fit the model of products that can change post-purchase. Tesla is certainly not alone in this; there have been some very famous cases, like No Man's Sky launching with fewer features than advertised, or iPhone performance degrading after an iOS update.
However, this is a machine which can kill people, not a computer game missing content. Let's err on the side of caution with the marketing and not allow companies to advertise what their products can't (yet) do.
> but you have to take into account how many people die in other cars in general
I do, and I see a completely different topic. What's your point? That people die in accidents? Sure. That companies should be allowed to mislead even if this leads to deaths? Those deaths in "other cars" are not because the manufacturer misled the consumer. And if a manufacturer does, they'll probably get the same treatment, since there are laws against false advertising. It's done all the time [0]. The end result of all those practices is that the consumer is deliberately misled into getting the wrong impression, even without explicit lying. There's nothing special about Tesla's case in this context.
> This is just blatantly false. No they don't.
> Why would you even make such a claim that is so easy to check
I checked, and quoted from the Tesla website literally 5 cm above on your screen [1]. Feel free to point me to any other manufacturer who describes their technology like that. Or to any other €150,000 car with advanced self-driving tech whose driver attention detector can be tricked with an orange, because the manufacturer refuses to implement proper systems based on cameras or capacitive sensors. Or to any manufacturer whose driver assists have caused deaths.
This isn't about technology failing. It's about intentionally propagating an image of that technology that makes it look more capable than it actually is. Tesla's marketing material consistently leads people to form a false impression of the system. That in itself is a good enough reason to label it deceptive.
I’m genuinely surprised there hasn’t been a class action lawsuit over selling “full self driving” on Tesla vehicles. That’s been going on long enough to tip over from “over-optimistic” into “straightforward fraud”.
Tesla has only been promising "FSD" on cars sold after 2016. The service life of most vehicles is on the order of 10 years or so, so I don't think you'll see serious pressure for a class action until the mid-2020s (but I do think it's coming).
About 50% of luxury car owners lease their cars, and 80% of non-Tesla EV owners, compared to country-wide averages in the 25-30% range. So while most of the Teslas sold with FSD are still on the road, a fairly sizable chunk of the 2016 vehicles may already be on their second owner, with the first owner having paid for FSD they never got.
Now what’s less clear is how many Tesla owners have re-upped their lease, and how many bought the car they leased. Typically, luxury owners lease their cars so they can get the latest features every 3 years, but Tesla hasn’t made many physical changes to the Model 3 since it was introduced, with a lot of the changes being done in software.
What I meant is that Tesla has an easy out until the first cars they sold with FSD reach the end of their service life. Shitty for the early adopters, yes, but not definitively false advertising yet.
Technically, yes, but only a handful (<5) of vehicles run without a safety driver, each has a dedicated remote operator and a follower vehicle, and the area covered is much smaller than their regular operation (a small area in the suburbs).
I would really hope the remote operator is just there to punch a kill switch and force the vehicle to stop.
No idea if that's the case; remote-piloted drone cars just seem like a bad idea -- lag, loss of signal, hijacking, etc.
(Although if it is technologically feasible and we're willing to go that route, it seems like you could make e.g. long-distance truck driving safer that way -- if you have truck pilots working from an office complex in suburbia, you could change drivers without stopping the vehicle, allowing drivers to take frequent breaks and work ~8 hour shifts before going home to sleep in their own beds, while the trucks drive non-stop. If most big-rig accidents are caused by drivers driving too long without enough sleep and rest, you should be able to get substantial decreases without the overhead of sending multiple drivers with each truck.)
Given Alphabet/Google's record of shutting down moonshots--or really, services more broadly--what are the odds that Waymo is going to be around in a few years, especially if there's a serious ad-tech fallout?
ADDED: Google has been pretty clear that they're only interested in door-to-door self-driving and not in incremental driver assistance or freeway-only hands-off. If that goal turns out to be something for an indefinite future--as seems entirely possible--I'm not sure why Google (of all companies) would continue to invest for decades towards an uncertain eventual goal.
It probably wouldn't go to zero. The IP and other assets are likely worth something to an automaker or auto supplier. But that number would doubtless be a lot less than valuation based on being the pre-eminent self-driving company in the very near future.
Dude, you are ignoring a whole host of other factors here. Since you ignored them, let me clarify. Waymo does not do freeways. It uses HD maps, bulky lidar systems with many moving parts, speed limits, etc. Waymo will never be a real thing solely because of the power draw of all those electronics: mounted on an EV, that system will drain the batteries quickly and add weight; added to an ICE car, it will suck the range. The same is true for all other car manufacturers attempting full autonomy. Tesla is the only company that even has a remote possibility of getting the feature on the road, and if they market that reality as something of a pre-order kind of thing, I'm not sure I have a problem with that. Personally, I feel they would need another revision of their FSD chip and a refresh of their current ML model to realize it.
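To put rough numbers on the power-draw point, here's a back-of-envelope in Python. Every input is my own illustrative assumption (sensor-plus-compute stacks are often ballparked in the low single-digit kW; a typical EV uses around 250 Wh/mile); none of these figures come from Waymo or any manufacturer:

```python
# Rough arithmetic on the range cost of a full sensor + compute stack.
# All inputs are assumptions for illustration, not measured figures.

sensor_stack_kw = 2.0      # assumed continuous draw of lidar + compute
ev_wh_per_mile = 250.0     # assumed baseline EV consumption
avg_speed_mph = 40.0       # assumed average speed

# Extra energy the stack consumes per mile at that speed:
extra_wh_per_mile = sensor_stack_kw * 1000.0 / avg_speed_mph  # = 50 Wh/mile

range_loss = extra_wh_per_mile / (ev_wh_per_mile + extra_wh_per_mile)
print(f"Extra draw: {extra_wh_per_mile:.0f} Wh/mile, "
      f"range reduction: {range_loss:.0%}")
# ~50 Wh/mile on top of 250 Wh/mile -> roughly a 17% range hit.
```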