I am honestly not sure how to interpret your comment but I'll try to respond. The fault is not with the AP. That's just a computer doing what it was programmed to do. It's with how the system is presented by the people selling it. The marketing is intentionally deceptive for profit, and people died because they believed it. And Germany didn't ban the system, they banned the deception.
If Wells Fargo or Comcast did the same, the pitchforks would be out. But Comcast can only hurt you; misusing AP can hurt others too. So having the government intervene is a matter of public safety and of doing what a government should do: look out for people's interests, not the corporations'.
When you build a system you have to compensate for inherent human weaknesses. Tesla does the opposite and exploits them for profit (as many other companies do) by capitalizing on well-known human weaknesses: lack of attention, and gullibility, especially with complex topics. They use easy-to-fool attention checks that assume drivers are always engaged, because those let the AP disengage less and look more capable. And they capitalize on the gullibility by branding the AP as something it clearly is not:
> full self-driving in almost all circumstances. The system is designed to be able to conduct short and long distance trips with no action required by the person in the driver’s seat
> All you will need to do is get in and tell your car where to go.
> manage complex intersections with traffic lights, stop signs and roundabouts, and handle densely packed freeways with cars moving at high speed.
> use of these features without supervision is dependent [...] regulatory approval, which may take longer in some jurisdictions.
I think the parent poster is trying to make the point that it's unreasonable to ask people to pay attention.
In any case the fault is most certainly not merely marketing, it's a fundamental design flaw. You cannot expect people to maintain constant attention when that attention almost never leads to anything; people simply aren't capable of that. That's also why similarly "boring" tasks are so failure-prone (e.g. https://www.forbes.com/sites/michaelgoldstein/2017/11/09/tsa...) and why all kinds of creative measures are needed if you really need people to stare at stuff looking for black swans until their eyes glaze over.
Better marketing is not a solution to that problem; it's merely a solution to the fraudulent advertising (which is also worth solving, sure!)
> In any case the fault is most certainly not merely marketing, it's a fundamental design flaw.
I was actually referring to the ruling. The court did not take issue with the autopilot system, just with the way it's marketed.
As for the rest of your comment, I agree, and I implied something along the same lines: Tesla should have built a system that compensates for human failings, not one that takes advantage of them to sell more.
Some new roads are deliberately built with gentle curves to keep drivers engaged; some traffic light systems synchronized as a "green wave" are occasionally interrupted so drivers don't run through lights out of habit (especially on public transport routes where lights sync to the vehicle, except when an emergency vehicle is coming); and so on.
Tesla should have installed an advanced driver attention monitoring system even before turning on the AP, and they should have set very, very clear expectations about the system. In reality they did the opposite: people ended up believing the AP can handle driving by itself so they can "disconnect", while the attention monitoring is completely unfit for the purpose of making sure the human stays "connected".
> > In any case the fault is most certainly not merely marketing, it's a fundamental design flaw.
> I was actually referring to the ruling. The court did not take issue with the autopilot system, just with the way it's marketed.
I meant the parent post you were replying to, i.e. what I think MereInterest meant when he said "If there is a human in the control loop, then it needs to take into account human limitations", to which you replied "I am honestly not sure how to interpret your comment but I'll try to respond. The fault is not with the AP."
I did not mean to imply you thought it was a fundamental design flaw - sorry!
Where I live, yes. On some lines, especially in the city center and at more crowded intersections, public transport gets priority. Where they have stops before an intersection, no matter how long they wait there for passengers to get on or off, the moment they are ready to move the light turns red for everyone else. There are also places where a special light turns red (there is no green) as they pass, to guarantee their priority, while the rest of the time traffic simply follows the normal right of way.
This makes sense: the light sync system is already in place, and they carry dozens or more passengers at once. This kind of advantage over a car carrying one person gives people more incentive to use public transport.
(And I can also confirm the effect: at intersections where full public transport priority usually guarantees a green, some drivers do react a bit... surprised on the rare occasion when it doesn't work for whatever reasons.)
> the rare occasion when it doesn't work for whatever reasons
That's actually intentional (or at least I know for a fact that in some places it's by design). It keeps drivers alert and breaks the habit of just going ahead. Sometimes, for whatever reason, the light has to be red, and in some cases the public transport drivers were so used to just going through the intersection that they completely missed it. Many of them also get into their personal cars at the end of the day, with the same effect. It's a known, studied effect, and the solution is to "mix it up" a bit.
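As a toy sketch of that "mix it up" design choice (purely hypothetical Python, not modelled on any real signal controller; the 10% denial rate is an arbitrary number picked for illustration):

```python
import random

# Toy model of a transit signal priority controller: it usually grants
# public transport a green on request, but occasionally withholds it so
# drivers don't learn to run the intersection on habit.
# The 10% denial rate is an arbitrary illustrative value.
DENIAL_PROBABILITY = 0.10

def grant_priority(rng: random.Random) -> bool:
    """Return True if the transit vehicle gets its priority green."""
    return rng.random() >= DENIAL_PROBABILITY

rng = random.Random(42)  # seeded so the demo is reproducible
requests = 20
granted = sum(grant_priority(rng) for _ in range(requests))
print(f"{granted}/{requests} priority requests granted; "
      f"{requests - granted} deliberately denied to keep drivers alert")
```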
Wow. If you don't mind me asking, where is this? I live in a city that is considered relatively bike-friendly, but the timing of the lights is maximally bad for bikers, requiring us to stop at every light.
You are correct. There is a concept in more traditional engineering called “human factors”, meaning you have to include humans (and their limitations) in the design.
Expecting a human to remain attentive all the time is like using a sensor outside its linearity range to detect and mitigate a hazard. Good initiative, but bad implementation, and possibly a gap in the design culture.
I find it easier to pay attention when autopilot is on, for a few reasons. The first will fade eventually: it's fascinating to see it in action and to observe how it reacts to situations. Another reason is that I am a bit more free to vary my attention safely, checking mirrors more often, seeing more of the scenery (don't worry, with fleeting glances, not long stares that would lead me to miss a truck crossing the road). Finally, it eases the cognitive load of keeping a safe follow distance and makes bumper-to-bumper traffic dreamily easy, to the point where I no longer dread it, which is amazing! So I would not call it a design flaw. Checking the mirrors more often is good for safety too, btw.
> the fraudulent advertising
Purported / alleged. The claims have been debunked.
That’s your interpretation of the article’s interpretation of the court’s interpretation, I guess. Which makes all the foregoing my interpretation of your interpretation.
Having read the Tesla website myself and understood its claims, I don't find them false.
I believe the court was concerned that it was theoretically possible that some people might get confused.
Or, to put it in a way harsh to Tesla: the court feared some consumers could be misled (I would add, misled only due to the consumer’s own failure to read and listen to the information Tesla offered at several different opportunities, in varied and timely settings).
So with this in mind, the court took action based on those fears.
The numbers are hard to analyze trivially, because there is no source that tracks "deaths caused by Tesla cars". NHTSA tracks deaths in cars and pedestrians, but not, e.g., deaths in other cars involved in an accident, even if the car without the fatality was potentially a contributing factor (fault is kind of irrelevant).
But the numbers we do have make Teslas look pretty dangerous. Tesla's overall fatality rate may be low, but that's true of luxury cars in general (perhaps because they're driven in safer areas, by safer drivers, have fewer safety-related faults, or protect their occupants better). Compared to similarly priced cars, Tesla does poorly (https://medium.com/@MidwesternHedgi/teslas-driver-fatality-r...).
Finally, the autopilot specifically is often compared incorrectly: autopilot isn't used on busy city streets or in chaotic traffic situations (it's not capable of that yet), but those are where most traffic fatalities occur. On the highway, fatalities are comparatively rare, even in "normal" cars. On interstates, in 2012 (I can't find more recent numbers), the fatality rate was 3.38 per billion miles travelled (source: http://www.bast.de/EN/Publications/Media/Unfallkarten-intern...), and in some countries much lower even than that (e.g. 0.72 in Denmark). Since that includes old and cheap cars, you'd expect luxury cars to do better; that's the baseline autopilot should be compared against.
Comparing https://en.wikipedia.org/wiki/List_of_self-driving_car_fatal... with https://cleantechnica.com/2019/10/14/lex-fridmans-tesla-auto... I'd say a lower bound on autopilot's fatality rate is around 2 per billion miles. That's OK, but not very good. We may be missing autopilot-related fatalities; if we missed even a single one, these numbers would look outright poor. That's a very rough estimate; for any kind of reliable estimate of autopilot safety you'd need much more data than we have (and more data on competitors!). Of course, the very fact that we can't tell because the data is so sparse is itself good news: it means there are few deaths in this category. While driving in general may be dangerous, driving a luxury sedan on an interstate is fairly safe; exactly how safe doesn't really matter.
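To make the back-of-envelope arithmetic concrete, here's a minimal Python sketch. The Autopilot fatality count and mileage below are my own rough assumptions for illustration, not figures taken verbatim from the linked sources; the baselines are the ones quoted above:

```python
# Back-of-envelope fatality-rate comparison, per billion miles travelled.
# The Autopilot inputs are illustrative assumptions, NOT figures taken
# from the linked sources - swap in whatever numbers you trust.

def rate_per_billion_miles(fatalities: int, miles: float) -> float:
    """Fatalities per billion miles travelled."""
    return fatalities / (miles / 1e9)

# Assumed: ~4 publicly known Autopilot-involved fatalities over
# ~2 billion Autopilot miles (order-of-magnitude guesses only).
autopilot = rate_per_billion_miles(4, 2e9)

# Baselines quoted above: all cars on interstates (2012) and in Denmark.
interstate_all_cars = 3.38
denmark_all_cars = 0.72

print(f"Autopilot (assumed):   {autopilot:.2f} per billion miles")
print(f"Interstates, all cars: {interstate_all_cars:.2f} per billion miles")
print(f"Denmark, all cars:     {denmark_all_cars:.2f} per billion miles")

# Sensitivity check: even one missed fatality moves the estimate by 25%,
# which is the point above about sparse data.
print(f"With one more fatality: {rate_per_billion_miles(5, 2e9):.2f}")
```

Swap in your own inputs; the point is how much a single extra data point moves the result.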
----
All in all, Tesla's safety record looks good compared to the average car, but not very convincing compared to similarly aged, similarly priced cars, i.e. Tesla's competition. If you were to base your buying choice on safety, you probably should not buy a Tesla; and you definitely should not buy the marketing nonsense. On the upside, it doesn't matter much: driving on autopilot-enabled roads is fairly safe anyhow, regardless of whether autopilot helps or harms safety.
As of a recent update a month or two back, it is now capable of that. It is ever improving as they roll out new features.
Some of those are safety features. It's a moving target: whatever stats, safety records, flaws, or concerns you think you know about today can be invalidated tomorrow.
In fact we just got an update tonight. I can’t tell you what’s in it since I haven’t gone downstairs to the car to read the release notes yet.
Oh definitely. The flaws may well be fixed, and aren't huge anyhow. But they're real enough that Tesla's past claims of being super-duper safe look misleadingly cherry-picked (effectively, they were lying), so it's a little hard to take new claims at face value.
Overall, I agree with the decision that Tesla falsely advertised by overstating what their autopilot could do. The point I was attempting to make was that that was not the only thing wrong. Specifically, I took the "their fault" in the GP's parenthetical to refer to the safety driver, implying that the crashes were the fault of the human in the car not paying sufficient attention. I disagree with that, because paying high levels of attention to a task that requires zero attention the majority of the time is fundamentally not something humans can do.