Stuff like this is completely expected of nonlinear, high-dimensional controllers (and neural networks used for object detection and decision analysis fit into this category)...
NNs are not fit for high-predictability, high-safety-factor control. I wonder if, in fact, we cannot construct an NN controller with the level of unusual object detection and edge-case scenario recognition required to match an average human driver's capability to recognize and react appropriately to unusual situations. I mean to say that I think this is something like an NP-hard problem... to make it one-log better you need 100x the data and 100x the NN dimensionality. Model complexity explodes...
I also think we are spending a lot of time and effort solving the wrong part of the transportation problem. Cars have a thermodynamic problem for society --> a 1500kg car to move 1-2 80kg people is very energy inefficient and resource intensive to manufacture.
We need remote work, more home delivery options (e-truck/vehicles), greater incentive for electric bikes, and better/more public transportation.
Cars should not detect "obstacles". They should detect roads. They should not seek _exceptions_ to driveable space through classification- or segmentation-first strategies. They should only detect driveable space. Full stop.
You either detect a full-stopping-distance's worth of clear road through reliable (non visual) ranging with dense enough samples to preclude wheel-catching voids ... or you will continue to run into "objects" which slipped through your classifier.
I can accept reduction in this margin (e.g. in dense traffic flow) when it is first shown to work with margin.
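Back-of-the-envelope, the check I have in mind looks something like this sketch (parameters are illustrative, not from any production stack):

    # Is there a full stopping distance of verified-clear road ahead?
    G = 9.81  # m/s^2

    def required_clear_range(speed_ms, reaction_s=1.0, mu=0.7, margin=1.2):
        """Reaction distance + braking distance, times a safety margin."""
        braking = speed_ms ** 2 / (2 * mu * G)
        return margin * (reaction_s * speed_ms + braking)

    def may_proceed(speed_ms, verified_clear_m):
        """verified_clear_m: how far ahead dense ranging confirms drivable surface."""
        return verified_clear_m >= required_clear_range(speed_ms)

    # e.g. at 30 m/s (~108 km/h) you need roughly 115 m of confirmed-clear road
    print(required_clear_range(30.0))

The margin is exactly the knob I would only relax after the conservative version is shown to work.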
Still, there are some exceptions. A kid has collapsed and is lying down in a lane, in such a way that the lane markings are NOT crossed and the head is further than the legs from the car's (and your) point of view. The car MUST stop.
Or maybe that's just a fox lying down there - and depending on the road, speed, etc. - you take more risk by dodging or braking, and the car SHOULD just go.
Any anomaly makes the space non-driveable. It’s a road. An autopilot should not be entrusted to decide further, beyond anomaly detection sufficiently far ahead to allow for braking.
If there were anywhere that could have had full autonomy even a decade ago it would be train networks, and it's not for lack of obstacle detection technology that they don't. The problem is contextually dependent enough even on an entirely linear rail network that humans need to add their problem solving finesse regularly.
I just don't think autonomous cars are a problem worth working on if the requirement is that they need to work on existing road infrastructure. We are going to get half-baked, not-really-autonomous vehicles, and the manufacturers are going to shift blame to the drivers because they should have known better than to use their vehicle's advertised autonomy in (x) specific circumstances, where x is the infinitely variable realities of the outside world.
The idea originally expressed by jvanderbot presumes dense non-visual (e.g. LIDAR) scanning and simple anomaly detection without AI “black boxes”.
With that in mind, is this really a question of context? The presence of an entity that reflects LIDAR should be enough of a reason for the system to brake.
Yes, this would imply a lower threshold for what counts as a situation that warrants braking, and hence on many ordinary streets the autopilot may become unusable, or car movement could be perceived as too slow.
To your point about rail network, autonomous rail lines do exist—I’ve used the new light rail system in Macau, and trains are entirely driverless. (From my observation of their behavior, they do not appear to be controlled remotely either.) I imagine some factors that impede innovation in this space do apply to cars (e.g., track security), but others don’t (guaranteeing stopping distance for long trains with heavy cargo, keeping people employed, arguably less flexible infrastructure).
Of course it's still a question of context; there are more factors at play than "Should I stop, Y/N", such as "Will stopping actually work given the conditions, my payload, is the object moving toward me. Should I swerve instead?"
My point is if the best we can do is a LIDAR detection auto-stop then that's not full autonomy, and I question how close we'll actually get to contextually aware autonomy.
> To your point about rail network, there are autonomous rail lines—I’ve used the new light rail system in Macau, and trains are entirely driverless.
That is actually really cool to hear. I know I come across as a Luddite by having a pessimistic view on car autonomy, but I will be happy if we can crack it.
> Will stopping actually work given the conditions, my payload
If there is rain and/or your payload is heavy, that increases your braking distance, so accordingly detection should either work further ahead or your speed would have to be reduced to allow for safe braking.
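Put differently, you can solve for the highest speed a given detection range supports; a toy sketch with made-up friction values (rain roughly halving grip here, purely for illustration):

    import math

    G = 9.81  # m/s^2

    def max_safe_speed(detect_range_m, reaction_s=1.0, mu=0.7):
        """Largest v with reaction_s*v + v^2/(2*mu*G) <= detect_range_m."""
        a = 1.0 / (2 * mu * G)   # quadratic (braking) term
        b = reaction_s           # linear (reaction) term
        c = -detect_range_m
        return (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)

    print(max_safe_speed(100, mu=0.7))  # dry-ish: ~31 m/s
    print(max_safe_speed(100, mu=0.4))  # wet, or heavily loaded: ~24 m/s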
> is the object moving toward me
This is a very valid point. If not all vehicles are autonomous, even on a well-protected highway it is possible to have a cascading accident caused by human mistake. This would even be an issue if all vehicles are autonomous, since some could be hijacked by owners.
In my view this type of “dumb” object detection for emergency braking purposes should work together with contextually aware AI-based overall autonomy, not instead of it. Upthread I wrote “an autopilot should not be entrusted to decide further”, which was perhaps ambiguous—I meant that in the context of an anomaly ahead.
Without some incredible breakthrough in AI, trying to adapt autonomous cars to the regular road network is like swimming up a cascade. We're far more qualified to adapt the road infrastructure to autonomous driving, perhaps even remove most of the cars and driving in the process, leaving only the bare minimum.
But if the road becomes non-"drivable" at a point ahead where you can't stop in time, you need to evaluate what to do (hit it, try to evade, etc.), which are all hard decisions.
> An autopilot should not be entrusted to decide further, beyond anomaly detection sufficiently far ahead to allow for braking.
This isn't possible. Not just technically, but theoretically. Situations in streets can change way too fast. Both through external factors and other cars. Even on a straight highway with fences on the side to prevent anything from getting onto it, this still would not work. Once it happens, an immediate decision is needed which cannot be reached in time by delegating to the driver (since the driver wasn't driving, you can't expect a good reaction speed even if that person had their eyes on the road).
> This isn't possible. Not just technically, but theoretically. Situations in streets can change way too fast.
Then drive slower. It's a speed limit not a speed minimum.
When I was training for my driver's license (in Denmark) my instructor told me to never drive so fast that I wouldn't be able to brake to a complete stop within the current visible distance.
My instructor (in the UK) gave me the same instruction. To be fair, it's pretty intuitive. It's also remarkable how few people follow it (e.g. they don't slow down sufficiently around blind bends, or when there are pedestrians close by).
Eliminate situations where you can’t stop in time. If impossible, then autopilot should not be marketed as such.
On highways, enough clearance from the sides of the road and wide enough detection beam cone may be able to help. Yes, I imagine in such a case high-speed motorways would warrant security measures similar to those around high-speed train tracks.
On denser city streets where restricting pedestrian access is neither feasible nor desirable[0], appropriate (yes, probably pretty low) speed limits could reduce braking distance and hence the chance of a tragic accident to below what currently happens with human drivers.
[0] This concerns existing streets. For new areas, infrastructure that separates car and pedestrian traffic with tunnels/bridges and accessible over-/underpasses could address that naturally.
Love this idea. If autopilot cannot be engaged on an unsafe road, then maybe we will actually complain enough to improve roads and supporting infrastructure.
This doesn't work because laterally moving obstacles can enter the roadway within your stopping distance buffer. A self-driving car MUST be able to identify obstacles, especially moving obstacles. Your stopping distance at highway speed may be equivalent to one football field, but you may be able to anticipate obstacles if you identify laterally moving objects before they enter the roadway.
Your suggested scheme for driving offers zero protection from cross traffic cutting in front of the autonomous vehicle, whether at a stoplight or otherwise.
Moreover, not all non-driveable space is created equal, so if the vehicle has to swerve to avoid a collision, it should be able to identify the difference between, say, a clearing, a pedestrian, and a pond.
These are totally basic principles with which any driver (and especially any motorcyclist) would have no trouble. The problem with your imperative statement is that it produces a pointlessly elegant, practically bad solution.
"First" does not mean "exclusively". But detection of an impending obstacle should not start with classifying that obstacle. You need only know that some geometry that looks larger than trivial is moving at a vector that will impeded your stopping distance requirement. Until that is proven to work, it is pointless to try to "understand" the object or its intent.
I'll go further. To avoid clearings, ponds, and pedestrians, self-driving cars should be highway-daytime only until proven reliable. Starting with the edge cases and building back to the easy ones is ridiculous engineering policy.
Ok, the wind blows an empty paper bag onto the road. A human will see by the way it moved that it is empty and safe to drive over. What will the AI see? A drivable road, or a road with a rock on it?
I still think that without the "model of the world" no AI will be capable of driving. And that model requires GAI.
Interestingly, I remember this exact scenario was a noted problem for some sort of computer-controlled shock absorbers that I saw on the "Beyond 2000" TV show in the mid-90s. In many scenarios, the shock absorbers could detect bumps and holes in the road and compensate perfectly, resulting in a ride that was smooth as glass.
But they still hadn't figured out how to differentiate between a crumpled paper bag and a rock, or a pothole filled with water vs a smooth surface. Otherwise, it was really impressive.
I feel like Level 4 or 5 Autonomous Driving will be the same -- there are just some problems that will never be solved well enough to make it work acceptably.
I think the vehicle should alert the driver and begin a controlled slowdown until it is overridden. Sorry, driver, you are inconvenienced. Maybe you are in the carpool / AI-only lane, so it's not a big deal. Yes, there probably should be an AI-only lane for now.
This could exist now, but we have set sights on Level 5 at the expense of pragmatism.
Peripheral vision and detection are extremely important to avoiding accidents... what about cross traffic or children running into the road (as others have mentioned)? Detecting traffic lights that are out... etc.
If you turn up the forward looking anomaly detection and safety factor too much... well, among other things, the non-linear controller is going to emergency brake for no good reason (Tesla owners have already reported this). Spurious emergency braking is extremely dangerous for traffic behind a vehicle and the danger increases at higher speeds where the controller will be even more touchy.
Methods for predicting "drivability" or "clearness" of roads based on moving object detection are no different from what I said and are not novel in R&D / literature or practice. I'm asking for exactly that: absolute conservatism first.
In my opinion, autonomous driving should be highway only, daytime only until proven effective in mass produced vehicles. The hype about rushing to level 5 is marketing, not a feasible technology roadmap.
So, turn roads into closed train tracks? That will take even longer than creating a true AI... people drive cars, people cross roads, stuff gets on roads, there are so many vehicles some do break down... What you suggest would lead to self-driving cars turning most roads into endless traffic jams.
That sounds like speculation. But anyway, there's no reason to open the floodgates in an unprincipled way just because being principled and iterative is challenging.
The difficulty of the situation in general is why limitations should be placed on the system and a conservative approach taken in practice. Trying to circumvent difficulty with complexity is not an option for the first roll-out of a product.
Deploy self-driving on marked single-lane AI-only highways. Maybe let self-driving cars use bus lanes. But that's it for now, until we can certify and understand limitations of a system. It's only fair to the public to have slow rollouts that require mostly-attentive driving.
However one big problem with deploying systems using this approach today is that it happens to be very hard to do this with enough nines that your car isn't slowing down or emergency braking for spurious inputs. Floating plastic bag / leaf - trash on the road - asphalt line that looks wrong - camera glare - a puddle with a reflection - giant sticker of a photo of a traffic cone - etc. etc. etc. State of the art would probably be undriveably cautious today.
Most things that don't look like road, between lane markers on a highway, are still drivable space. Probably 99%.
That doesn't mean it's the right approach. Just that the alternative isn't really deployable today.
I agree. To elaborate, I think that "looks" is the problem. Relying on appearance is a fundamental mistake. A road is a predictable flat surface that is safe for wheels at speed. If the map matches your ranging probes, and no non-trivially sized geometry impedes or will impede your progress, you can drive.
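As a toy version of what "map matches your ranging probes" could mean, assuming a locally planar road surface and made-up thresholds:

    import numpy as np

    # (x, y, z) range returns in the vehicle frame; anything sticking out of the
    # expected surface by more than a threshold is an anomaly -- no classifier.
    def anomalies_present(points, plane_z=0.0, height_thresh=0.15, min_cluster=5):
        points = np.asarray(points, dtype=float)
        residual = np.abs(points[:, 2] - plane_z)
        return int((residual > height_thresh).sum()) >= min_cluster

    # A patch of road returns plus a ~0.5 m tall object gets flagged.
    road = np.column_stack([np.random.uniform(5, 60, 200),
                            np.random.uniform(-2, 2, 200),
                            np.random.normal(0.0, 0.02, 200)])
    obstacle = np.column_stack([np.random.uniform(30, 31, 20),
                                np.random.uniform(-0.5, 0.5, 20),
                                np.random.uniform(0.4, 0.6, 20)])
    print(anomalies_present(np.vstack([road, obstacle])))  # True
    print(anomalies_present(road))                         # almost always False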
This should immediately preclude any deployment of self-driving cars where there are pedestrians, bags of trash, debris, cross traffic, etc. Highway (and probably daytime) only. I don't understand the obsession with starting with corner cases and working back to the very real good that self-driving autonomy can do on highways at speed. Build back with lessons learned, rather than solving the whole problem in one shot. That seems like fundamental engineering, but the marketing team is in charge, not the engineers.
The problem that almost all current AI research has is that nobody knows if your model is detecting the obstacle or the absence of a road.
The cliché example is when people tried to build a tank classifier. Since they took the pictures of the different tank types on different days, the AI learned to detect the weather conditions instead.
So to determine whether your AI is detecting the road's absence, you would first need a lot of testing images that look realistic in general, but with the road absent. As far as I know, such a dataset does not publicly exist yet.
The computer could draw what it sees on the windshield for the driver, tracking the position of the driver's head to position the projected image correctly. E.g., draw a red outline around obstacles, a green outline for clear road, predicted trajectories in orange, and the selected path in blue. The human would see what the computer sees, so they would be able to react quickly.
It may or may not have happened with tanks; it sure happened with horses:
To understand how their AI reached decisions, Müller and his team developed an inspection program known as Layerwise Relevance Propagation, or LRP. It can take an AI’s decision and work backwards through the program’s neural network to reveal how a decision was made.
In a simple test, Müller’s team used LRP to work out how two top-performing AIs recognised horses in a vast library of images used by computer vision scientists. While one AI focused rightly on the animal’s features, the other based its decision wholly on a bunch of pixels at the bottom left corner of each horse image. The pixels turned out to contain a copyright tag for the horse pictures. The AI worked perfectly for entirely spurious reasons. “This is why opening the black box is important,” says Müller. “We have to make sure we get the right answers for the right reasons.”
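For anyone curious what "working backwards through the network" looks like mechanically, here is a toy numpy sketch of LRP's epsilon rule for a single dense layer (nothing like a full implementation; the numbers are made up):

    import numpy as np

    def lrp_epsilon(a, W, b, R_out, eps=1e-6):
        """a: input activations (n,), W: weights (n, m), b: bias (m,),
        R_out: relevance at the layer's output (m,). Returns per-input relevance."""
        z = a[:, None] * W                      # contribution of input j to output k
        denom = z.sum(axis=0) + b               # pre-activations
        denom = denom + eps * np.sign(denom)    # stabilizer
        return (z / denom * R_out).sum(axis=1)  # redistribute relevance to inputs

    # Tiny example: 3 "pixels" -> 2 outputs, relevance concentrated on output 0.
    a = np.array([1.0, 0.5, 0.0])
    W = np.array([[2.0, 0.1], [0.5, 0.1], [3.0, 0.1]])
    b = np.zeros(2)
    print(lrp_epsilon(a, W, b, R_out=np.array([1.0, 0.0])))  # ~[0.89, 0.11, 0.0]

Most of the relevance lands on input 0 and none on the inactive input 2 -- the same kind of per-pixel attribution that exposed the copyright-tag shortcut.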
There is, in general, a great deal of work on explaining the decisions of neural nets. Explainable AI is a thing, with much funding and research activity, and there are books and papers, etc., e.g. https://link.springer.com/book/10.1007/978-3-030-28954-6.
And all this is because, quite regardless of whether that tank story is real or not, figuring out what a neural network has actually learned is very, very difficult.
One might even say that it is completely, er, irrelevant, whether the tank story really happened or not, because it certainly captures the reality of working with neural networks very precisely.
Incidentally, (human) kids should never be allowed to hug sheep or goats like that. They can easily catch something nasty (enterotoxic E. coli, mostly). See e.g.:
As I say above, even if the tank story is apocryphal, it captures the tendency of neural nets (modern or ancient, doesn't matter) to overfit to irrelevant details (which, btw, is what Layerwise Relevance Propagation from my comment above is trying to determine).
This is probably the reason why this story has been repeated so many times (and with so many variations): because it rings true to anyone who has ever trained a neural net, or interacted with a neural net for any significant amount of time. Unfortunately, the article you cite chooses to suggest otherwise.
In any case, if the tank story is an urban legend it has its roots firmly in reality.
I understand that the scenario rings true, but Plyphon_ specifically asked for "a link to that tank/weather study", so the fact that it doesn't seem to exist is of primary importance.
From the tone of Plyphon_'s comment it seems to me pretty obvious that they had at least read the article you link above and the point of the comment was not to actually get a source for OP's comment, but to score some kind of internet burn points. Judging from the earlier greying-out of Plyphon_'s comment I'm not the only person who thought so. I don't think that kind of comment should be validated with an actual response. It is borderline snark and certainly does not contribute anything to the conversation.
I'm happy to accept my mistake if I have misunderstood Plyphon_'s comment.
Before I started peeking into how self-driving cars actually worked, I assumed that this was the standard way to build a self-driving vehicle. It still is, outside the marketing-driven parts of the industry.
> a 1500kg car to move 1-2 80kg people is very energy inefficient and resource intensive to manufacture.
Cars used to weigh /less/. The weight is the price of all of our drastically improved safety systems. Either way, you're stuck with a lot of that infrastructure if you want to move food and other goods between cities. All of that is the price of flexibility in our logistics.
The great wonder of our time is that we can do all of that _and_ start the long process of deprecating using fossil fuels. Now, if we can build the cars to last much longer than we currently do, we're really starting to win.
> and better/more public transportation.
I feel like there's a hidden trade off here as well. I think the "better" and "more available" you make public transportation, the less dense your ridership gets on certain lines, and the less likely you are to truly gain any efficiency across the system.
I think that your definition of "NP-Hard" is incorrect. It would be better to instead say "exponentially bounded in required training data size".
I say this because having a solid model of these terms is required when making "intuitive conjecture" about what is and isn't feasibly possible. Driving is NP-Hard? It would hint that humans have brains capable of computation more powerful than a Turing machine. Which... suffice it to say we probably don't. I'm not aware of any proven algorithms that humans can compute which computers cannot.
In images free of distortion and noise, computers were outperforming humans in image recognition 2 years ago! [0]. Since then, with advances in GANs (which specifically address the noise/distortion issue), I suspect we are close to achieving super-human image recognition. The last missing piece is "context" or prior knowledge. If you only see a wisp of hair sticking out from behind an object, you have prior knowledge that hair grows on humans, and that there's probably a person there. This last piece is being addressed by multi-modal networks [1].
If you have any doubt of the power of computer vision, my last hope is to link you to this paper from 2017, look at the "regions-hierarchical" results:
The mistake is believing this is a training data problem. You may have never seen an overturned truck before, but if you spot one on the road in front of you you'll easily figure out you need to stop. You don't even need to know exactly what it is, you just need to know A) you are going to do damage to it or vice versa if you hit it, and B) it's not getting out of your way.
-- Cars have a thermodynamic problem for society --> a 1500kg car to move 1-2 80kg people is very energy inefficient and resource intensive to manufacture.
I think there's a good gap in the market to be filled by e-bikes, but it needs regulatory help (bike lanes with barricades) to make people feel safe enough to commute.
My reasonably old and frail father has an e-bike and he zips around town on it. The hill assist is great (though I did enjoy the slightest bit of schadenfreude when his battery died before the return uphill to his house one day). But he lives in a town with a lot of bike paths, so he's not really on the road.
I think if you can break open that market you could make driving safer and make biking safer. It's just that right now driving is a wealthy person's gambit. Rich people drive Teslas and virtue-signal with them; it's not a new issue, but getting people onto bicycles would be great for everyone on the road.
Yeah except this crash was on the highway. No one is taking ebikes on the highway. It’s arguably the most controlled environment there is for cars outside of a test track.
Regular use of (e)bikes within a city can make people sell their cars, and use other means of transport for occasional longer rides. Since I replaced my car with a bike, I also spend much less time in a car on the highways.
Cars don’t really require all that much energy. A 15,000 mile per year car at 4 miles per kWh is about 10.3 kWh per day. You can get that much solar power for a fraction of the price of the average new car.
Roads, human lives, traffic congestion, etc. are much larger issues than energy. Over 20% of NYC is devoted to cars and they don’t even come close to covering the city’s transportation needs.
Even my Volvo XC60 PHEV (a 2.2 tonne SUV which also carries around a petrol motor and a tank of fuel) averages about 2.5-3 miles/kWh in pure electric mode. These numbers can only improve going forward.
Where does your estimate of 50 kWh per day come from? According to [1], Tesla's 2017 manufacturing and delivery released 186000 metric tons of CO2, most of that "indirect" (as in at their suppliers). (146000 for "Facilities" and 39000 for Sales, Service, and Delivery.) According to [2], they delivered 101312 vehicles in the same period. That is 1.8 metric tons of CO2 per delivered car. Your rate of conversion between kWh and metric ton of CO2 may vary, but the US average is about .99 lbs/kWh [3] or 3640 kWh per car. If we assume a car lasts 10 years, that is about 1 kWh/day. Also, these numbers are from back when they made almost entirely Model S and X, so it is probably an overestimate for today's Model 3-filled fleet.
I was assuming a car that cost $30K and lasted 15 years.
The parent seemed to be talking about cars in general, so I wasn't referring to Teslas in particular.
I can try this again using some different sources and methods.
- GDP of $1000 equates to about 1540 pounds CO2.[1]
- Each pound of CO2 implies about a kWh based on average power generation intensity.[2]
- Therefore, a $30K car implies 30 x 1540 or about 46,200 kWh.
- Divided by 15 years and by 365 days gives about 8.5 kWh per day.
So, I was probably off by a lot, and may still be off, but I feel like my point is intact, because 8.5 kWh still almost doubles the energy requirement originally stated. It's not negligible.
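Spelling out that arithmetic (same assumed inputs as above, nothing new):

    co2_lbs_per_1k_gdp = 1540   # [1] lbs CO2 per $1000 of GDP
    kwh_per_lb_co2 = 1.0        # [2] rough grid-average conversion
    car_price_k = 30
    years, days_per_year = 15, 365

    embodied_kwh = car_price_k * co2_lbs_per_1k_gdp * kwh_per_lb_co2  # 46,200 kWh
    print(embodied_kwh / (years * days_per_year))                     # ~8.4 kWh/day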
I see. For a regular gas car, the 4 miles per kWh is wildly optimistic -- 134 miles per gallon. The kWh per mile (just from tank to wheels) would be something like 4 times higher than that. (1 kWh per mile = 33 miles per gallon). So that 8.5 kWh per day from manufacturing would be on top of something like 41 kWh per day of driving, not the 10.3 from the post up there.
Sure, I get kind of lost trying to figure out what you are saying when you're addressing me about someone else's comments that neither of us may fully understand the thinking behind.
However you look at it, Tesla or not, 10 kWh leaves out a lot which was my original point.
Cars last far longer than 15 years on average. For the average car in the US to be almost 12 years old means they must last on average about 25 years as the rate of car production has been increasing.
Secondly, GDP to CO2 figures are a poor fit. Agriculture for example is directly 10% of US CO2 emissions but only 5.4% of GDP.
"While an average lifespan of vehicles is not given in the IHS report, a 2014 Automotive News article stated that, at that time, the peak lifespan or “scrapping age” of a vehicle was 13 to 17 years old"
Scrapping age relates to when the car is no longer worth repairing after an accident. In the US many cars also get exported when they're not worth keeping in used car lots. Still, a huge number of older cars are on the road and many of them are far older than 25 years.
Bear case for self-driving: in 1 million miles of driving, roughly the number of miles between major human accidents, do we really think these cars will not encounter even one AI-complete problem?
It is perfectly reasonable to expect these cars to do at least as well as humans at avoiding mind-bogglingly stupid scenarios that the least experienced driver could avoid while drunk, talking on the phone, and applying makeup. This is absolutely one of those scenarios.
We hold machines to much higher standards. We accept human imperfection and human error. People killing people on the road because they are sleepy or not paying attention is a risk we are willing to accept. The reason is that people have skin in the game themselves. Being in an accident means they risk injury, fines, jail time.
I think few are willing to accept autopilots that are only as bad as (or as good as) humans. They must be significantly better than human drivers, simply because we don't accept their errors like we do human errors.
This is completely irrational when looked at purely from a safety point of view (Who doesn't want safer roads?), but this obstacle is very real.
Worse: the fact that autonomous cars are better in many scenarios or aspects (they don't get sleepy, they are never drunk) doesn't mean we accept them being worse in any other aspect, even if the total safety is better.
That is: autonomous vehicles must not just be as good as human drivers, they must be significantly better (or at least safer) than human drivers. And not only that, they must be significantly better in every aspect of driving for them to gain acceptance.
It is not irrational because self driving cars are not interchangeable with human drivers 1-1. If autonomous cars are put on roads it is very likely net traffic increases. We put up with bad driving because we have to, this is absolutely not the case for machines. I, personally, don't want more average drivers on the roads.
The problem is that the flaws of humans tend to be random and stochastic, while the flaws of machines tend to be consistent and systematic. This is at least the fifth example of Tesla missing stationary objects on the road.
Humans might have a judgment failure every so often, but they might realize how unsafe they are and change their behavior, or drive less, or get their license taken away. A machine consistently crashing cars has no self correction protocol, and will consistently make the same mistake over and over.
If there was an autopilot that nailed every scenario in any weather condition except clearly overturned trucks in good weather, should that system be approved for general public use?
Considering the fact that accidents often attract bystanders and emergency personnel to help the incapacitated driver, the inability to avoid an existing wreck in clear weather is a glaring hazard to human life.
Moreover, trucks sometimes contain hazardous or flammable materials.
So the answer to your question is a resounding no, in my view.
You can bring in new safety features without having full self-driving. I think the safest option is to have human drivers, with the car always monitoring to avoid any serious accidents. People will still be attentive, since they are in full control, but you avoid just as many accidents as you would if the car was driving itself.
I agree. By establishing an "auto-pilot," you actually eliminate redundancy by putting the driver in a passive position. (No matter how many times you tell the driver to remain attentive)
The machine ought to be an error handler for specific failures of an active human driver (sudden braking of lead vehicle, lane straying, etc). This is the only way to get both machine and human to pay full attention. A person in a co-pilot role will struggle to react quickly enough to handle errors of the machine auto-pilot.
So, yeah, you're right - there aren't any alternative systems that manage to avoid this problem. Even humans suck at it. And if even humans can't handle construction areas safely, how can you possibly expect computers to be perfect?
There are. Use two or three alternative AI implementations which watch each other. Kind of like the famous "Predator" algo, but for driving. If one fails, the second will pick up. If one makes mistakes, the second will teach it.
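A minimal sketch of the cross-checking part (entirely illustrative names and thresholds; the "teach each other" part is a much harder problem):

    # Several independent systems each request a deceleration (0 = keep going,
    # larger = brake harder); on disagreement, the most conservative one wins.
    def arbitrate(decel_requests_ms2, disagreement_tol=1.0):
        strongest, weakest = max(decel_requests_ms2), min(decel_requests_ms2)
        if strongest - weakest > disagreement_tol:
            return strongest                 # they disagree: trust the cautious one
        return sum(decel_requests_ms2) / len(decel_requests_ms2)

    print(arbitrate([0.0, 0.2, 4.5]))   # one system sees trouble -> brake at 4.5 m/s^2
    print(arbitrate([0.3, 0.5, 0.4]))   # broad agreement -> average, keep rolling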
Nope, it shouldn't. A system that happily plows into a massive static object that blocks 3 lanes of traffic just because the system didn't recognize it as a danger should never be allowed on the road. In fact Tesla should remotely disable autopilot on every single car out there until this is investigated, tested and patched. And even then I have my reservations about the naming, it's not an autopilot.
Why not? In that hypothetical, ramming every overturned truck while avoiding all other accidents would be a massive improvement in safety. What about the truck scenario makes it so bad as to outweigh avoiding the other accidents?
The other question is whether fully autonomous vehicles out-perform humans in sufficiently many other scenarios that we will consider the comparatively rare scenario of an overturned truck not that big of a deal compared to the massive numbers killed due to raw human carelessness. Furthermore, autonomous vehicles will likely get better and better at driving collectively, something which I would bet heavily against for human drivers.
1. Technology should not make mind-bogglingly stupid mistakes like this, regardless of how rare they are in the grand scheme of things.
2. Because these systems use neural nets, they're black boxes and it's impossible to conduct a proper engineering analysis on them to find out when they will fail. Fuzzing the system with live humans is not engineering; it's callous disregard for human life and anybody who practices it should lose the right to call themselves an engineer.
In order for there to be a major accident, there not only has to be a mistake, it has to be a mistake with a major consequence, and it has to be that nobody else there avoided it either.
In general a car that doesn't see a pedestrian isn't a major accident unless the pedestrian also doesn't see the car. If not for that there would be a whole lot more fatalities with human drivers as well. A self-driving car obviously shouldn't do that, but neither should a human, and they do. Which makes it less hard for the computer to do it at least as well.
You can also expect the computers to improve over time by learning from each others' mistakes, which human drivers generally don't.
While it is the case that it usually takes two people to cause an accident, there are plenty of scenarios where there is only one person who is capable of taking the action necessary to avoid it. For example, an elderly person in the crosswalk is not going to be capable of getting out of the way of a speeding car.
It's already the case that pedestrian/car collisions are usually written off as the pedestrian's fault until proven otherwise--the Uber self-driving car that killed a pedestrian is a great example of that in action.
And the same is true of human drivers, is my point. You don't just need the mistake, you need all the other things to go wrong too. The pedestrian entered the crosswalk even though a car was coming, instead of waiting to make sure the car stops first (as pedestrians often do). The car failed to identify something. The thing it failed to identify was a pedestrian and not a mailbox or a trashcan. The pedestrian was elderly and couldn't jump out of the way fast enough etc.
It all has to go wrong at once. Mistakes are common but most mistakes aren't fatal.
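To put entirely made-up numbers on that chain, just to show the shape of it:

    # Illustrative probabilities only -- the point is the multiplication, not the values.
    p_perception_miss   = 1e-3   # the driver/system fails to register something
    p_actually_a_person = 0.05   # the missed thing is a person, not a mailbox
    p_cannot_evade      = 0.2    # that person can't get out of the way in time
    p_severe_outcome    = 0.3    # the resulting impact is severe

    print(p_perception_miss * p_actually_a_person * p_cannot_evade * p_severe_outcome)
    # ~3e-6 per such encounter: common mistakes, rare catastrophes.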
> Mistakes are common but most mistakes aren't fatal.
And by choosing to drive a car, you are dramatically raising the odds that any mistake you make is fatal (not necessarily to you).
While my personal experience on the matter is thankfully limited, I strongly suspect that the vast majority of vehicular-pedestrian accidents are mostly (or entirely) the fault of the driver, where the pedestrian started an action where it was safe to do so and the driver rapidly caused it to become unsafe before the pedestrian could react.
Is 1 million miles really the approximate distance between major accidents? Because in a lifetime, many people would approach that, and yet most people don't die in auto accidents.
People don't have to be forced to get rid of cars, they just have to be incentivized to get rid of cars. That can be done by increasing the convenience of other options compared to cars or by decreasing the price of other options compared to cars.
It would go a long way to even make it possible to get rid of cars. There are huge swathes of America which aren't within a mile of mass transit. And you can't fix that with more mass transit, because the density there isn't high enough to justify it -- a bus with one rider is worse than a car.
What you need is to get rid of zoning density restrictions. That's the only way for more people to live close enough to things for walking and mass transit to be viable.
> There are huge swathes of America which aren't within a mile of mass transit.
> What you need is to get rid of zoning density restrictions.
Most of those places were built for and are still dominated by the votes of people who don't want more density and transit options, for a variety of reasons, some reasonable, and others based in the nation's sordid racial history.
I don't agree with those people, but until the settled, voting population inhabiting those vast swathes gets significantly younger and more pro-transit and density, I don't see what you call for happening soon.
The problem with the US is that people are scared of buses. A bunch of people from SF told me the buses were not good, but when I visited I took several and they were consistently good, and were never empty, just the frequency could be a lot better. They were a lot noisier than the buses in Europe, though.
My understanding as an average American (in this context) is that in European cities there are nice, fairly frequent, commuter trains and rail; and often also busses.
In the US, mostly the only 'good' bus service is geared for 9-5 workers in big offices in big cities. Anything outside of that tends to run up against the economic divide: people who take the bus _have_ to take the bus, while taking the car is a luxury in time, freedom, and isolation from the possibly literally unwashed masses (who might also literally be taking the bus in the summer for a place to have some AC and safety with a seat).
The issue is very much related to classism and a lack of social safety nets and inclusion.
You say classism like it was extrinsic. But the classes (in smaller cities, not like NYC) are people who can afford a car and people who can't, or have some other reason they can't drive.
In the sort of place I'm used to, an urban area of about a million people, taking the bus is not logical if you have a car anyway. You don't get an economic benefit if you own and insure a car at all. And you pay a big penalty in time and flexibility.
That's what creates the class divide, not something arbitrary in people's heads. The people who ride the bus because it's economically rational are inherently the people who have no choice.
Talking about general social problems as the source of the issue is misguided, in my opinion.
You're conflating being able to afford a car with having a car. Ideally more people would choose not to have cars. Or, easier, a household could only have one car instead of multiple.
And a good transit system can beat many car trips, even for people that do have cars. Especially if parking spaces stop being subsidized.
Another factor to consider is the age of the cities involved. Most European cities predate cars, while many American cities were built up after or during the rise of the automobile.
I suspect that only applies to the cores of European cities - most have got much larger since the car was invented and the newer areas were definitely designed with cars in mind.
Massive difference between “I visited and took a couple” and “I rely on them to get to work every day”. I can spin up a few VMs on any cloud provider in the world and they’ll all work fine. That doesn’t mean every provider will be equally suitable for my production load.
I lived in SF for a period. I was lucky enough to be able to walk to work, but my roommate had to take two busses both ways. At least once/week one of those would go offline and they’d have to walk.
You are greatly minimizing the problems that most Americans face. Most American cities are not dense enough to support good public transportation options (the exception being NYC); without enough density (and with little public funding), public transportation is short on cash and often very inconvenient. If you miss your bus, you have to wait an absurd amount of time for the next available one. Also, good luck hauling all the groceries and stuff you need into your suburban home using only buses... simply not possible.
So there are multiple factors at play. Short of a massive change in the way American cities are designed or some innovation that greatly reduces the cost of building public transportation, cars will continue to be something Americans absolutely need for surviving.
> when I visited I took several and they were consistently good, and were never empty, just the frequency could be a lot better. They were a lot noisier than the buses in Europe, though.
If "noisy and infrequent" doesn't mean "not good" to you, what does?
Taking a bus would add anywhere from 15-45 minutes to my ~5mi commute, with the uncertainty mainly due to the infrequency. That's not good.
> If "noisy and infrequent" doesn't mean "not good" to you, what does?
It's inconvenient / not ideal. I took a loud, full, public transport bus to school in my home town every day. It was not always on time. It got me to the destination every day though.
It was not great, but it was good enough. Sure, a car would be better, but if everyone did that everyone would be even later and busses would be even more infrequent.
My main issue with bay area public transportation is the public. I've had a person threaten to stab me because I was white. He got away. I've had someone smoke meth and then scream in my face, wanting to fight me because I was being racist. He got away. I've had someone pull out a taser and start playing with it. He was arrested, fortunately. One of my acquaintances has been robbed at gunpoint. Not sure if they ever caught the guy.
In addition to the risk of violence, there are also the annoyances. Smelly homeless people. People smoking. People blasting music. (The buskers are the worst.) In a car I have none of those issues. I don't even have to listen to someone loudly talking on their phone. It's quiet. It's clean. It leaves when I'm ready. It's far less stressful.
Nowadays there are only a couple of commuter bus lines that I'll take. Though with the lockdowns and the curfews, it's been months since I've used them.
More specifically, the problem is that public transportation is public and therefore can't or won't enforce the standards for ridership that any sane private company would--the violent, the antisocial, the mentally ill, the people who believe personal hygiene is optional. When my girlfriend of a few years ago was forced to take the bus to work regularly, she was harassed on a daily basis. Public availability doesn't have to mean no accountability, but somehow it devolves to that. And as long as that holds, folks who have the means will pay to avoid it.
There's something wrong about this. I grew up in Scotland where we had plenty of smelly, drunk, drugged and violent people, but they'd be dragged off the train by the police.
Police have been ordered by the city government to avoid enforcing certain laws. It basically means that homeless people can get away with theft and any nuisance. Also, criminals are quickly released and re-offend.
Most of the bay area is like this and criminals know it. It's hard to describe just how bad property crime is here. Everyone I know has given up reporting when their bike is stolen or when their car has its windows smashed. They know the police won't do anything.
Or this guy, in Seattle, charged three times for violent assaults and released because he was deemed not competent to stand trial before attempting to throw a woman off a 40-foot overpass into rush hour traffic:
https://www.seattlepi.com/local/crime/article/Charge-Man-att...
> the problem is that public transportation is public and therefore can't or won't enforce the standards for ridership that any sane private company would
It is weird that most of the world seems to manage that just fine. At least in Germany they also have no issue with calling the police on misbehaving passengers.
Buses are a great place to catch airborne disease. As a whole, the deal with public transportation is a lot less rosy than some of the proponents would like one to believe.
The problem with buses isn't that you can't use the ones that exist, it's that the ones you need don't always exist. The bus goes where it goes but not everywhere.
Then you have to go places the bus doesn't go often enough that you need a car, and once you've already paid for a car and taxes and insurance, it costs more to take the bus than the incremental cost of driving everywhere.
"once you've already paid for a car and taxes and insurance, it costs more to take the bus than the incremental cost of driving everywhere"
I agree and mentioned this in my other comment, but I'd also add that in my experience there are arbitrary* gaps in what routes are available.
Nearly all the buses in the city where I live go west to east, from a certain midpoint to down town. But I live about 10 minutes south of a point on the western side and there are really no routes that go north-south here. And I work for a major employer.
*Seemingly arbitrary to me, but of course they might rationally determine there's not enough riders.
I used to live 5 miles from work with a straight shot down a bus corridor. It was cheaper to drive it. Occasionally I'd set out for work by bike at the same time the bus arrived at my stop and still beat them there.
A typical city bus weighs 8-10x more than a car and causes a multiple of that in pavement wear. There are many cases where a car would be the more efficient option. The only question is what the car is doing when you aren’t in it.
So this is a great point - but - we're not bound by AI ... can we not start to narrow the issues with regular tech? For example, in this situation, something that more predictably tells the driver that there is 'literally something in the middle of the road', or at least possibly triggers an alarm?
I wonder if someone can comment on the relevant advances on giving AI the proper human-designed boundaries that may help?
> I wonder if, in fact, we cannot construct an NN controller with the level of unusual object detection and edge-case scenario recognition required to match an average human driver's capability to recognize and react appropriately to unusual situations
Well, I mean, according to some philosophers, our existence is proof that it can at least be done in the physical world...
> to make it one-log better you need 100x the data and 100x the NN dimensionality.
I'm not sure why you think so. At some point the dimensions will get diminishing returns. Maybe you need only 100x the data at this point (or maybe not). But assuming it's both without knowing the current configuration / limits is just a random guess.
I don't think it's the wrong problem. If we had perfect self-driving today, it would still make sense to have all the things you mention (more remote work, etc.), but you would also have safe long-distance cargo hauling, safer road trips, cheaper home delivery, etc.
All previous mentions of autopilot were for exactly the same kind of technology that Tesla uses: assist the pilot but require him/her to keep monitoring its actions and intervene when necessary. https://en.wikipedia.org/wiki/Autopilot
Ahh yes, people are definitely more familiar with aircraft functionality than they are with Hollywood buttons that do magic; Tesla definitely did not imply that the car could drive itself.
Surely a Tesla has a front radar to measure distance though, no? If it's rapidly approaching to an object that's not moving, who cares what it looks like?
Can it actually distinguish reliably between an overturned truck and a manhole cover, an overpass, and an overhead sign? These are far more common occurrences.
No, but I mean all cars for some time now have to include anti-collision radar as standard, it doesn't use cameras to recognize anything, it just goes "you're approaching a stationary object without slowing down - SLAM BRAKES". I'm guessing what's happening here is that Tesla is aware that it's approaching a stationary object, but the camera system overrides it because it doesn't recognize it as a danger. So in a way, a much simpler system would have prevented this accident.
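The shape of that "dumb" radar logic is roughly a time-to-collision threshold, something like this sketch (not any particular manufacturer's implementation):

    def should_emergency_brake(range_m, closing_speed_ms, throttle_pressed,
                               ttc_brake_s=1.5):
        """Brake if time-to-collision with the tracked return drops below a
        threshold and the driver isn't overriding with the accelerator."""
        if closing_speed_ms <= 0:      # not closing on the object
            return False
        ttc = range_m / closing_speed_ms
        return ttc < ttc_brake_s and not throttle_pressed

    print(should_emergency_brake(30, 25, throttle_pressed=False))  # TTC 1.2 s -> brake
    print(should_emergency_brake(80, 25, throttle_pressed=False))  # TTC 3.2 s -> not yet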
You can do that, but what level of false positives are you willing to accept?
How often are you willing to have your car perform an emergency stop for an overhead sign? How safe is it to have your car brake aggressively in the middle of the highway for nothing?
I don't know why you keep mentioning the overhead signs? The anti-collision radars that are mandated in every new car are mounted on the bonnet level, they won't react to an overhead sign. And yeah, emergency braking in the middle of the highway is not safe at all, but seeing as literally all new cars have to have that system and we're not seeing people randomly braking in the middle of the motorway for no reason I can only guess it's fine. Besides, the system doesn't start braking if you're actively applying throttle, it will start beeping at you but it doesn't start braking until you lift your foot off the gas.
> but seeing as literally all new cars have to have that system and we're not seeing people randomly braking in the middle of the motorway for no reason I can only guess it's fine.
The price we pay for having relatively few false positives is a high false negative rate. For example, a Tesla won't stop for an overturned truck or the pedestrian standing next to it.
>No, but I mean all cars for some time now have to include anti-collision radar as standard,
The way these work is by tracking other moving objects around you rather than stationary objects you are approaching.
It'll track another car to see if it emergency brakes or changes lane too close to you, but it won't track a parked car on the road that you side-swipe or drive head-on into, etc.
The Subaru’s camera system definitely tracks stationary objects, and it’s obnoxiously sensitive. The vehicle throws a beeping tantrum when approaching steep hills or roundabouts at any speed above a crawl. It will auto-brake for a birthday balloon in the road. Pretty sure this overturned truck would set it off.
If you watch the video, it looks like it did slam on the brakes right before collision. Maybe that was the human driver intervening, or maybe it was last-minute emergency system kicking in.
Somewhere along the line the marketing of Autopilot fell apart. As an airline pilot when I hear the word autopilot I think of a system that generally drives the aircraft along a programmed routing and vertical profile. It is most definitely not a set it and forget it device. Training repeatedly stresses that someone must be monitoring the actions of the autopilot at all times and it’s taken very seriously as it can rapidly place you in a very undesirable state.
Somewhere along the way Tesla Autopilot became marketed as a fully self-driving system freeing us to tune out. I think the criticism about the capabilities of the system could be toned down some but we most definitely should be critical of what capabilities are advertised and what instruction is given for the person operating the system. I think it’s very likely that an Autopilot like system could make driving significantly safer but only if we figure out the human interaction aspects and fully and truthfully communicate the capabilities of that system.
> Somewhere along the way Tesla Autopilot became marketed as a fully self-driving system freeing us to tune out.
Maybe it was somewhere around the point that they decided to call the top level autopilot option "Full Self-Driving"? They also had their demo day with a "full self driving" demo and basically claimed they could do that and that Autopilot would do it "soon". Elon's said repeatedly over the years that "by the end of the year" they'd have full eyes-off self driving. I can see how some average consumers could misconstrue this as Autopilot being more-or-less eyes off.
That is an aspect that is admittedly different from a car. It really depends on the phase of flight. In cruise there are rarely instances that an immediate reaction is needed. In the terminal environment though where you’re closer to the ground and other traffic it can definitely be split second.
Autopilots will incorrectly intercept approach guidance and fly through the course or altitude when you’re low to the ground or blow through the lateral course into a parallel runway’s course. That’s the most frequent failure I’ve encountered. I’ve also seen it just completely lose its mind on autoland at low altitude requiring intervention.
Point is split second decisions are needed during critical phases of flight. Driving a car on autopilot is likely going to require that level of vigilance during the entire drive unless we get highway systems that present hyper controlled environments where self-driving cars have higher reliability. This can kind of be likened to why cruise is less critical in an aircraft. You’re at high altitude, traffic is exclusively controlled by ATC (unlike low altitude where there is traffic not under ATC control), and the aircraft are all required to operate on autopilot and have collision avoidance systems where the aircraft communicate with each other (TCAS).
Aviation has dealt with the integration of these systems for the last 5 decades and the human/machine interface. There have been many failures and successes and I think it would be a great place to start for figuring out how this works with cars.
Well, but it sounds like the way we started is to take all of those lessons and chuck them out the window! I don't fly airliners, but I fly a trusty Cirrus with plenty of "advanced" avionics. The fact is that during pilot training so much of your focus is on the failure points of the automation and how to recover. How we're building this into cars makes that almost an impossibility. The domain certainly makes it challenging, and it may prove to ultimately be impossible with current tech, but the analogy to me is like we're selling "autopilot" in cars the same way we'd sell CAT III landing equipment in an airliner if what were powering that CAT III automation was your DIY Stratus.
It's not "somewhere" it's from Tesla and all other self-driving car companies hyping their eventual product to get stock boosts or get bought. Nobody that I've ever met in industry or tech seriously thought that self-driving was possible en masse. There's just too many unknowns.
They literally call it "full self driving". I had a Model 3 with FSD last year, and it's really pretty pointless beyond a party trick. The actually useful stuff (lane-keeping and dynamic cruise control) is available in the base autopilot option, and is not much different from features in other high end cars. The FSD features are so unreliable as to be totally useless. Lane-changing was often scary, with it not spotting vehicles in the blind spot more than once. It somehow managed to miss exits on navigate on autopilot, and in any case the feature is available in so few places it's barely worth using unless you're only driving on highways. They also don't make it clear that advanced summon is basically useless outside the US, because it only works when you're within 10 meters of the car.
When I had to return that car when I changed jobs, I immediately ordered a new Model 3, because it's a great car and I really like driving it, but I didn't get the FSD option because it's dangerous, useless, and a waste of money. I've not missed it once. Now the only issue is that whenever I get in the car I'm reminded of Elon Musk's antics, which makes me cringe a little, but that's another story.
> They also don't make it clear that advanced summon is basically useless outside the US, because it only works when you're within 10 meters of the car.
Hmm, is >10m a geoblocked feature or is <10m somehow more useful in US than elsewhere?
>This is a misconception I hear from folks who never planned to own one but not from owners.
And it's a misconception that Tesla happily took advantage of in their marketing for years.
Their autopilot webpage used to be titled "Full Self-Driving Hardware on All Cars". The marketing copy said stuff like "all Tesla vehicles... have the hardware needed for full self-driving capability at a safety level substantially greater than that of a human driver."
They neglected to mention that the software wasn't at that level yet, which I think is incredibly misleading since the hardware by itself is useless (and without corresponding software, it remains to be seen if the hardware they are using actually is capable of full self-driving)
I’ve test driven one and seriously considered buying one. It’s still on my list but other financial goals are taking priority. I had some opportunity to play with the autopilot and I think it does quite well but I stayed engaged with driving the whole time. Mashing the pedal was just too much fun to leave autopilot on for long.
These repeated accidents of hitting things that should be avoided though seem to indicate that the drivers are out of the loop enough that they don’t intervene when they clearly should. There is definitely a disconnect somewhere either in the attentiveness of the driver or the driver’s understanding of when to intervene or both.
-> These few crashes are direct examples of owners with the misconception.
I fixed that for you.
If the news headlined every crash a Toyota was involved in, it would seem overwhelming. But when you compare to the total number of cars sold in the same day of each wreck, you realize this is all theatrical outrage.
This is not to excuse Tesla for the misleading marketing they put out, but MOST (arguably 99% and above) of us owners know exactly the limitations of AP in our cars.
As a Tesla owner I personally find it abundantly clear the system is extremely limited. First, it reminds you every time you turn it on. Second, it reminds you during the drive based on perceived torque on the wheel. But more to the point, the guidance display helps you understand the limits of its perception just driving it day-to-day.
AutoPilot changes how you drive, but does not change the fact that you are indeed the driver. I observe other drivers every day who are distracted with other tasks, impaired, or just driving aggressively, and many thousands die on the road each year because of this.
I fully expect that AutoPilot software driving in a mode where there is not supposed to be a responsible driver with hands on the wheel and looking out the window would actually operate significantly differently from the current system. I would not be surprised if the false-negative threshold is configured higher in the current software based on its operational requirements.
This driver was effectively asleep at the wheel. I’m very glad they were unharmed. Because the current software is expressly designed to require an attentive driver, it’s hard to say even that the software failed.
Teslas on AutoPilot drive safely, nonaggressively, and highly predictably. They are by no means a fully self-driving system freeing the driver to tune out. There are a hundred ways you can crash a Tesla on AutoPilot in a hot minute if you want to. This is clearly apparent upon operating a Tesla on AP for the first time, even if you never open the user manual.
Humans don’t always make smart choices, however, whether their car has assistive technology or not. The data does show the average Tesla driver with AP is about 10x less likely to have an accident, and less likely to die or be injured in an accident if they do have one, than the average overall driver. So if a driver is going to make a terrible choice to be distracted or inattentive while driving, I’d rather they did it in a Tesla with AP enabled.
> The data does show the average Tesla driver with AP is about 10x less likely to have an accident, and less likely to die or be injured in an accident if they do have one, than the average overall driver. So if a driver is going to make a terrible choice to be distracted or inattentive while driving, I’d rather they did it in a Tesla with AP enabled
Do you have a source for those statistics? The last time I saw Autopilot safety statistics, they were all very misleading - e.g. comparing "autopilot on" miles [mostly highway] to all miles driven in the US, or comparing pre-Autopilot Teslas to post-Autopilot Teslas (the pre-Autopilot Teslas didn't even have basic AEB, which is now standard on even entry-level cars), stuff like that.
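As a toy illustration of why that kind of comparison misleads (every number below is invented purely for the arithmetic, not real crash data): if highway miles are intrinsically much safer per mile than city miles, a feature used almost exclusively on highways will look safer than the national average even if it contributes nothing.

    # Hypothetical crash rates, purely illustrative -- not Tesla or NHTSA figures.
    highway_rate = 0.5   # crashes per million miles on highways (assumed)
    city_rate    = 3.0   # crashes per million miles on city streets (assumed)

    # "Autopilot-on" miles: assume 95% of them are highway driving.
    ap_rate  = 0.95 * highway_rate + 0.05 * city_rate
    # "All US miles": assume an even highway/city split.
    all_rate = 0.50 * highway_rate + 0.50 * city_rate

    print(f"AP-on miles: {ap_rate:.2f} crashes per million miles")
    print(f"All miles:   {all_rate:.2f} crashes per million miles")
    print(f"Apparent safety factor: {all_rate / ap_rate:.1f}x, with zero benefit from the feature")

With those made-up numbers you get an apparent ~3x "safety advantage" from nothing but where the feature is used.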
This is not a self-driving car and it is not cruise-control (that demands you pay attention to the lane ahead of you). This is a savant 8 year old driving your car. Seems to behave fine in common cases but if you relax you might well die if an uncommon case comes up that this child has no basis to handle.
I personally would rather have an actual 8-year-old take the wheel, since then I would pay the correct amount of attention to the road, out of well-founded fear of the limitations of the child's abilities.
"Self-driving" and "autonomous" are dangerously misleading, "autopilot" has exactly the right connotation: something which can help to achieve and maintain some kind of equilibrium (heading, altitude, lane placement, following distance) but is not smart enough to be left alone.
It's exactly the right connotation for people who know the connotation. I suspect a large fraction of the population thinks that "autopilot" means "no human pilot needed."
It's unreasonable to assume that.
Autopilot is a term most people would be familiar with from where? Commercial aviation, right?
I would argue most people know it's not a system that achieves that goal. Just ask yourself: would airline passengers feel comfortable if a cabin announcement said that both pilots are going to take a nap now for the next 10 hours?
I do think Tesla should do a better job making sure it's understood that there are limitations. I don't think they need to change the name of the feature because some people claim to have a certain expectation of it which at closer inspection is not justified.
Most people don't learn the term from commercial aviation, they learn the term from movies where someone turns on the "autopilot" then goes to handle whatever is happening at the back of the plane.
I assumed autopilots in planes could actually fly by themselves and the pilots were really only there for take-off, landing and turbulence.
But neither planes nor cars with autopilot + autoland/park functionality are fully autonomous.
The only place where we have this today is on some on rails systems (trains like in Paris, buses on fixed rails like in Tokyo's Yurikamome or some amusement parks, etc.).
That's in the context of an action film though, if people based their ideas on those we would live in a strange world indeed.
Hmmm, actually come to think of it...
Nevertheless it's true. Autopilot as seen in countless movies now means "set it and forget it" to more people than a rudimentary aeronautical cruise control.
"Why do they think that?
It's unreasonable to assume that. Autopilot is a term most people would be familiar with from where? Commercial aviation, right?"
When you think of 'most people' - think 'Grade 9 education'.
Swaths of Americans graduate HS with difficulty reading.
I used to market to retail mobile stores, like Sprint, and store managers had difficulty understanding the concept of 'percentage'.
The lowest common denominator is low, and when it comes to safety ... it's the fool among us (or within us because we all have faults) that is the target.
I think it should just be called 'cruise control' and that's that. Tesla can market it as being 'better' but that's about it.
I cringe every time I hear the term autopilot used in that context.
So the arguments against my comment (and downvotes) so far have been about how people are stupid or how they base their reality on action movie knowledge.
I don't think people are that stupid, really. And I think they do understand that Tesla's autopilot is not a chauffeur.
As I said, I do think Tesla can do a better job and hey they should probably consider a new name just to get this over with, but I simply cannot agree with everyone in all these threads going on and on about the name being at fault.
People would find some other thing to blame if the name were driver assistant or cruise control, as long as Tesla is advertising or wants to advertise the more modern features of it. Say Tesla calls it advanced cruise control that does a, b, and c that others don't provide; people will still blame a, b, or c when those, combined with driver neglect, cause an accident.
Note, I'm also not saying Tesla is great here and not at fault. If their systems don't do a, b, and c properly or steer you into a concrete wall then it's their fault. But IMO it's just not because it's called autopilot.
What I think instead is that the real issue is that there is a system that lets people zone out in 90 per cent of traffic without nagging the driver about paying attention. It really doesn't matter what the name is as long as there are measures in place to keep the driver focused.
I think the whole issue clashes with itself, though: it's difficult to offer these auto-steering features for comfort but then demand reduced comfort. So the bottom line for me is that none of those systems are ready, and the naming is simply a bikeshedding topic because it is easy to attack and also brings out this "people are stupid" superiority feeling in people.
Autopilot means automatic pilot to every layperson on the street. It doesn't mean maintain heading at this altitude under these specific circumstances. It means the plane flies itself.
It's like trying to argue that a smartphone is a computer. Technically it is but nobody other than those from a technology background would agree.
This is only because Tesla changed the meaning with its "fully self-driving but the government won't let us" marketing lies.
In the common parlance, saying "someone is running on autopilot" means they are going through the usual motions but not really paying attention to what's happening or responsive to surprises. Pretty close to the aviation meaning and to what the Tesla car does.
I believe the point is nobody expects planes, where a layperson would know the term autopilot from, to be left unattended by the pilots. So why should this be assumed on the road?
Because human nature. No matter what they call it, the better it works the more the human driver will relax and let the machine take over. No matter how many times you warn them that that's inappropriate or unsafe.
Correct. Which correctly conveys the accurate impression that it is a technology which makes the driving experience somewhat easier in a limited number of ways.
Which is why Cadillac calling their autopilot "Super Cruise" accurately conveyed what it does and doesn't do, and doesn't have an endless news cycle about how the "automatic pilot" is neither.
If you go back through the Wayback Machine’s archive of Tesla’s “autopilot” page it’s sickening how misleading their advertising of that feature used to be. They even worded it perfectly so they could weasel their way out of it when any average person would have a completely different interpretation.
Also the Tesla network. While they were happy to make it seem like it would be driverless and was covered so in the media. On the website it never cared to clarify that. It could have been just a rent your car to a stranger app.
And then saying that "all our cars have full self driving hardware". I know people who thought that they had it basically done, and only a few regulatory approvals were left.
Do we have any evidence that shows 8 year olds can't drive if given enough practice? I mean, as long as they can see and reach the pedals, I'd say driving is simple enough that they could do it if there aren't any distractions.
I’ve had an interest in cars for longer than I can remember. Around when I was 8, my father would kindly let me drive the family car around a deserted (private property) car park as a treat. Basic car control was pretty easy to pick up. Avoiding a stationary toppled-over truck on a clear motorway, like the one the Tesla failed to avoid, would also have been well within a child driver's ability.
I think the hardest part of driving is learning the psychology of other road users and understanding both the positive and negative consequences of your actions. This requires experience and practice.
There was an incident just a few weeks ago where a 5-year-old in Utah took his parents' SUV for a joyride and managed to make it to the interstate before getting pulled over.
Based on the police footage, he wasn't doing a very good job of maintaining his lane, but he apparently managed to avoid hitting any stationary objects, so... go figure?
Driving lessons pretty much assume an adult can already control a car; they're mostly about how to safely and legally drive on shared public roads, and that's where I'd expect an 8 year old to be significantly less capable. I have no evidence to support that, except this: if 8 year olds were as capable of dealing with situations as adults are, we'd call them adults.
Special interest groups spent the 1930s-50s hand-wringing over young kids driving (which was a pretty rare thing anyway) and got a bunch of laws written, so clearly some kids were driving just fine at some point. It wasn't until decades later, when seat belts came along and alcohol got taken seriously, that the roads got appreciably safer, so it's not like kid drivers were dangerous enough to have an effect, rare as they were.
I'd be interested in whether dogs could be trained. I know they have literally been trained to drive cars, but not, as far as I'm aware, to such a level that you'd let them on the highway. But my impression is that they are better at focusing on the task than humans, when well trained, and maybe would outperform them, given some navigational guidance. Seems like something for a steampunk novel.
> I'd be interested in whether dogs could be trained
Trained? Don't think so. I mean, there are dogs riding skateboards, so I believe that, given the proper ergonomic controls, they could learn how to drive a vehicle.
Driving a vehicle safely (as in, not stopping in the middle of a highway to investigate an interesting object) would be a greater challenge.
But as far as navigating a space? Sure, they are amazing at that and far better than anything we can come up with in the near future. Heck, if you hooked up an insect brain's inputs and outputs, you would have a system that's better than what we currently use.
Forget 8 year olds, my 3.5 year old can drive a tractor through the woods no problem. We obviously only let him steer and have someone else on the throttle, but he can keep it centered, turn around, avoid obstacles, etc just fine. Kids can handle bipedal locomotion just fine, it's not like they don't have the ability to make rapid decisions to avoid obstacles or correct their position. Steering is no different, it just takes some practice and experience to adapt to it. It's the higher functions that children aren't suited to do.
The car did eventually detect the obstacle and brake prior to impact, otherwise it would have been a lot worse.
What seems weird to me from the video is that there was a person beside the road (perhaps the driver of the overturned truck) waving at cars to stop, and the Tesla didn't detect the pedestrian and slow down, even though he/she appeared to be standing on the edge of the car's lane. It's hard to tell for sure, but it looks like the pedestrian would have been hit if he/she hadn't stepped out of the way.
> The car did eventually detect the obstacle and brake prior to impact
The driver is claiming he braked, not the car: "Huang says he slammed on the brakes once he noticed the truck, however, it was too late to stop the nearly two-ton sedan traveling at reportedly 68 miles per hour."
> The driver is claiming he braked, not the car: "Huang says he slammed on the brakes once he noticed the truck, however, it was too late to stop the nearly two-ton sedan traveling at reportedly 68 miles per hour."
The pedestrian is visible in the footage, and he had the reaction time to step aside and avoid being hit by the Model 3. How was the driver not aware of him, only deciding to brake after passing him (as seen by the locked brakes)?
This seems entirely negligent, a combination of both user error and Tesla's failure in AP obstacle detection. Not sure how it plays out, but I hope it doesn't detract from AP being refined. I'm sure they will have the onboard footage to determine what the driver was focused on before the accident, and it will be settled based on that.
That's kind of the biggest issue I have with Autopilot - it makes drivers feel far more comfortable relying on it than they should. It's gotten to the extent that in Tesla-dense cities there's a good chance you'll pass a Tesla and see the driver not even close to paying attention to the road. Autopilot is in the dangerous Level 3 domain of autonomous driving where it's good enough for drivers to think it's Level 4 while it happily plows into parked objects once in a while.
Personally, on the occasions when I drive a Tesla, I don't even think about turning it on.
> Personally, on the occasions when I drive a Tesla, I don't even think about turning it on.
Same. I only used it when I wanted to show it to other people. I enjoy the acceleration and the tight steering, especially on the performance Model 3, way too much not to play with it. I did 790 miles in one night, driving from OC up to Santa Barbara, down to San Diego, and back again, because I was having so much fun. I only stopped for charges along the way and a bite to eat in K-town in LA.
> That's kind of the biggest issue I have with Autopilot - it makes drivers feel far more comfortable relying on it than they should.
I would argue that's an external factor in personal decision making we can't possibly undo, lest we remove free will entirely from humans. I've seen this in other things as well: when I was in university I often drove without insurance to make ends meet, and I took so many precautions before I set off. My girlfriend, by contrast, had insurance and her father owned a dealership, so if she got in an accident it was no big deal; since her dad would replace her car if needed, she'd often text me while driving (this was during the dumb-phone era, mind you!), and since she worked as a physical therapist she was often on the phone while driving to and from worksites.
I wouldn't even reach for food or water while driving, let alone a phone, because I was so paranoid about not seeing everything on the road and having a plan-B reaction. I was still driving my 75% track car on the street (essentially my entire savings account and life's savings rolled into one) during my junior and senior years of university, and I once got pulled over twice in a day before I had enough money to buy a beater to daily. Luckily by then I had reduced my attendance requirements by half as gas shot up to $5 in SoCal and I got 17 mpg highway.
I used to commute 65 miles each way, and then left my car in the student parking lot to take the tram to work.
I live in Boulder now and there are tons of them around here. I never take the time to look anymore since the Model 3 ramp-up. I do know that when I ride on two wheels (push bike or motorcycle), the fact that someone is driving a Tesla is no indication they're any more competent than an ICE driver.
I've just arrived at the conclusion that humans behind the wheel are dangerous in general, and anything that removes them from the equation is going to be messy but worth it in the long run.
> however, it was too late to stop the nearly two-ton sedan traveling at reportedly 68 miles per hour.
It's interesting how the brakes get pumped hard near the pedestrian and released as he passes him. Willing to bet somebody looks at the data and finds the driver was overriding. I don't own a Tesla, but I do own two cars with forward collision detection and braking. It's pretty scary when your car makes a lot of racket and you look up to see something like a guy in the road. It kind of startles and distracts the mind in ways where I could see somebody focusing too much on that to see a box truck in the road.
Driver wasn’t paying attention, not surprising. You have to actively ignore the warnings to pay attention or hack together a rig to put torque on the wheel.
I don't understand how paying attention can prevent this sort of thing.
Let's say you're paying attention with all your might. You see something up ahead, like an overturned tractor trailer. You expect autopilot will sense it, but you're preparing for it not to. But how do you know it's not going to sense it? Well, obviously, when it hasn't started reacting in a reasonable time. But then it's too late!
In order for a human to have time to figure out that the software isn't working it would have to routinely operate with (much) slower reflexes than a human. But then it would be useless, as you can't slow down normal driving, and in any case, that's not how they designed it.
The hypothesis that people have unreasonable expectations of something called "Autopilot" seems unnecessary and irrelevant to me.
There's a notable time gap between the time when the autopilot should start braking at a reasonable rate, and the last possible time when a full-strength brake will be sufficient. That time gap should be sufficient for an attention-paying human to apply the brakes.
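To put rough numbers on that gap (the speed is the one reported in the article; the deceleration values are assumptions):

    # Back-of-the-envelope; deceleration values are assumptions, not measurements.
    v = 68 * 0.44704          # 68 mph in m/s (~30.4 m/s)
    a_gentle    = 3.0         # m/s^2, comfortable braking (assumed)
    a_emergency = 8.0         # m/s^2, hard braking on dry pavement (assumed)

    d_gentle    = v**2 / (2 * a_gentle)     # distance needed if braking starts "on time"
    d_emergency = v**2 / (2 * a_emergency)  # distance needed for a last-moment panic stop

    gap_m = d_gentle - d_emergency
    gap_s = gap_m / v                       # time spent covering that gap at full speed

    print(f"gentle stop: {d_gentle:.0f} m, panic stop: {d_emergency:.0f} m")
    print(f"decision window: ~{gap_m:.0f} m, roughly {gap_s:.1f} s at speed")

That works out to roughly three seconds in which an attentive driver can conclude "it isn't braking" and still stop.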
I am, however, doubtful that it's really reasonable to expect 100% attentiveness from humans, given an autopilot that 99.9% of the time doesn't crash into things.
Ideally there'd be obvious visual feedback that reflects the current state of the Autopilot and what it recognizes. Something that can't be ignored (front and center on the windshield maybe) and that can give the driver sufficient information to make a decision about whether to interfere or not. This doesn't seem to be on the radar for anybody though. It's like everybody assumes they'll figure out a 99.99% working driving AI where driver intervention is never necessary.
It's even more egregious with Tesla, who is already shipping a half-baked self-driving feature in its current state where accidents like in the article are inevitable.
I could even imagine the windshield outlining everything that the Autopilot sees. If you don't see an outline around an object, you know the Autopilot isn't aware of it. No idea how technically feasible that is.
This sort of problem can be generalized I think to a lot of software design problems, including the sort of things I'm assigned at work (even though very mundane and boring as I'm not a real software engineer).
People say "make a program that does <thing> for me". And maybe you whip up something that does 50%, or 80% or 99.5%. But that only makes them more unhappy when it fails. A partial solution is no solution.
So, you have to come up with a model for humans to be augmented by the software. Rather than trusting it to make the decisions, the software needs to take a large amount of data and clarify and distill it so humans can more easily make the decisions.
But people always want to avoid making decisions. That's why managers/leaders have so much power even though they tend to be despised and seem incompetent.
> I don't understand how paying attention can prevent this sort of thing.
The simple answer is that you take over when something absurd is happening in front of you, like a trailer flipped on its side blocking your lane.
You don’t wait till the last moment to take over. You would immediately move over to the right lane, manually. AutoPilot helps in this case by keeping you perfectly in lane while you check your blind spot before getting over. That’s how it’s supposed to work anyway, and that’s how I use it.
In this particular case, maybe it was possible to avoid it, but in general, small changes in trajectory in a car can have big consequences. This tractor trailer may have been huge and crosswise, but there are many things you're supposed to come within inches of, and so long as you miss them by millimeters, there's no problem.
What I tell my mom, who also drives a Tesla with AP, “If the AP isn’t driving better than you could at that very moment, if you think ‘Oh I’d rather be just a bit further over there’ - that’s when you disengage it. Don’t wait to see what will happen next.”
Turns out that driving is 99.9% utter monotony and AP actually drives better than me during all that time.
I'm sorry but if you see a truck on its side and expect autopilot to stop and it doesn't you are 100% at fault.
And if you saw it on its side, why wouldn't you at the very least change lanes?
It's fun to bash on Tesla whenever autopilot fails but they do make it very very obvious and those of you saying otherwise either don't own a Tesla or really don't pay enough attention.
Yes, I would say so if it's something you'd avoid. It's one of those things where, if you're going to ignore all of the warnings and agreements when you get the car... how is that not negligence on the driver's part? Normal ACC doesn't always stop either, but no one yells at those cars. My Passat and e-Golf would regularly fail to stop when traffic was at a standstill.
It's because pedestrians are not an object normally located on an interstate. They need to stop calling it Autopilot. I want to see the data if the drivers hands were on the wheel at the time of impact, and the eye tracking sensor data for the 30 seconds prior to impact.
Detecting a pedestrian where they are not meant to be is exactly when you should slow down. That's a weird choice, so I assume they were having false positives and couldn't leave the detection on.
Tesla is using a data-engine approach where they source rare situations of interest from the fleet and include them in the training loop. Basically their whole approach is relying on finding and plugging holes. I bet pedestrian detection and accident detection on the highway was covered a long time ago.
I’m imagining that a low-dynamic-range camera may see certain tunnels/underpasses’ exits as bright white objects blocking the road. The above article refers to a now-deleted Musk tweet that says something similar.
Didn't even slow down. Humans often brake late, but seldom fail to brake at all.
It's grossly irresponsible of Tesla to ship a system that can't detect big, solid obstacles. This happens over and over.
So far, they've hit a crossing semitrailer, a street sweeper, a car at the side of the road, a fire truck at the side of the road, and two road barriers. This has been going on for five years now, despite better radars and LIDARs.
No brake lights are visible on the Model 3. None at all. Based on this "reverse video", my conclusion is that the M3 doesn't even brake.
-------
Somewhat worrying is the pedestrian (the truck driver?) who jumps back to get out of the way of the M3. Not only did the M3 miss the truck, but the M3 didn't seem to see the pedestrian either.
That's a daytime video with a crappy camera at an abnormal-for-other-drivers angle. It's entirely reasonable for them to not be visible, both due to the low intensity difference and due to the angle not being what lights optimize for.
Also note all the cars that followed and clearly hit their brakes. They're invisible or nearly-invisible, except when they go through the overpass's shadow.
Even the driver behind the Tesla seems to slam on the brakes at the last minute, lose control of the vehicle and then dangerously swerve across two lanes to pass the obstacle. What on earth is going on with these road users? I'd expect a very slow approach, hazard lights on, stop and check that everyone is OK before even thinking about continuing around the wreckage.
Tesla has been very stubborn about avoiding LIDAR and this is what happens as a consequence. The rest of the self-driving scene fundamentally relies on LIDAR to get things working. Believe me, if cameras were sufficient people wouldn't use LIDAR. LIDAR is expensive. I have spoken to quite a few people in the field and the lack of LIDAR in Tesla vehicles baffles them.
Radar is not a substitute for LIDAR, and the ultrasonic sensors are really high latency and have terrible resolution.
I would like to see how often things like these happen with Waymo.
Maybe, but the problem right now is that there is no module that Tesla can purchase and use on their cars. All the Lidar modules currently in use are these large and, more importantly, expensive spinning devices mounted on top of the car. These make sense for robotaxis, where cost and unsightliness are not an issue, but they're not an option for Tesla right now. I believe a couple of companies have made headway with cheaper, smaller (solid state?) devices that might be an option in the near future, but these are just recent developments.
That's a good point, I didn't know Audi was shipping a model with Lidar out and the price point is around where the Tesla's are. Certainly the reasons to not include Lidar are dropping.
Doesn't look unsightly to me. Also given that every car company and self driving car uses LiDAR suggests that over time the costs will come due to economies of scale.
Also, aren't people forgetting that Tesla charges a massive premium for Autopilot? People are willing to pay a lot for self-driving, more than enough to cover the cost of the LiDAR sensor.
>> Doesn't look unsightly to me. Also given that every car company and self driving car uses LiDAR suggests that over time the costs will come due to economies of scale.
This has been predicted for radar units over optical in other industries. It never happened. Optical became cheap and capable, radar stayed slow and expensive. I see no reason why this would change in car manufacturing.
How about laser rangefinders? Even one of them in the centre of the vehicle in the direction of travel would be useful for high speed impact avoidance - well, assuming they work reliably and how I think they do; I'm not a hardware guy. But would a laser rangefinder pointing forward be useful? They're not that big and bulky and should at least be useful for braking when it detects something closing too rapidly...?
I guess I should be more clear - I'm guessing (but I could be wrong) that a laser rangefinder would consider heavy rain/fog/smoke to be a solid barrier and wouldn't slow down, it'd stop.
That would be my guess too.. but that's probably what you want to happen if automated sensors are unable to perceive the world around them - slow and stop the car until a human takes over again...?
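For what it's worth, a single forward rangefinder would give you enough for a crude time-to-collision check along the lines being described; a minimal sketch, with the threshold and example numbers assumed:

    def time_to_collision(prev_range_m, range_m, dt_s):
        """Estimate TTC from two successive range readings."""
        closing_speed = (prev_range_m - range_m) / dt_s   # m/s toward whatever is ahead
        if closing_speed <= 0:
            return float("inf")                           # not closing, no threat
        return range_m / closing_speed

    # Example: 60 m, then 57 m a tenth of a second later -> closing at 30 m/s (~67 mph).
    ttc = time_to_collision(60.0, 57.0, 0.1)
    BRAKE_TTC_S = 2.5   # assumed threshold
    if ttc < BRAKE_TTC_S:
        print(f"TTC {ttc:.1f} s: brake")

Whether that trips on heavy rain, spray, or fog depends entirely on what the sensor returns in those conditions, which is the open question raised above.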
> Believe me, if cameras were sufficient people wouldn't use LIDAR
I don't necessarily think LIDAR is a bad idea, but this logic seems kind of circular. Isn't the issue that nobody knows what's sufficient? At the very least the engineers at Tesla seem to think LIDAR isn't necessary (maybe not a slam-dunk argument but humans don't use LIDAR to drive, so it's at the very least plausible computer vision systems alone are sufficient)
If you're willing to spend DOD type bucks you can get some insanely good (like the old boring legacy tech will keep track of a city's worth of vehicles and dismounts without getting them all confused with each other) radar. The secret sauce is in the algorithms though and lord knows how many millions of dollars and man hours have been poured into that.
Limited aperture on a car means limited angular resolution even with millimeter wave. This can mean the information needed to tell whether something stationary is in your path just isn't there-- especially in the vertical direction where aperture is smaller.
A big flat surface that's not normal to you like the box truck may also not be a very large return.
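To put rough numbers on the aperture point (the frequency is the common automotive band; the antenna sizes are assumptions):

    # Rough diffraction-limited beamwidth ~ wavelength / aperture.  All sizes assumed.
    c = 3.0e8
    f = 77e9                      # common automotive radar band
    wavelength = c / f            # ~3.9 mm

    aperture_h = 0.10             # ~10 cm of horizontal antenna behind the bumper (assumed)
    aperture_v = 0.03             # ~3 cm vertically (assumed)

    beam_h = wavelength / aperture_h     # radians
    beam_v = wavelength / aperture_v

    r = 100.0                     # meters of sight distance
    print(f"footprint at {r:.0f} m: ~{r * beam_h:.1f} m wide, ~{r * beam_v:.1f} m tall")

With numbers like these the beam at 100 m is a few meters wide and over ten meters tall, which is exactly why a single return can't easily be separated into "overhead sign", "bridge", or "truck lying across the lane".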
Doesn't seem like it was necessary here. I'm no computer vision expert, but surely parallax between multiple cameras and/or multiple time points would make detecting such a massive object simple.
Large uniform/featureless surfaces are the bane of stereo vision. And propagating in from the borders is extremely risky if there's any occlusion affecting the borders (read: if you get the borders wrong).
Maybe in this particular instance you could look at the data and go "yep, that white surface is indeed closing in and not just some part of the sky". But how many false positives do you allow for "there's a huge thing in my way that requires emergency braking" in interstate scenarios?
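A small synthetic example of that failure mode, assuming OpenCV's block matcher (the image sizes, intensities, and crop are arbitrary):

    import cv2
    import numpy as np

    # Synthetic stereo pair: a lightly textured background with a large uniform
    # bright rectangle in front of it (think: the white side of an overturned trailer).
    rng = np.random.default_rng(0)
    background = rng.integers(80, 180, size=(240, 320), dtype=np.uint8)
    left, right = background.copy(), background.copy()
    left[80:200, 100:260] = 250        # uniform "obstacle" in the left image
    right[80:200, 90:250] = 250        # same obstacle, ~10 px of disparity

    matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
    disparity = matcher.compute(left, right).astype(np.float32) / 16.0

    inside = disparity[100:180, 120:240]   # interior of the uniform region
    print(f"usable disparity inside the obstacle: {(inside > 0).mean():.0%}")
    # Block matching needs texture: the interior of a featureless surface typically
    # comes back as "no match" (invalid disparity), i.e. unknown depth rather than
    # "large object at ~10 px disparity".  Depth, where it exists, is Z = f*B/d.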
It seems like the color of the truck and that of the road were near identical. Maybe no different from what it looks like when the road changes texture to a worse surface or has an odd pothole.
It is likely the car alerted the owner to take over, but did not brake on its own.
The thing about LIDAR is detecting objects becomes shit easy. One thing is for sure. I can guarantee such an accident would not have happened with LIDAR.
Tesla only uses Cameras (trained for the "normal appearance" of cars, that is, front and back views) and Radar (notoriously high false-positive rate for standstill targets, a.k.a. clutter, so they need to be suppressed to avoid hard braking every now and then), no Lidar. This overturned truck would have been difficult to incorporate into a training set for semantic labeling if you want a safe reaction to all kinds of unusual objects together with a reasonable false-positive rate. Similar classification failures in all those cases. Can't be solved reliably with just vision and Radar.
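A caricature of the kind of suppression being described (field names and thresholds are made up, not Tesla's actual logic): a radar return whose range rate exactly cancels your own speed looks the same whether it's a road sign, an overpass, or a truck lying across the lane, so stationary returns get dropped unless something else corroborates them.

    from dataclasses import dataclass

    @dataclass
    class RadarTarget:
        range_m: float
        range_rate_mps: float      # negative = closing on us
        vision_confirmed: bool     # did the camera pipeline flag something here too?

    def keep_target(t: RadarTarget, ego_speed_mps: float, tol_mps: float = 1.0) -> bool:
        # Ground speed of the target = measured range rate + our own speed.
        ground_speed = t.range_rate_mps + ego_speed_mps
        stationary = abs(ground_speed) < tol_mps
        # Stationary returns are overwhelmingly clutter (signs, bridges, barriers),
        # so in this sketch they are kept only if vision independently agrees.
        return (not stationary) or t.vision_confirmed

    ego = 30.4  # ~68 mph in m/s
    overturned_truck = RadarTarget(range_m=120.0, range_rate_mps=-30.4, vision_confirmed=False)
    print(keep_target(overturned_truck, ego))   # False: filtered out as clutter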
Can't radar and cameras augment each other and help solve the false positives? I just don't understand; it's a pretty simple thing to notice, humans can solve it with vision only. It doesn't need to label what it is exactly, it just needs to understand that there's something there, and that it's probably not a good idea to plow into it at full speed. I'm very positive about what Tesla is doing overall, but this issue seems to have been unresolved for years, and the way they handle these cases information-wise is also bad.
Computer vision is far behind human vision. Human vision includes the human brain, which we have not been able to come close to replicating. I am working on a years long computer vision project right now. We still struggle to do basic tasks that humans can do easily.
Making a computer vision system for a car that always knows what images mean the road is clear and what images mean the road is blocked with 100% accuracy is very hard.
That said, Tesla has also said they can determine the 3D geometry of the scene using only computer vision, which is certainly a known task (called photogrammetry or structure from motion). However, it seems their algorithm failed to accurately judge depth in this case. With a clear picture and a well-understood camera system it is pretty doable, but I haven't looked at this article, so perhaps the footage was not sufficiently clear.
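For reference, the two-frame version of that task is standard; a bare-bones sketch with OpenCV (the camera matrix and the feature matching are assumed to exist already):

    import cv2
    import numpy as np

    def two_view_points(pts1, pts2, K):
        """Recover up-to-scale 3D points from matched pixels in two frames.
        pts1, pts2: Nx2 float32 arrays of corresponding pixel coordinates.
        K: 3x3 camera intrinsic matrix (assumed known from calibration)."""
        E, _ = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
        P2 = K @ np.hstack([R, t])
        pts4d = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)
        return (pts4d[:3] / pts4d[3]).T    # Nx3, scale unknown on its own

    # The catches that matter here: you only get depth where features can be matched
    # (a big uniform white surface matches poorly), and the scale is ambiguous unless
    # you fuse odometry, known camera height, or another sensor.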
> it's a pretty simple thing to notice, humans can solve it with vision only
Yes.
Unfortunately, this wasn't human vision: it was computer vision.
When both systems succeed, they by definition succeed similarly: I say by definition, because human vision is the only judge of computer vision's success.
yeah, but Tesla being unable to do it doesn't mean that someone can't.
It's a hard problem, but a system with more sensory inputs than the human optical system shouldn't solve the same problem worse -- this points to a real issue in the problem solving methodology, but doesn't do much to condemn the idea of camera/radar fusion.
Now, if the question was "Can Tesla do this reliably?" -- different answer.
This is what symbolic AI skeptics in the 90's dismissed as 'brittle' solutions.
You engineer an AI system, then find it fails under some unforeseen situation.
You fix for that situation and find it fails under some other unforeseen situation.
I was driving on the highway the other day and there was a large black object ahead in my lane. I was about to take evasive action when the object shifted slightly. The motion was enough for my brain to sense it was a lightweight object (not a human fallen on the road) and a further movement in a puff of wind to establish it was an empty garbage bag rolling on the highway. No need for evasive action.
On the other hand if I had seen other debris/leaves tumbling in the wind but the black object was stationary my brain would have sensed it as a potentially heavy object and taken evasive action.
I remain skeptical that we can if-then-else engineer our way to self-driving until cars have the common sense adaptive behavior that humans have.
The human system includes the human brain, which is an important part. We can easily make optical sensors that have better visual acuity than the human eye, but "scene understanding" continues to be a task where humans dramatically beat computers. The brain is an important part of this.
It's perfectly reasonable for a system with more sensory inputs than a human to fail at a task a human does with just the one.
Do you think the car understands cause and effect? Do you think the car actually groks object persistence? Do you think the car has any sense of value attached to its own existence? Or an instinct to increase the amount of time it has to resolve a reading it doesn't understand?
This is the failing of your lab-trained neural network. It is only proficient in what it has been trained on, and only that. It doesn't have a sense of self. It doesn't have intuition, an instinct for survival, or any of the millions of other simultaneous neural subprocesses that even the simplest biological creatures possess to keep themselves alive. It can't forget, and neither can it learn in its runtime state. It's a crapshoot whether it'll do everything it is supposed to, how it is supposed to, just as it is when a human (coincidentally, another neural-network-driven creature) gets behind the wheel.
You won't build a perfect neural network for X just like you can't make a perfect human by selective lobotomy. There is only so much neural ductility that can be accommodated.
Tesla doesn’t always suppress radar. My Tesla will occasionally phantom brake due to long sweeping left hand curves in the carpool lane with the adjacent lane stopped or low hanging road signs.
> Radar (notoriously high false-positive rate for standstill targets, a.k.a. clutter, so they need to be suppressed to avoid hard braking every now and then)
Really? The adaptive cruise on my bog-standard GM Volt seems to have no problem seeing static obstacles with its "primitive" radar.
The problem is not "seeing" the stationary obstacles, the problem is distinguishing ones in the lane from those outside. Issues arise when you try to avoid false positives (such as cars in a different lane in a turn, etc.).
> the problem is distinguishing ones in the lane from those outside.
My vehicle is doing adaptive cruise to the car in front of it--including on banked curves and in Los Angeles traffic, sometimes dense, sometimes empty. This means it is working out which car is in my lane--probably via beam steering combined with my steering wheel position.
This doesn't seem to be an impossible problem. This seems to be a Tesla problem.
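A toy version of that in-lane test (geometry simplified, every threshold assumed): predict your own path from speed and yaw rate, then keep only targets whose lateral offset from that arc is within about half a lane.

    import math

    def in_my_lane(target_range_m, target_bearing_rad, ego_speed_mps, yaw_rate_rps,
                   half_lane_m=1.8):
        """Rough check: does a target at (range, bearing) sit on our predicted arc?"""
        curvature = yaw_rate_rps / max(ego_speed_mps, 0.1)   # predicted path curvature
        x = target_range_m * math.cos(target_bearing_rad)    # vehicle frame: x forward
        y = target_range_m * math.sin(target_bearing_rad)    # y to the left
        path_y_at_x = 0.5 * curvature * x * x                # lateral offset of the arc at x
        return abs(y - path_y_at_x) < half_lane_m

    # Straight road, obstacle dead ahead at 120 m -> relevant:
    print(in_my_lane(120.0, 0.0, 30.4, 0.0))                 # True
    # Sign ~4 m to the right while gently curving left -> likely off-path clutter:
    print(in_my_lane(120.0, math.radians(-2.0), 30.4, 0.01)) # False

The hard part is everything this sketch waves away: noisy bearings, curvature that changes over the next hundred meters, and lanes that merge or split.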
It's not the first case I've seen where Tesla's Autopilot seems to override AEB[1]. Definitely confirmed behaviour to-date and seen as early as Q1 of this year. Any car from 2009 or later with AEB would have avoided this collision.
It probably would be great if car software didn't follow the "move fast and break things" mantra.
I don't think that's true. As far as I know, typical AEB systems are radar-based and can't determine what's on or off the road, so they rely on Doppler to pick out the car in front from the environment. If you're driving up towards a completely stationary object, it will have a very hard time detecting it.
When we bought our used car, we were deciding between a slightly newer model and a slightly older model, and one of the differences was the older model used stereo vision for AEB and the newer model used radar. Even the car salesman said that the stereo vision worked better (as it detects stationary objects). I wonder why the move to radar. Is it cheaper?
Modern AEB systems use cameras in addition to radar, and some are actually camera-only. Besides stationary obstacles, you need a camera to detect pedestrians, cyclists, etc.
Looking at the video, a bunch of dust seems to be kicked up around 1.5 seconds before the impact, suggesting hard braking. Clearly it activated too late to avoid the collision, but it certainly would have reduced the energy of the impact, and thus potential injury to occupants.
Doesn't this take us back to the discussion around the word Autopilot? Tesla sells it as driver-assist, no more autonomous than an autopilot system in a Boeing 737. Good for following the obvious path, and otherwise needs input from the actual pilot.
Airliners have the large benefit that solid obstacles don't tend to suddenly appear in their flight paths (and the most likely candidate, other planes, is being kept away by ATC and failing that warned about by transponder systems), so the cruise-altitude "follows the obvious path" is a lot more useful.
That doesn't protect against operator errors of various kinds (setting a course into a mountain, not realizing the autopilot was disengaged, ...), but it is a large difference from streets. Even in planes, pilots taking over when automation encounters a situation it can't handle is a risk, due to potential confusion about the current situation, and on streets the reaction times necessary to react to an autopilot failure are typically much shorter.
> To me, it also looks as if it almost hit a person, too (the driver of the van?)
You’re right, I hadn’t noticed that! That's even worse. A person on the side of the road is not that uncommon a sight, and a collision with one is a near-certain fatality.
If only we had a regulatory agency that monitored this sort of stuff and then punished companies that used exploitative language to dangerously market their products as something they are not.
It seems like the obvious path would be around solid obstacles. I've never driven a Boeing 737, but will the autopilot really fly you directly into an obstacle with no warning at all? My understanding was that these vehicles are equipped with radar and collision avoidance systems.
Aviation auto-pilot will most definitely fly you into things.
Systems exist which will override auto-pilot in the event of such happenings, like Auto-GCAS (ground collision avoidance systems) and equivalents, but these aren't really designed for sudden obstructions -- they are mostly designed to help regain level flight if a pilot becomes unresponsive during maneuvers and is pointed in A Bad Way -- but auto-pilot is not going to be engaged during such flight, anyway. But systems like this do have authority to disengage whatever other systems that may interfere with their operation, like auto-pilot.
There are many onboard systems to allow awareness of such conditions that will result in collision, but not much in control of actual flight characteristics on any but the most advanced or experimental planes.
I once saw a near miss of 2 civilian planes. One was taking off from an airport next to Lake Superior, and the other was a float plane taking off from the adjacent bay. I only noticed because I heard the engines rev up as they took evasive maneuvers.
Generally, non-military aircraft avoid collisions using radio broadcasts from various beacons (their own and those of other planes). Actual radar systems for detecting non-broadcasting obstacles are, per Wikipedia, generally only found on military aircraft. So if you define "an obstacle" as something that is not a plane broadcasting a beacon, then, yes, a Boeing 737 would probably run directly into it if it were somehow in the path of the autopilot.
Yes it will, autopilot just keeps the heading and the altitude.
There may be other sensors onboard that help with detections (it is a lot more expensive after-all) but the autopilot control system does no such thing.
Even with all the sensors onboard, the 747 would most likely run into something without assistance, in fact I've seen a few cases of this on Air Disasters.
That's absolutely wrong. A driver without assist will be fully engaged, while with assist they'll be tempted to divert their attention. Once attention is diverted they're no better than autopilot on its own.
So a teenager is more likely to get in an accident than the average driver. I think the insurance companies have been aware of that forever.
My point is that while driver+autopilot is claimed to be the safest option, I contend that it is less safe than driver only. At least at the current state of technology, and even including teens.
> This happens over and over. So far, they've hit a crossing semitrailer, a street sweeper, a car at the side of the road, a fire truck at the side of the road, and two road barriers. This has been going on for five years now, despite better radars and LIDARs.
And in that time, how many people have crashed into other people, whether they're in cars or not? Buildings? Walls? Children?
Are you claiming that Tesla's safety performance is worse than that of people?
Man, this kind of argument is getting tiring... It is obvious that 100% of humans in the OP's situation, barring a particular medical condition, would perform better than Tesla's autopilot.
Tesla’s autopilot doesn’t fail because of malfunctions such as short-circuits or bad weather conditions, or anything special happening (as opposed to humans), it fails under perfectly normal condition.
You can’t even start to compare autopilot safety performance to people performance unless you work for tesla’s marketing, for the simple reason that no human is in theory authorized to let that thing drive itself without supervision. What you’re comparing is human + autopilot vs only human.
What seems extraordinary is that human+autopilot actually performs worse than just humans in some conditions, such as the OP's, simply because of the way autopilot was advertised.
> human+autopilot actually performs worse than just humans in some conditions,
This is well studied in the aviation field, and an entire subfield of psychology to study this exists called "Human Factors".
There are many, many fascinating results from studies of human factors in aviation, but one I can point out is that humans pay less and less attention as you present them with more and more reliable automation, to the point where automation can start to decrease safety: humans get so complacent, relaxed, and distracted (my instructor called this "flying fat, dumb, and happy") that when they ARE called upon to take over and act suddenly, they are simply unaware of the situation and need more time to get their bearings than there is before decisive action is needed.
There's a very simple fix, which is to have a human driving and allow autopilot to work only in an emergency, when it detects the car is about to crash. Then you get all of the touted safety advantages, and none of the bizarre accidents.
Tesla has shipped ~1M cars total; the USA has roughly 250M registered cars. Accidents happen to around 1 in 50 cars per year in the USA. Most of the human accidents happen while drunk, distracted, speeding, or driving recklessly. Suspiciously, the DriverKnowledge source adds these up to 103% of causes.
But we can say that a Tesla autopilot is never drunk, distracted, should never be speeding or driving recklessly. Crashing into an upturned truck, a fire truck, a street sweeper, a parked car, while sober, paying attention, driving carefully and attentively, is worse than human driving.
> It's grossly irresponsible of Tesla to ship a system that can't detect big, solid obstacles.
I think this is a case of something that seems grossly irresponsible because it's a task that humans can do extremely easily. Clearly, if LIDAR cost nothing then Tesla would happily use it in their cars. The reality is that a technology that can perform as well as a human at static object detection is cost prohibitive, which is why you don't see it in any mass-market vehicles. I would imagine in some cases a human driver, i.e. a taxi driver, still costs less than LIDAR.
So it becomes a question of how responsible we should make Tesla for the expectations that some of their customers may have when they release any kind of driver-assistance technology. You can argue it both ways: that it would be irresponsible of them to release the technology they have now without guaranteeing that every driver fully understands its limits, but also that it would be irresponsible of them not to release such technology given the number of lives it can save in its current form.
Presumably it would cost even less not to include this feature at all. I think a reasonable question is whether this feature avoids more collisions in average conditions than it causes in extraordinary ones.
Even if it saves more than it costs it might cost more in pr and court costs than is earned from selling the feature.
It is probably impractical to sell something that saves two people we cannot enumerate or name if it kills one person with a name, address, next of kin, and lawyer.
> For any human driver paying even the slightest bit of attention, this accident is almost an impossibility, assuming the driver had the gift of sight and functional brakes.
No. A friend of mine was driving home up 101. There was a large boat sitting in her lane. She thought her brain was playing tricks on her and did not believe there was a boat sitting on the highway. And she crashed into the boat.
A friend of mine was driving home up 101. There was a large boat sitting in her lane. She thought her brain was playing tricks on her but eventually figured out there was actually a boat sitting on the highway. She was able to avoid the boat.
This is getting a bit ridiculous. Silicon Valley mindset being applied to critical safety focused products is really bad.
I hope they regulate the crap out of Tesla, and God, I hope they do comma.ai too, whose founder seems even more batshit crazy.
Comma.ai's system actually has an eyeball tracker that I think stops things if you are not looking where you are going which would likely prevent this kind of crash.
The level of respect that someone like Andrej Karpathy gets is jarring: he should publicly apologize instead of being all saintly (oh look ma, I built a toy with 6 cameras, I am so proud of myself) while talking about deep learning or whatever crackpot stuff they are cooking up, thinking they are solving the world's "problems".
Haven't seen anyone mention the human that almost gets hit. If the person standing in the lane (the truck's driver maybe?) hadn't stepped out of the way, the Tesla would have run him over without slowing down... Looks like multiple clear perception system failures.
Who was it that said (and I'm paraphrasing) - "Self-driving is easy, you could build an AI yourself that will work 90% of the time. It's writing an AI that works 100% of the time that's difficult".
He built a car that drove cross country.
It's not "cameras vs. LADAR", it's about software that needs to work flawlessly.
I love "Lean" development process, and "just get it out there", but I wouldn't write banking software that way.
Or autopilots for that matter. A blatant disregard for human lives.
He's stubborn, but realistically I don't think they have any choice. Retrofitting all the cars they sold with FSD with a set of LIDAR sensors (or refunding those purchases) would likely bankrupt the company.
Buying a car with FSD HW years before we even know what's needed to get FSD, is like buying a PC, spec'ed to run Duke Nukem Forever, in 1997.
As a five-day-new Tesla driver playing with Autopilot, you can just feel the thing is overworked. The road I live on, HWY 36 in California, was a joke on Autopilot. The car can't handle it; it has to slow well below the speed limit. Every other turn it tries to drive off the road and gives up steering. Roads with cracks blacked over with pavement sealer also confuse the car to no end. The car is color blind, and the vision system gives it no time to react.
Assuming autopilot was indeed engaged, this seems to be clear evidence that cameras alone are insufficient to perceive big objects in a car's path OR judge its distance to them.
This car traveled in a straight line in near-perfect weather for at least 10 seconds toward a large motionless object blocking its lane and the next one, but still overlooked the danger.
Imagine if this had happened in bad weather or at night, where risk of failure is substantially greater. This degree of incapability implies that vision alone is not only inadequate to perceive obstacles ahead that should be obvious to any human, but profoundly inadequate.
It doesn't say anything about the limitations of cameras. It says a lot about the current software limitations of Autopilot. As does the owner's manual:
> Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles.
To be fair, Tesla has for years now been pushing the fact that Autopilot doesn't mean you can sit back and let the car drive, and "Full Self Driving" is the term for that. Now that Full Self Driving doesn't look like it will live up to its name (not saying Teslas won't ever be fully self-driving, but the product they marketed as FSD to the first buyers of FSD is clearly never going to enable L4), they are saying FSD is just a collection of features in that direction, and "robotaxi" is the next word by which they seem to mean "really, really self-driving." Given the nonsensical timelines Musk has been giving, they'll have to find a way to subvert the expectations of that word too.
The occurrence of this crash shows that the self-driving system as a whole is insufficient. I don’t see how you can be sure the problem is the camera and not the software.
I don't think it's a problem with cameras. I think it's a problem with autopilot. We don't know exactly how it's coded, but lots of other adaptive cruise control systems don't try to look for stationary objects at all.
> lots of other adaptive cruise control systems don't try to look for stationary objects at all
At highway speeds they tend not to [0] since slamming the brakes at perceived stationary objects (signs on the side of the road, cars veering in front of you, etc.) might be even more dangerous. Maybe the next generations of (reasonably priced) hardware will reliably detect such objects in the path of the car.
P.S. Of course it's not an excuse, it's an explanation for why this happens.
For stationary objects as big as a [fire]truck, that's not an excuse. They could see it early and slow down pretty gently if a lane change is out of the cards.
Edit: "P.S. Of course it's not an excuse, it's an explanation for why this happens."
For objects where you would not have to slam on the brakes, saying that it would be dangerous to slam on the brakes is not an explanation.
Slamming the brakes on the highway, interstate, autobahn is dangerous for the driver behind you. In some countries (maybe all) it's illegal to do this without an immediate danger. The car may see a sign on the side of the highway as "in your path" simply because the road is curving and the car can't tell that there's a radius and the sign is outside it.
Between RADAR, LIDAR, and cameras I'm sure there's a hardware setup that can adequately resolve the issue, but that would be too expensive. Most manufacturers don't pretend their cars are self-driving. But one stands out, with Musk insisting the car is fully self-driving, safer than a human driver, and ready for prime time except for pesky regulation. It is not, and it will not be for a long time. They can probably self-drive in less than 1% of driving conditions, and even then they won't do it successfully 100% of the time. The only reason the statistics don't look much worse is that most drivers aren't suicidal and will keep saving the situation when the car shows that even driving straight between two painted lines is a challenge for a computer.
> Slamming the brakes on the highway, interstate, autobahn is dangerous for the driver behind you. In some countries (maybe all) it's illegal to do this without an immediate danger. The car may see a sign on the side of the highway as "in your path" simply because the road is curving and the car can't tell that there's a radius and the sign is outside it.
I'm asking for a gradual slowdown, not slamming on the brakes.
There wasn't even a curve here. The lane went straight and visibly into the stationary object.
The whole thing about the "car doesn't know" part is that... the car doesn't know. The car just sees an object that just sat there the whole time with speed 0 and is programmed to assume that is a stationary object on the side of the road. Which is the likeliest explanation on a highway. Without LIDAR the car can't do more than either keep braking for every object on the side of the road, then get back up to speed once it realizes it's safe (however long that may take), or just take stationary objects as "most likely not in my pass, will ignore". By the time they realize a collision is imminent it's too late to do more than slow down a bit.
Adaptive cruise control has one selling point: it maintains speed automatically thus providing more comfort. If your car keeps slowing down (or stop) for such objects then nobody would use it. Which is why it's programmed as such, for better or worse. Don't forget that the driver is still driving, the so called "fully self driving" is just a collection of driver assists that require constant attention since they can and will make fatal mistakes even in the simplest driving conditions.
There are points in the road where it's a bit unclear the exact path you will take. This is not one of them. The car has an extremely clear line of sight to the lane, enabling it to know that there are no curves before the obstacle.
Just like "emergency braking is bad" shouldn't remove the option of slowing down, "car can't always tell where the lane goes" shouldn't exclude using precise lane information when it is available.
> Humans do it with vision alone, do they not? This seems to be a failure of processing, not data.
In the best-case scenario, sure, but having been in motorsports since I was 16 and then spent a lot of my adult life in the auto industry, the reality is a resounding NO: humans are horrible behind the wheel of what is essentially a less-volatile bomb with 4 wheels.
I won't even mention my time riding a motorcycle on the street, where I have literally looked into the eyes of a driver only to see them make the mistake anyway--it's why I have to have a straight-pipe liter bike for the street; we've tested this and you can seriously hear me 1/3 mile away. It's astonishing how bad 99% of drivers are; realistically they don't have the reaction times needed to drive anything with that much weight, much less power, in public, and yet we let them anyway.
Hell, during this weekend's protests I saw police and ordinary citizens driving straight into people on the street with signs; why isn't that seen in the same light?
So, it is exactly because of those experiences that I have to see this failure as just one of the many on the way to a more viable future where cars are fully autonomous. It's a shame; I'm just glad I got to live and learn in an era where humans were able to do what seems an entirely absurd thing.
Yes and no, if you were driving based on a stream of fixed, low-res, low-framerate video you would definitely crash more often than you do now.
You're also disregarding the (hugely important) role of other senses like sound and stuff like "I saw a bike coming from the right 10 seconds ago, where should it be now?".
How do you think the objects are detected with "vision" by autopilot? Are you considering classifying all the objects in the world in all the possible situations so that that dumb computer can avoid them? So far that seems to be the plan.
You don't need to train with every object in the world to train in the general concept of road occluders.
Autopilot users are ALWAYS supposed to watch the road and take control in unusual circumstances. Like an autopilot system in an airplane, it will fly you straight into a mountain if you give it that course and don't monitor where you are.
>> Autopilot users are ALWAYS supposed to watch the road and take control in unusual circumstances.
It looks like there is no single general concept of road occluders. If autopilot can't avoid a big object in the middle of the road what is it good for? Tesla claims to provide more than an airplane autopilot, they say its tech would avoid that mountain.
Should the car have slowed down by itself? Yes. Was the driver not paying attention, without their hands on the wheel? Obviously. It is driver assistance; you agree to the terms and conditions, and you still have to DRIVE the car.
> To understand the strengths and weaknesses of these systems and how they differ, we piloted a Cadillac CT6, a Subaru Impreza, a Tesla Model S, and a Toyota Camry through four tests at FT Techno of America's Fowlerville, Michigan, proving ground. The balloon car is built like a bounce house but with the radar reflectivity of a real car, along with a five-figure price and a Volkswagen wrapper. For the tests with a moving target, a heavy-duty pickup tows the balloon car on 42-foot rails, which allow it to slide forward after impact.
> The car companies don't hide the fact that today's AEB systems have blind spots. It's all there in the owner's manuals, typically covered by both an all-encompassing legal disclaimer and explicit examples of why the systems might fail to intervene. For instance, the Camry's AEB system may not work when you're driving on a hill. It might not spot vehicles with high ground clearance or those with low rear ends. It may not work if a wiper blade blocks the camera. Toyota says the system could also fail if the vehicle is wobbling, whatever that means. It may not function when the sun shines directly on the vehicle ahead or into the camera mounted near the rearview mirror.
> There's truth in these legal warnings. AEB isn't intended to address low-visibility conditions or a car that suddenly swerves into your path. These systems do their best work preventing the kind of crashes that are easily avoided by an attentive driver.
> The edge cases cover the gamut from common to complex. Volvo's owner's manuals outline a target-switching problem for adaptive cruise control (ACC), the convenience feature that relies on the same sensors as AEB. In these scenarios, a vehicle just ahead of the Volvo takes an exit or makes a lane change to reveal a stationary vehicle in the Volvo's path. If traveling above 20 mph, the Volvo will not decelerate, according to its maker. We replicated that scenario for AEB testing, with a lead vehicle making a late lane change as it closed in on the parked balloon car. No car in our test could avoid a collision beyond 30 mph, and as we neared that upper limit, the Tesla and the Subaru provided no warning or braking.
> Toyota says the system could also fail if the vehicle is wobbling, whatever that means.
Strange phrasing but they're most likely referring to cross-winds, such that the camera (or the computer behind it, rather) cannot determine whether a movement was a vehicle impinging on the path of travel, versus a momentary shift of angle of attack of the camera vehicle.
The car warns you about this every time you turn on Autopilot. Tesla still has room to improve on their marketing, but let's not pretend the only warnings about Autopilot are buried deep in some EULA.
That's a misnomer. Just as autopilot in an airplane requires active attention by a pilot, so too does autopilot in a Tesla require active driver attention.
That quote is from a demo video showing what capabilities will be rolled out in the future when a) the system is ready for the public, and b) regulatory approvals are granted.
People are bad at driving cars. People are impossibly bad, in a way that on average can never be improved, at monitoring a car they aren't driving and reacting fast enough.
60 mph = 88 fps. Reacting 3 seconds late is reacting 264 feet too late, especially given the other 240 feet you are going to need to actually stop.
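A quick back-of-the-envelope check of those numbers (a sketch assuming a 3-second delayed reaction and roughly 0.7 g of braking on dry pavement; not anyone's official figures):

    MPH_TO_FPS = 5280 / 3600          # 1 mph = 1.4667 ft/s
    G_FT_S2 = 32.2                    # gravity, ft/s^2

    def distances(speed_mph, reaction_s=3.0, braking_g=0.7):
        v = speed_mph * MPH_TO_FPS                      # 60 mph -> 88 ft/s
        reaction_ft = v * reaction_s                    # distance covered before braking starts
        braking_ft = v**2 / (2 * braking_g * G_FT_S2)   # v^2 / (2a) to a full stop
        return reaction_ft, braking_ft

    r, b = distances(60)
    print(f"reaction: {r:.0f} ft, braking: {b:.0f} ft, total: {r + b:.0f} ft")
    # reaction: 264 ft, braking: 172 ft, total: 436 ft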
Blaming people for not doing something we knew people couldn't do seems pointless.
Ask 99 out of 100 people, and they'll say an autopilot "makes the plane fly itself." I'm 100% on board with Tesla and most of Musk's other efforts, but calling this system "Autopilot" was negligent.
Are you (speaking to the audience at large, not to the OP) an engineer at Tesla? Did you speak up when they decided to call this thing "Autopilot"? Why or why not?
> and they'll say an autopilot "makes the plane fly itself."
I don't believe in tailoring your marketing for your most ignorant customers. If you ask a Tesla advisor during a test drive, they will very clearly state the car does not drive itself (ask; I've tried more than once to see if they'd say something other than company stance). It is extremely clear in the vehicle's manual and other documentation put front and center for the driver that Autopilot is meant to be an assist, and the driver is responsible at all times. Don't turn on the feature if you're going to ignore the warning messages and legalese.
> I don't believe in tailoring your marketing for your most ignorant customers.
How about tailoring it for your typical customer? That typical customer only learned about autopilot from movies and shows. Or from the popular phrase for spacing out.
With roughly a million cars sold and only a handful of owners dead (6 [1]) from their own mistakes (not paying attention), that sounds pretty successful to me. Lots of folks would kill for a 0.0006% (six-in-a-million) fatality rate, considering the human failure rate:
> Distracted driving is dangerous, claiming 2,841 lives in 2018 alone. Among those killed: 1,730 drivers, 605 passengers, 400 pedestrians and 77 bicyclists. [2]
Disclaimer: There is some nuance to this. Early Teslas had no Autopilot, some have Autopilot 1 powered by MobileEye, some have Autopilot 2-3 powered by Tesla. I still believe my point regarding statistics stands. Some people have died, but the amount is noise when considering changing the marketing. Pay attention like you’re supposed to and you don’t die.
>> Pay attention like you’re supposed to and you don’t die.
Instant Ubik has all the fresh flavor of just-brewed drip coffee. Your husband will say, Christ, Sally, I used to think your coffee was only so-so, But now, wow! Safe when taken as directed.
Wild new Ubik salad dressing, not Italian, not French, but an entirely new and different taste treat that’s waking up the world. Wake up to Ubik and be wild! Safe when taken as directed.
We wanted to give you a shave like no other you ever had. We said, It’s about time a man’s face got a little loving. We said, With Ubik’s self-winding Swiss chromium never-ending blade, the days of scrape-scrape are over. So try Ubik. And be loved. Warning: use only as directed. And with caution.
Perk up pouting household surfaces with new miracle Ubik, the easy-to-apply, extra-shiny, non-stick plastic coating. Entirely harmless if used as directed. Saves endless scrubbing, glides you right out of the kitchen!
My hair is so dry, so unmanageable. What’s a girl to do? Simply rub in creamy Ubik hair conditioner. In just five days you’ll discover new body in your hair, new glossiness. And Ubik hairspray, used as directed, is absolutely safe.
Has perspiration odor taken you out of the swim? Ten-day Ubik deodorant spray or Ubik roll-on ends worry of offending, brings you back where the happening is. Safe when used as directed in a conscientious program of body hygiene.
Pop tasty Ubik into your toaster, made only from fresh fruit and healthful all-vegetable shortening. Ubik makes breakfast a feast, puts zing into your thing! Safe when handled as directed.
Could it be that I have bad breath, Tom? Well, Ed, if you’re worried about that, try today’s new Ubik, with powerful germicidal foaming action, guaranteed safe when taken as directed.
All quotes from P. K. Dick's Ubik. Sorry to spam the thread but there's really no excuse to use that excuse.
It’s not automatic. TCAS is a warning system (available given the sensors installed) that provides a resolution advisory. It’s up to the pilot to take action.
A very small number of planes do have the capability to integrate this with autopilot, but it’s almost always disabled because it can cause cascading collision issues.
The number of obstacles that LIDAR could prevent hitting, which Autopilot would not pick up, over the average life of a car, is very small.
Whether LIDAR would be worth the money is, I suppose, an academic exercise. You could calculate the number of Teslas involved in such crashes, and divide the price of a Tesla by the price of LIDAR, to decide if it is worth it (I suspect not, at current prices), but ultimately, things will happen even with LIDAR. It isn't a silver bullet.
I'm on my second Tesla. I'm totally biased. It has saved me from inattention several times. I probably wouldn't actually have crashed any of those, but I might have. Regardless, I'm glad I have it, and I am safer with it than without it. Sure, I could screw up and crash on autopilot, but it would be easier to crash without autopilot.
afaik LIDAR is not useful in even minor precipitation (rain/snow/fog). If the car is dependent on LIDAR at highway speeds then this needs to be addressed.
Being dependent on cameras is similar to a human situation and is (hopefully) no worse.
If the car can let me stop paying attention to the road, but only when there's no precipitation, that's still a big improvement over a car that always makes me pay attention. At least in most climates.
This demonstrates the superiority of GM's Super Cruise approach. They have interior cameras that make sure the driver is looking forward (you're allowed to look away for brief periods). Tesla only forces you to keep one hand applying some pressure to the wheel - you could easily be on your phone with your other hand. GM's approach is much more relaxing and prevents accidents like this. That being said, there's no reason Tesla can't easily adopt this approach in the future.
Do ML models for self driving cars start from an assumption that it is safe to proceed forward unless it identifies an obstacle, or does it assume that it is unsafe to proceed forward unless it identifies that the path is clear?
I remember early on when Tesla was touting some kind of crashes/deaths per mile average or something compared to the US average... Anyone have the current numbers on that?
> In the 1st quarter, we registered one accident for every 4.68 million miles driven in which drivers had Autopilot engaged. For those driving without Autopilot but with our active safety features, we registered one accident for every 1.99 million miles driven. For those driving without Autopilot and without our active safety features, we registered one accident for every 1.42 million miles driven. By comparison, NHTSA’s most recent data shows that in the United States there is an automobile crash every 479,000 miles.
> Total overall miles and crashes were significantly reduced in this quarter. [0]
One thing to keep in mind is that the nature of the miles driven with Autopilot engaged is not the same as the miles driven without it (i.e. mostly highway miles vs. city street driving), and likely also different from the miles driven without active safety features.
Agreed, those numbers alone don't convince it's safer.
Also, personally I would have a complication: would I be satisfied with being safer than average? I don't know if I'm average. I tend to pay attention and never have driven under influence (alcohol or w/e), or when sleepy. So a little better than average would not bring me peace of mind.
I think some kind of situational statistic would be relevant here; maybe they could have some part of the fleet as controls with AP disabled (and offer some kind of reward), then the overall accident rate can be compared better.
My intuition was always that autopilot + human would be safer, and the only thing that made me question that was the soggy toast Tesla provides as "proof".
There would be privacy concerns doing an apples to apples comparison, but people could opt in.
Am I the only Tesla owner that finds the blame on autopilot a little hard to believe? My car never reaches the point where I feel that I don’t have to pay attention.
I'm blown away at how trusting of AP and FSD people are. I've only used AP once or twice in the 6 or so months I've had my 3 and it consistently scares the shit out of me.
It's not _bad_ per se, but it drives like a teenager and that has resulted in a fundamental distrust of the system on my end.
How about a limited LIDAR that only looks straight ahead? (Perhaps in a 5x5 degree wedge?) That's likely to be far cheaper than a full LIDAR set, and would provide useful data about a barrier in the vehicle's path.
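As a rough sense of scale (my own geometry check, not a spec for any real sensor), even a narrow fixed wedge covers the lane directly ahead at useful distances:

    import math

    # Width of the patch a 5-degree forward wedge covers at a given range.
    def wedge_width_m(range_m, angle_deg=5.0):
        return 2 * range_m * math.tan(math.radians(angle_deg / 2))

    for d in (20, 40, 80):
        print(f"{d} m ahead: {wedge_width_m(d):.1f} m wide")
    # 20 m ahead: 1.7 m wide
    # 40 m ahead: 3.5 m wide  (about one lane width)
    # 80 m ahead: 7.0 m wide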
Somehow I doubt this problem is as simple as "add a LIDAR" considering there's probably at least one smart person at Tesla who has thought of that idea. Sure you might be right, but my impression is that the 1 in a million cases will still exist.
Adding LIDAR would make the millions of cars on the road a liability. You'd have to make those people whole and basically recall the cars. If you can 'make it work' with cameras, you save millions.
I agree - and I think you're seeing new products in the lidar space with a focus on cost constraint (with more limited resolutions). I really do hope Tesla adopts at least 1 or 2 of these cheaper options when they get cheap enough; my guess is that 50% of a full lidar would get you 80% of the benefits and fix a lot of these long-tail edge cases around driveable areas.
As mentioned elsewhere in this discussion, RADAR had a hard time with stationary objects. They get lots of false positives from things on the side of the road, overhead signs etc.
Ah, an outlier. I don't see any significance to this story other than being a $TSLA trigger word... I've seen many people, including truck drivers get hurt as a result of relying too much on technology (Lane assist, auto-headlights, etc). This is why they're all labelled as an assistive technology.
The driver should have seen a gigantic truck blocking his entire lane, and though it was a failure of autopilot to not detect it, autopilot shouldn't be receiving all the blame for something that is inherently not its job to do 100% perfectly.
Many of the comments seem to jump onto the "autopilot doesn't work, we told you so"-train. I think it is important to remember and be clear that no autopilot system will be 100% safe, nor is any human driver 100% safe. What is interesting is looking at statistics of how often these accidents happen, and comparing that statistic to human drivers.
There are other issues like the way we calculate statistics: for instance we shouldn't just calculate "miles driven with autopilot engaged", without looking at how it was disengaged, what would have happened if it wasn't disengaged at that time, whether that would lead to an accident, etc.
There are issues, yes, but some commenters are acting like they think that this accident is what happens every time there is an obstacle, and as if the autopilot does not have functional obstacle detection at all. This is not the case. This is an exception, a failure in the obstacle detection, not the rule. The fact that this news is getting so much traction is a testament to that. If the autopilot was not detecting any obstacles, we would have news articles like this every week.
Is it not necessary to confirm with a source besides the driver (e.g. Tesla) that Autopilot was actually on?
I feel like "I had Autopilot on and it didn't work" is a pretty convenient excuse if it's not on and you have an accident all on your own.
Because to me that is entirely plausible, yet we're all operating under the assumption (and having lengthy discussions about) this "failing" of Autopilot.
Forget the truck, there seems to be a person (the truck driver?) standing some 50m in front of the truck, waving the oncoming cars to the side.
And the Tesla seems to completely ignore the fact that a human is standing basically right in its planned trajectory.
Missing the truck, fine. But why is the system not reacting to a human on the highway? A human standing on a dedicated highway is a near-certain signal of trouble ahead.
Why is this stuff getting upvoted? All we see is a Tesla that rammed into an obstacle on the highway, without evidence of anything else. We don't know that it was on autopilot, and the driver in this case would say anything to try and get his insurance money back. The car started to brake when it saw an inevitable collision, but otherwise, it's pure speculation.
I have owned a Tesla Model 3 in the UK for almost a year now. The Auto Pilot is only useful on dual carriageways and straight roads and still very frequently suffers from "phantom braking". It should be obvious to any half decent driver that concentration is not optional.
Having said that, the Auto Pilot has saved me from a couple of near misses:
- It has detected the car in front braking suddenly and reacted before my brain had even registered.
- It has nudged the car sideways when a car (in my blind spot) moved too close.
At the moment it is a driving aid, a second set of eyes, not a driver replacement. It is useful as that, but Tesla (as much as I love them) could really do to be more honest in the marketing of their "Full Self Driving" packages.
I expect it will get substantially better, but not in the time frame that Musk says it will.
Only 1 accident that was my fault (I was young and stupid). Been driving for 17 years and have had near misses before.
I don't know for sure if I would have crashed if it wasn't for Auto Pilot, but it's nice to know that it's there, even if it makes mistakes from time to time.
The other factor is that I'm driving more now that I have the Tesla, which probably isn't too surprising (at least until the lock down).
Whenever I hear about these safety features saving people from near misses I always wonder how they've got this far driving without dying. For example, lane keep assist. If you can't stay in a lane you can't drive. It's such a basic requirement. So I can only assume that people are deviating from their lanes simply because these safety features exist.
That may be true of some people. I don't think I've become complacent. It's impossible to tell what would have happened if the car hadn't helped in those situations.
The car does have advantages over humans, like for example, the 8 cameras mounted around the car, making blind spot collisions less likely.
If someone comes out of their lane into mine and happens to be in my blind spot then there's not much I can do. I do turn my head when changing lanes, but I can't see in all directions at the same time. And I'm human, I will make mistakes from time to time.
The car can also react far more quickly than a human nervous system ever could. In situations when I've needed to brake for someone, often I've moved my foot to the brake pedal and found that it is already depressed.
I think safety features are a net win overall. I'm sure seat belts and air bags have caused problems, but in the end we have decided we're better with them than without.
It seems pretty clear that while the AutoPilot technology may be cool or fun, it is nowhere near ready for the full self-driving that Tesla has marketed it as. Cameras + radar + neural networks may not be enough - no matter how cool the underlying software stack is.
Similarly, I also question other self-driving car companies' (Waymo, Cruise, Uber) approaches using a lidar/radar/camera suite.
The long tail of the problem is immense and going to take years to solve with the current technology.
To me it's clear that a new set of technologies that are more dynamic and responsive going into unknown situations is going to be needed to make it "foolproof" (if that is even possible at all).
Disclaimer: This is coming from someone who naively thought full self-driving cars would already be on the road by now, back in 2015.
That's thankfully far out of distribution. Why doesn't Tesla get more slack for incidents like this? Uber had to shut down its self-driving lab for almost a year when one of its vehicles failed and killed a pedestrian.
Why should Tesla get any slack? This particular circumstance looks like it should be within the typical guidelines for Automatic Emergency Braking systems to detect and react to--the sorts of systems which can be found in consumer models of cars.
Uber shut down its self-driving lab in part because the circumstances fell so squarely within "the car should be capable of handling this" territory that extra caution was warranted in figuring out why a very easy case was missed.
Personally I believe we should have fail-safe backups. A radar/lidar would have prevented such an accident from happening. You can't prove that a controller based on a neural network won't collide. But you can prove that a simple lidar-based backup will not collide with a stationary object. Neural networks are great for complex decision processes, but they cannot 100% guarantee safety. Given the Tesla price point and the falling price of lidars, I don't see why one cannot be installed. Personally, I think it should be mandatory for all self-driving cars to have a provable failsafe.
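A minimal sketch of the kind of provable fail-safe meant here (my own illustration with assumed latency and deceleration numbers, not any manufacturer's system), assuming a forward lidar that returns ranges in metres for beams inside the travel corridor:

    def stopping_distance_m(speed_mps, reaction_s=0.3, decel_mps2=6.0):
        """Distance to stop: latency travel plus v^2 / (2a)."""
        return speed_mps * reaction_s + speed_mps**2 / (2 * decel_mps2)

    def must_brake(corridor_ranges_m, speed_mps, margin_m=5.0):
        """Brake if anything in the corridor is closer than stopping distance plus a margin."""
        if not corridor_ranges_m:
            return False
        return min(corridor_ranges_m) < stopping_distance_m(speed_mps) + margin_m

    # At 30 m/s (~108 km/h) with a return 70 m dead ahead, the rule fires:
    print(must_brake([120.0, 70.0, 140.0], speed_mps=30.0))  # True (needs ~84 m to stop)

The point is that the property "never hits a stationary object it can see" follows from the geometry and the braking model, with no learned component to reason about.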
I'm not at all comfortable with camera-only, no-extra-sensor approaches in self-driving cars. While they can train a neural network with millions or even billions of sample images from real-life tests, I do think they're still bound to fail in some corner cases. Remember FB's face detection that turned out to fail on faces of people of certain races? I think there's still some possibility that the NN in a self-driving car would fail to recognize some corner cases, such as a grandma crossing a street in a particularly unusual combination of clothes and colors.
A few years ago I found it hard to believe driver assist systems were doing all sorts of massive complicated (potentially buggy) processing instead of "large object directly ahead ---> slow/stop." Folks gave all sorts of reasons why that wasn't good enough.
I'm going to reiterate here that I don't want anything more complicated than very simple forward collision prediction (resulting in slow down) aka "Automatic Emergency Braking" in any car I purchase.
Unpopular opinion: Anything that promotes (directly or indirectly) less driver engagement with driving should just be banned from civilian roads.
I'm sure one day when everyone is being driven by allsensing selfdriving vehicles road deaths will have massively decreased. Until then, technologist just will have to learn how to get there in much more controlled environment.
A legal waiver or T&Cs should not be the only thing 'protecting' the next self driving victim.
> Unpopular opinion: Anything that promotes (directly or indirectly) less driver engagement with driving should just be banned from civilian roads.
I take it you turn off your phone and the car radio when driving, and also tell all of your passengers they must also turn off their phones and maintain strict silence.
I see a lot of mention of neural networks in relation to Autopilot. However, it's hard to imagine that those pursuing automatic driving wouldn't combine neural networks with normal old rules and safety thresholds (i.e. "just brake" if a new, sufficiently large, mostly uniform object is filling much of the sensor field).
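Purely to illustrate that idea (a hedged sketch, not how any shipping system is structured), the rule layer only ever makes the network's command more conservative:

    from dataclasses import dataclass

    @dataclass
    class Command:
        target_speed_mps: float
        steering_rad: float

    def rule_layer(nn_cmd, obstacle_fraction, obstacle_is_new):
        """If a large, newly appeared blob fills much of the forward view, just brake."""
        if obstacle_is_new and obstacle_fraction > 0.3:
            return Command(target_speed_mps=0.0, steering_rad=nn_cmd.steering_rad)
        return nn_cmd

    cmd = rule_layer(Command(30.0, 0.0), obstacle_fraction=0.45, obstacle_is_new=True)
    print(cmd)  # Command(target_speed_mps=0.0, steering_rad=0.0) - the rule overrides the NN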
I thought most of these ADAS systems had some form of sensor fusion, combining multiple systems: stereo camera + front radar + long-range ultrasonic, etc.
This is a scenario that would be safely avoided with an entry-level AEB system, is there something special about Tesla's design that I am missing?
I think handling low-hanging bridges (without braking for them) is one of the scenarios suppliers specifically calibrate for with auto manufacturers. It's not rocket science; it's a tuning/calibration process for that specific vehicle's dimensions and layout.
Front radar calibration is not a new problem and is well understood in the auto industry.
These accidents really raise the question: assuming they do have a front radar, what is Tesla doing so differently that allows a competitor's entry-level AEB system to outperform them?
To me it seems that NHTSA, TUV, etc. should ban Tesla's auto-pilot on public roads until they can show evidence of their safety.
Why is it that auto-safety systems just can't handle objects in the road? That's all I personally need them for. I try not to drive in such a way as to be vulnerable to bad behavior from other cars, but it'd sure be nice to not plow into nearly-invisible semi-truck tires lying in the road at night.
It's hard to judge from the video, but the car doesn't appear to have been going very fast, and it looks like the side of the truck crumpled a bit to absorb the impact.
Airbag inflation is a violent, potentially injurious, last-ditch effort to prevent fatalities. If the airbags didn't inflate but the driver walked away from the accident, then not activating was probably the safer option.
Early days of airbags had fairly simple deployment requirements.
Current vehicles have a lot more, in terms of impingement, impact direction, instant and sustained G-forces, etc. When teaching EMTs we use airbag deployment as an indicator of potentially serious mechanism, but the reverse isn't true - you can't automatically dismiss an accident as minor because there was no deployment in an airbag-equipped vehicle.
That tweet and its replies note that the driver was uninjured and that the airbags might not have been required. For example:
> Looking at the HQ video, it looks like the impact speed wasn’t that bad and the crumpling of the truck decelerated the car nicely. The passenger compartment is fine.
Why don't they have 6 super cheap single beam lidars in some sort of pattern to detect this type of thing? It doesn't have to be a giant spinning sphere on the top of the car. Small moving prism in one axis?
If the car was already braking, and it hit a soft enough obstacle (in this case, the roof of the truck), the G force may not have been enough to trigger them, which would be correct behavior.
The report that air bags failed to deploy should be a huge red flag for NTSB. I hope they are sending a team to investigate. Airbag deployment failure should not happen in any modern vehicle.
Continuous laser telemetry should always be available and active; anything bouncing back at a certain speed and distance starts autonomous braking... No need to identify anything.
If autopilot created a situation in which a crash was inevitable, with no warning to the user. E.g. if it suddenly grabbed the steering wheel and swerved right, spinning the car out within a few hundred ms for no good reason.
Even if autopilot caused a crash such as above, you still have to run a cost benefit analysis of whether or not it prevented more crashes than it causes. Considering the failure rate of humans it's ok for autopilot to have a non-negligible failure rate, and it doesn't really matter if those failures happen in the same places as the human failures.
Hands on the steering wheel doesn't mean affirmative control. That's why the example includes putting the car in an unrecoverable scenario faster than an attentive person can react.
Well, Tesla puts disclaimers that their "assistant" demands the driver always be paying attention, so they would accept blame only if the system completely ignored correct user input.
I don't know of any instance where it completely ignored user input and Tesla agreed the user wasn't at fault.
No matter the name of the feature, I think it can only be blamed if it had totally removed control from the driver. This accident would've been avoided had the driver been paying attention.
Cars are heavy death machines, not apps, and I have zero sympathy for people who take them for granted.
460 comments right now. Lots of criticism of Autopilot and marketing. Five (very brief) mentions of Elon Musk.
I am not sure what to make of that. Maybe people feel that what he has achieved with SpaceX means he can get away with whatever he says/promises about Tesla.
Or maybe I am just hypercritical of him (with respect to Tesla only).
I think "autopilot" is a misnomer in this case; it should have been called a brainless driving assistant.
On a serious note, it's not that another edge case was missing from the training data; what we call "common sense" is fundamentally missing from the neural-network-based approach, and no amount of parameters, model permutations, and GPU power will compensate for that.
This isn't a neural network edge case/common sense issue; this is an issue with the radar sensors that Tesla is using for driver assist.
Specifically, radar is noisy — it will constantly detect non-existent "stationary" objects in front of the car. Driver assist systems therefore ignore any objects which are stationary on the road — if they didn't, the car would stop erratically.
>> Driver assist systems therefore ignore any objects which are stationary on the road
If true, that is totally unacceptable. Any mountain road can have fallen rocks. Any city road can have a child's toy be stationary in the road. And EVERY road can at any time have a live person, a fallen motorcyclist/bicyclist, lying crumpled up and stationary in the middle of the road. Stationary objects, whether on radar, camera or both cannot be ignored.
If the radar 'sees' such an object for more than a heartbeat or two, the car should take action. That's what any reasonable person does when they aren't sure what they are looking at.
It's even worse. Tesla maintains a whitelist of locations where problematic radar returns are caused by surrounding infrastructure. They blind themselves to the radar data in those zones. Pray you never get hit by a Tesla that has decided you're part of the background because their database says so.
Why do you say this? I'm at the wheel so I can always act. The driver assist makes this easier to do by helping out but I'm still here at the wheel and so I don't need perfection, I just appreciate the help.
People aren't machines and when something works fine 95+% of the time they'll get complacent and an uncommon event will eventually catch some people out.
Yeah there's a whole different level of automation between your car and what's being advertised by Tesla. EyeSight does do a bit of driving for you at higher speeds but it's not the stop and go capable 'Autopilot' of the Tesla.
You always see a giant stationary object in front of you - the road itself. The way you can tell the difference between the rock and the road itself is through additional resolution, reflectance and colouring information that is available at visible light wavelengths but not to radar.
So it's not a problem that can be solved through radar alone, you do need to combine that data with other sensors.
Yes, but when the radar sees a particularly reflective bit of road, suggesting something other than flat pavement, the least it can do is alert the driver. False alarms are an annoyance but a small price to pay. Even my tiny garmin dashcam has a basic collision alarm feature when it sees an object growing in size.
To add even more detail, as there still appears to be a lot of confusion:
The radar systems in these vehicles send out a radio pulse in a broad approximate-cone forward. They get bounces back from everything that reflects radio in front of them. Distance from the object is calculated by time between pulse and response. Speed towards/away from the object is calculated from Doppler shift of the radio frequency.
There are two main things that these systems can't detect.
1. Speed of the object perpendicular to the direction of radio wave travel.
2. Location of the object within the approximate-cone the radio pulse travels in.
Note that thanks to the second, you can't calculate the first with higher-level object tracking, either.
So the data you get back is a list of (same-direction velocity component, distance) pairs. There's no way to distinguish between stationary objects in the road and stationary objects above the road, to the side of the road, or even the surface of the road itself.
Radar just doesn't provide the directional information necessary to handle obstacle detection safely.
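To make that concrete, here is a toy version of the filtering logic being described (my own illustration, not any vendor's code). With only (radial speed, range) pairs, a world-stationary return is simply one closing at your own speed, and a parked truck in your lane looks exactly like an overhead gantry:

    def is_world_stationary(radial_speed_mps, ego_speed_mps, tol_mps=1.0):
        # A world-stationary object closes on us at exactly our own speed
        # (radial speed is negative when closing).
        return abs(radial_speed_mps + ego_speed_mps) < tol_mps

    ego_speed = 30.0  # m/s
    returns = [(-30.2, 80.0),   # parked truck dead ahead  -> looks stationary
               (-29.8, 60.0),   # overhead sign gantry     -> looks stationary
               (-5.0, 45.0)]    # slower car ahead         -> moving, kept

    tracked = [r for r in returns if not is_world_stationary(r[0], ego_speed)]
    print(tracked)  # [(-5.0, 45.0)] - both stationary returns get dropped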
Like a normal driver, if it sees something unexpected or something it doesn't understand, it needs to slow down and drive carefully. No driver steps on the gas when there's something in front of them that they don't understand.
I’d like to see this. Designing the behavior and UI presentation for cautious slowdowns would be an interesting challenge. The car would only be slowing in situations where the driver was comfortable (else the driver would already be asserting control). Believability would be a challenge, as would dissuading drivers from setting the speed higher to compensate.
Teslas have been marketed unsafely and promote unsafe behavior. They cannot see many classes of stationary objects, especially at high speed. And because of Dear Leader's over-reliance on "magic bullet" IP investment, they're not going to change how it works voluntarily.
The most important part for me is that the driver is uninjured. Sure, autopilot made a huge mistake, but Model 3 with Autopilot still has a very good safety record.
Are you suggesting that the car is intelligent enough to only plow into massive stationary objects if they'll crumple adequately so as to prevent driver injury?
A better argument would be about overall occupant safety vs driver-controlled vehicles per mile travelled.
I somehow agree with Musk that a self driving car should be able to work with only cameras, as humans do with their eyes.
If there are 2 cameras, depth can be calculated (e.g. two eyes), and Tesla is in the best position to achieve this (lots of data).
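The relation being appealed to is ordinary stereo triangulation (a toy sketch with made-up numbers, not Tesla's actual camera geometry): depth = focal length x baseline / disparity.

    def stereo_depth_m(focal_px, baseline_m, disparity_px):
        return focal_px * baseline_m / disparity_px

    # e.g. 1000 px focal length, cameras 30 cm apart, 6 px of disparity -> ~50 m away
    print(stereo_depth_m(1000, 0.30, 6))  # 50.0
    # At 5 px the estimate jumps to 60 m: depth error grows roughly with distance squared,
    # which is one reason ranging sensors keep coming up in these threads.

Whether that precision is good enough at highway stopping distances is exactly what's being argued about.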
Additional sensors make the system much more complex, and most of them have the same issue: they don't work in the worst weather conditions.
They should, however, take a hard look at why the system failed and how they can improve depth estimation with their cameras. A failure of the system in broad daylight is a severe fault.
And I'm sure they will improve this.
Until full autopilot arrives, which is still a long way off, Autopilot is driver assistance and drivers should stay alert.
The issue isn’t depth perception (someone with no functional depth perception can drive!) so much as a lack of, for want of a better term, common sense. AI is a bit of a misnomer; these cars are obviously not sentient, and don’t have the same abilities as a human. If self-driving cars are possible at all with current/near future tech (I’m dubious personally), they won’t be solely dependent on cameras.
Why not? Humans can drive in less than optimal situations with 2 eyes as sensors.
If the goal is to have a solution that can drive in the same situations as a human.
What sensor is missing then and why?
PS: My statement mostly questions LIDAR as an additional sensor; I'm not sure if AI/ML is good enough yet either, but we'll see. I'm pretty sure lidar isn't as important as many people think it is.
And for all Musk's faults, you can't deny he is correct in a lot of cases.
Almost everyone questions the lidar decision, but loses sight of the simplest question.
What's missing with only cameras, if humans can do it?
> What's missing with only cameras, if humans can do it?
The human brain - which can respond to novel situations in a reasonable way without prior training.
E.g. if I put you in a truck in the middle of a field, you’d be able to successfully drive around without being confused by tall grass. You’d also successfully avoid dangers such as cliffs/boulders even though you may have never done it before. If you can figure out how the human brain does that, we’d be one step closer to general AI.
>What's missing with only cameras, if humans can do it?
A human brain?
The debates about LIDAR etc. aren't that they're needed to pilot cars effectively on roads in the abstract but that they may be needed in the absence of a high level AGI to reach acceptable levels of functionality. And they may not be enough of course.
Technologically, there are things the machine can't do that humans can:
- HDR - human eyes do this really well
- Moving the camera. Humans will move their heads to perceive depth on things and to bypass obstacles. They will tilt, pitch, and yaw their heads to get views.
I think you might be right, but is the problem here really a lack of depth perception? Even a single camera is enough to show you a rapidly-growing rectangle in your field of view.
This seems similar to a different fatal Tesla crash in 2016, where the system for some reason couldn't detect a white truck against a bright sky. Maybe what they need is better dynamic range in their camera(s)?
They already have IR as an extra, but I think this was a severe error in the system, and it probably won't be their last.
I can't know what was wrong with this situation though.
It's all about tweaking the AI and the cameras. I hope they don't need to replace the cameras to add hardware features (HDR, tilting the camera, ...), but either way it's still a camera, whether it has a given feature or not.
Maybe to achieve better-than-human ability you need more than multiple eyes all around a car. But I do think fully autonomous driving can be achieved with only visual cameras.
Interesting argument. Why doesn't a plane fly like a bird? Why don't we have submarines that swim like fishes?
(for the sake of clarity, yes I'm being sarcastic to demonstrate how stupid that argument is. Humans have eyes but also got a brain that is way more powerful than any AI that we have today)
> yes I'm being sarcastic to demonstrate how stupid that argument is
What is stupid about the argument as it was framed? It should be possible to drive autonomously with only cameras. It may or may not be possible with our current AI and techniques. It also may or may not be much easier to pull off with LIDAR et al. IMO the jury is still out and will be for a while.
Remember that humans always seem vastly superior to AIs until all of a sudden they're vastly inferior.
Musk is planning for a solution that drives as well as the best driver in bad weather.
Adding LIDAR and other sensors is currently bullshit: I don't think there is proven hardware that can see through extremely bad weather.
But that is not what Tesla is doing. They want their solution to work in conditions where humans can drive (e.g. heavy snow, but not a snowstorm that blocks all visibility).
Because the technology isn't/wasn't there, and what we have is "good enough" as it is. But it doesn't compare to the efficiency of millions of years of evolution.
E.g. feathers for flying, plus a flexible bone structure and pressure-absorbing skin for water.
I will answer with a question: although we can dive and fly, wouldn't it be better if we adapted to nature's evolution? E.g. put on a feather suit, jump-start the engine to get off the ground, then glide to the destination and only let the engine kick in when we need to climb or land, riding the earth's natural wind and sparing fuel resources (going off topic here).
On topic: ML/AI should be able to datafy all the relevant information that human eyes perceive.
And those data points need to be handled for every situation that occurs. The article clearly shows that this is not the case yet.
While a brain has more adaptability, Tesla should be able to counteract this with more data from the vehicles already driving around.