Good discussion. The argument that, because humans can do it with two eyes, any approach using more than cameras is wrong frustrates me to no end. Technology and engineering aren't about imagining the minimum theoretical requirements; they're about building stuff that works with what is available. People can navigate cross country without GPS, but it's not wrong to put GPS in cars.
It seems like the teams working on autonomous vehicles need all the crutches they can get, and it seems like a good idea to lean on superhuman sensors to make up for subhuman "cognition."
I do wish, though, that people would stop equating all radar with the adaptive cruise control style radar - imaging radars are a thing that exist, and can be competitive with lidar.
It was 10pm. I was driving home in my Hyundai, but the tank was nearly empty. I probably couldn't have made it at full speed of 130 km/h, so I drove carefully at 80 km/h. Nothing eventful happened on the highway and I got off safely. Five minutes later I was roughly 100 m from home. I had to take a right turn uphill, which slowed my car to slightly less than 20 km/h, and then turn left to get to the parking lot. As you can guess it was dark, and I was primarily paying attention to the sidewalk because pedestrians are hard to see. As I turned left, suddenly BAM. A bicycle with a small, hard-to-see light crashed into the right rear door at possibly more than 30 km/h, as it was going downhill. The car door and the front of the bicycle were completely trashed. Luckily the cyclist didn't sustain any serious injuries. I was obviously protected in a huge cage of steel (compared to American ones it's still tiny), but I slowly started to hate driving, especially at night. All the bullshit speeding tickets, the annoying maintenance, and now accidents are adding up to a miserable experience.
If I had 360° night vision this wouldn't have happened. I can't trust my own two eyes. I wouldn't trust anyone's two eyes for that matter to drive a car safely. They weren't good enough last year and they won't be enough next year. Replicating humans will just replicate their weaknesses. I'll take all the crutches I can get.
Humans are not good drivers with two eyes. I constantly have to take my eyes off the road to check mirrors. While checking what is behind me, I have to hope nothing happens in front; while looking in front of me, I have to hope nothing from the sides or behind me matters. Of course most of the danger is in front of me, so I spend most of the time looking there, but there are times when things not in front of me are dangerous to me.
By giving a car more than two eyes it has the potential to become a better driver than me because it can watch more. By adding more than just visible-light cameras it can see things that I would never see. I want my next car to be safer, and the big limit is not seat belts, brakes, crumple zones, airbags, and such - the big limit is the limited human in control. There is potential to make some of the above obsolete if the control system gets good enough.
This isn't really the point; it's that humans are able to successfully build an accurate-enough model of the world to navigate terrain with only camera-like vision -- and really poor cameras at that.
There is, of course, no problem with giving cars more sensory abilities to make them safer but what they mean by a crutch is that the CS breakthrough that will make self-driving cars compete with humans will function about as well with two crappy cameras mounted on the drivers seat as it would with an array of high-precision sensors.
(tongue firmly in cheek) A materials science breakthrough could result in aircraft that flap wings and compete with condors too, but what would be the point? Naturally evolved mechanisms are not the gold standard; they developed under completely different constraints.
Pretty good vision, all told. Though the constant scanning motions of the eyes as opposed to fixed cameras mean they may have at least as much in common with LIDAR.
I read the justification for that 576MP number and it looks wildly optimistic. Sure, eyes can differentiate two lines close together... when they're at the center of vision. Then they multiply that by 120 degrees horizontal and vertical.
Unfortunately resolution of your eyes drops off quickly from the center of vision. Yes you can move your eyes to focus on different things, but so can cameras.
Sure the mental image you build is high resolution, but not that directly related to reality. A good example of this is the numerous optical illusions that depend on you looking at a point of an image, building a mental representation of that image, then finding a conflict whenever you move your eye.
I actually took the time to calculate this a while ago - basically, angular resolution drops off pretty much exactly as 1/(angular distance) from the center of focus, and the total angular size of the human visual field is only so big (and is measured). The actual answer is closer to 1 megapixel at any given time.
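For anyone who wants to sanity-check that, here's a rough back-of-envelope version of the same calculation in Python. The falloff model and the constants (foveal sampling of ~2 pixels per arcminute, acuity halving every couple of degrees of eccentricity, ~90° field radius) are my assumptions rather than measured values, so treat it as an order-of-magnitude sketch.

```python
import numpy as np

# Assumed constants (illustrative, not measured)
foveal_ppd = 120.0   # ~2 "pixels" per arcminute of foveal acuity
e0 = 2.0             # eccentricity (degrees) at which acuity has roughly halved
max_ecc = 90.0       # rough angular radius of the visual field, degrees

ecc = np.linspace(0.0, max_ecc, 100_000)    # eccentricity samples
ppd = foveal_ppd / (1.0 + ecc / e0)         # acuity falls off roughly as 1/eccentricity
density = ppd ** 2                          # "pixels" per square degree
ring_area = 2.0 * np.pi * ecc               # square degrees per unit eccentricity (flat approximation)
effective_pixels = np.sum(density * ring_area) * (ecc[1] - ecc[0])

print(f"~{effective_pixels / 1e6:.1f} effective megapixels at any instant")
```

With those assumptions it lands right around a megapixel, consistent with the estimate above.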
> with a computational engine behind them the likes of which humanity isn't even remotely close to replicating
Let's not forget about human experience. It gives context to any visual perception, and can't be replicated by cars. We simply have access to a larger, more complex environment, and the environment fosters our intelligence. It's easy to overemphasise the brain to the detriment of the environment, but in reality the environment is the main cause of the development of intelligence in the brain. The corollary is also true - a lower-complexity environment will lead to lesser intelligence, no matter how powerful the brain is. Intelligence springs up where an agent's needs meet external limitations, and those limitations guide its development.
By analogy, the environment is like the training set and the brain is like a deep neural net. A deep neural net with a poor training set is not going to be accurate. That's why DeepMind, OpenAI and others have focused so much on artificial environments (games) to train agents - they know that environments are the key to training advanced AI. A mere dataset is like a static environment where nothing new happens. Research is transitioning from datasets to simulated environments for the next step in AI. I'd even go as far as to track the evolution of AI by the evolution of environments and environment models.
Human experience dies with the person who acquired it, whereas machines can continue to learn indefinitely from a larger and larger pool of labeled data. Think about all the takeovers (when the AI fails and the driver takes over) that Tesla sees each day and that are made available to their training set.
As for the point about the training environment, a lot of what AI cars do is learn from simulations.
Books are one of the best things we have to transmit experience, but it's far from a 1:1 process. Two people can read the same book but take away diametrically opposed things due to their own experiences / expectations.
You can read all the books and notes about WW1 battles; you'll never come close to feeling what they felt in the trenches.
Sure. Those objections are just as valid for "how i felt last week", though. The past is a foreign country, even to those of us who were around for it.
This is a spectacularly misleading claim. Humans have a very small field of high-definition vision, and a wide range of low definition vision, easily exceeded by almost any camera system. Where humans excel is in the heuristics of taking marginal information and teasing out meaning (not always the correct meaning), though neural networks approximate that system.
The point stands though; we use our binocular vision to assemble a 3D representation of the world through neural networks. Why not skip some of the middle steps if we can design a system which already gets an accurate distance field as input?
Because you lose color and end up with a HUGELY lower amount of data. Lidar is quite expensive, so you end up with a few of them.
Tesla has something like 10 cameras running 1280x960 @ 30 fps or similar. With lidar's far fewer data points (and no color) you can't tell apart similar-looking vehicles. After all, snow plows, school buses, ambulances, bikes, motorcycles, police, etc. all have different behaviors. Some even signal other cars with spotlights, police red/blue lights, brake lights, blinking high beams, etc. All of that would be invisible to lidar and contribute to lidar-controlled cars acting less like human-driven cars.
Ideally the computational engine thrown behind the cameras isn't also thinking about what could be wrong with my shower's thermostat.
Biology got to be just good enough to make it to tomorrow. Photosynthesis is one of the oldest mechanisms around, right? A quick google says that turning sunlight into energy tops out at about 6% efficiency; now look at solar panels: https://upload.wikimedia.org/wikipedia/commons/5/5d/PVeff%28...
Now, biology and electronics are topics vastly outside of my ability to comment on. I'm just trying to make the point that we can't be the best it can get, surely? We long ago left the time when industrial progress meant "pick one thing and do it well".
I don't think it's so strange, it explains very effectively that driving a vehicle has very little to do with the optics and sensors because humans can drive with a crappy camera, limited fov, and two small super inaccurate mirrors.
It's all about the data processing and the breakthrough that will enable human-ish self-driving cars will be there and not in more accurate sensors.
Yep, the resulting rich details simplify stereo matching a lot. Instead of a complicated algorithm, our brain can just run a simple displacement match in the extremely parallel fashion that comes naturally to it.
>The idea that you can throw a cheap camera in a car and think you can achieve the same always seemed strange to me.
It isn't strange, it is just a bit early. Running 1MP stereo on a single-core P4 in 2004 was just pitiful. These days a 20MP sensor costs under $30 and 16-32 cores have a much better time with it (using a GPU is much cheaper and several times faster - it is just that I have these workstations around with good CPUs and no GPU to speak of. Tesla, for example, does run a powerful GPU for their cameras.) So, scroll forward another 15 years, and we'll have nice stereo with something like 200MP sensors. I honestly don't see how lidar can do even just 4MP at a minimally usable 30 FPS in any foreseeable future (as probing at 150m is 1 microsecond per pixel).
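To make that last parenthetical concrete, here is the round-trip arithmetic as a quick Python sketch. The 4MP / 30 FPS target and the idea of counting parallel laser channels are just my framing of the comment above, not a spec for any real unit.

```python
# Speed-of-light budget behind "probing at 150 m is 1 microsecond per pixel".
C = 299_792_458.0                        # speed of light, m/s
range_m = 150.0
round_trip_s = 2 * range_m / C           # time one pulse needs to come back
max_points_per_channel = 1.0 / round_trip_s      # upper bound per laser channel

target_pixels = 4_000_000                # a "4MP" depth image
target_fps = 30
needed_points_per_sec = target_pixels * target_fps

channels_needed = needed_points_per_sec / max_points_per_channel
print(f"round trip: {round_trip_s * 1e6:.2f} us -> ~{max_points_per_channel:,.0f} points/s per channel")
print(f"4MP @ 30FPS needs {needed_points_per_sec:,} points/s = ~{channels_needed:.0f} parallel channels")
```

That works out to roughly 120 simultaneous laser channels just to hit that frame rate at that range, which is the crux of the scaling objection.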
Depth of field is adjustable for all by resizing the aperture (a.k.a. the "eyelid" mechanism). Try a higher f-stop ("squint") if you're having trouble with it.
However, the focal point with respect to the sensor is indeed miscalibrated in some units, and good units can become miscalibrated over time. There are some after-market correction devices available.
Higher f-stops are especially helpful for astigmatism correction. Overuse of the increased non-autonomic f-stop has been associated, however, with distress signals being generated by nearby actuators, resulting in a feedback loop of reduced f-stop to decrease said distress signals.
>The arguments that because humans can do it with two eyes any approach using more than cameras is wrong frustrate me to no end. Technology and engineering isn't about imagining the minimum theoretical requirements, it's about building stuff that works with what is available. People can navigate cross country without gps, but it's not wrong to put gps in cars.
At least as important, and something that comes up shockingly little as well (I don't see any responses to you yet on it here even!), is that overall humans stink at driving. Like you said, arguments begin with "because humans can do it" and need to be stopped right there, because that needs a lot of qualification. "Humans can do it"... with well over 1.25 million deaths a year worldwide [1] and another 20-50 million injured or disabled. These stats are actually worse than they look at first glance, because it's not a uniform age distribution: a lot of young people die on the roads relative to other causes of death, which represents an even bigger loss in terms of human years.
Personalized arbitrary point to point mechanized transportation is so insanely valuable that it's well worth that kind of death/injury rate, or for that matter one vastly higher (as it was in previous decades before modern safety standards), but there is also no reason to use it as a benchmark for "acceptable" if the opportunity exists to do better. And in particular there is no reason to limit autonomous systems to human visual spectrum for information input. That doesn't cut it for a lot of adverse weather conditions, or even just for humans along the road or animals jumping across the road at night for that matter. A big part of the pitch shouldn't be to match humans, it should be to exceed them and by a sizable margin as well.
There's no guarantee that computers will do better. We don't have the data. They currently do worse.
Some people claim that they won't make the same mistake twice, but how many times have you seen the same bug crop up after it was fixed in previous updates?
I would also posit that the existing statistics provide a "good enough" floor. If a million people currently die, the software will be good enough when it has the same level of fatalities. There is no incentive to make it better, at least not in our Capitalistic system. Investments would be better spent on differentiating widgets that dazzle the occupants, not protecting non-customers outside the vehicle.
There are numerous new and different opportunities that emerge which create different dynamics. Insurance companies might want to invest in tech that reduces accidents because they can lower their premiums and be more competitive, for example (which btw, would be a side effect of a "capitalistic" system)
>There's no guarantee that computers will do better.
This seems like a fallacious argument from incredulity. There is no reason to believe that humans are the ultimate system for piloting multi-ton land vehicles at 20-100+ miles an hour, and there is plenty of reason to believe otherwise. For computers, there will be bugs and edge cases, but there are plenty of examples of high-assurance, low-bug code in areas where it matters. Additionally, any experience gained can be applied to the entire fleet at once. Computers have fundamentally better response times. And sensor input absolutely can be better: a self-driving vehicle need not have any blind spots in 360°, and can "watch" all of that at once. No human will ever handle animals at night the way a vehicle with a FLIR could.
Meanwhile we know for sure that there are whole classes of issues that simply do not exist for computers. They do not get tired. They do not use drugs. They do not get distracted by the passenger or by texting (humans are awful at multi-tasking). They do not "have a bad day". They do not get angry at that "jerk" in the other lane. They will know all the rules of the road and not forget them. Perhaps most importantly of all, for all these things there will not be variability within a fleet. Sure, there are very good human drivers, but there are also endless awful ones, and every single person must be trained anew and go through a high-risk phase while they gain experience. Not so with a computer. Consistency and attention to boring details and checklists, every single time, has proven a key driver of health & safety improvements across many industries. And all of that covers just a few of the first-order effects; there will be second-order ones too, like much less incentive to put off maintenance when the car can drive itself to the service center and another can take its place.
Sure there will no doubt be more obstacles along the way and it'll take a lot of time and effort. There may well be high profile screw ups, it's still humans doing the developing. But you've got a real steep hill in my opinion if you can look at all that and the casualty stats and still want to argue that computers won't ultimately be better.
>There is no incentive to make it better, at least not in our Capitalistic system
This is just utter bullshit though. Our capitalist system includes cost internalization in many cases, and cars are one of them. It's called insurance, and since you're required to carry it by law, the fact that it didn't occur to you raises real questions in my mind about the basis of your opinions. Unlike many fields, in cars improvements in safety have a direct financial bottom line. Furthermore, self-driving will most likely enhance that, not detract from it, because it makes possible a model where people give up direct ownership and buy shares instead.
Programs are written by fallible humans. There is no self-learning AI. Maybe someday there will be, but until then I won't be surprised to see self driving cars make the same mistakes over and over again.
How long will it take to update the software? How long will you get updates? If a weird weather pattern occurs, will every self-driving car crash on the same day?
You also say lidar isn't a crutch because self-driving cars aren't ever really going to be able to go by image recognition and enough logic to know what to do - rather, they're going to go by complete plotting of the traffic grid, with lidar there for object avoidance. This is Google's model and they are furthest along - and set up in Arizona, where the unpredictable is kept to a minimum.
> rather, they're going to go by the complete plotting of the traffic grid with lidar being there for object-avoidance. This is Google's model and they are furthest along
Indeed, and that's how autopilot-driven buses, which have been around for at least a decade, work. Except their movement trajectories are predetermined, they are very slow, and they stop at every obstacle imaginable.
>I do wish, though, that people would stop equating all radar with the adaptive cruise control style radar - imaging radars are a thing that exist, and can be competitive with lidar.
Notably however, the radars employed by current Tesla cars lack the angular resolution to distinguish between stationary objects next to the road, and stationary objects in the middle of the road.
I think the real point is that LIDAR has its own shortcomings. Currently the range is too limited, it's sensitive to fog and rain (more so than cameras), and it's very expensive. Note that Teslas, for example, don't only use cameras; they do have radar and use it to "see" two cars ahead even now. Still, they don't use LIDAR, and Musk's thesis is that this particular tech isn't needed and is overly expensive and cumbersome.
The radar in tesla vehicles is the type intended for use in adaptive cruise control. Extremely low angular resolution with a filter that ignores objects that are close to stationary.
This is why Teslas keep running into stationary cars. The radar in them is not designed for this, and can't tell the difference between a fire truck in the lane and the barrier to the side of the lane.
I feel like I have been somewhere in the middle on this. I feel like to reach level 5 autonomy, the vehicles are going to have to have really good image processing from conventional cameras. In fact, they might need it to be so good that the lidars themselves become redundant.
But there are a lot of possibilities between zero autonomy and level 5 autonomy that might almost require lidar too in the short term.
Isn't this the best part of capitalism? Someone makes a lightbulb that's super inefficient but gets everyone thinking about the brilliance of candleless light at night. We then spend generations evolving the concept hundreds of times, with a market that rewards these innovations through sales.
Is there a better way to build a better autonomous car? We're at the initial stage of exciting people with an implementation that kind of works. We will now spend generations selling better and better cars with different, fewer, better, cheaper sensors and other parts.
I'm not sure I agree with your analogy between current "autonomous" vehicles and an inefficient lightbulb. I'd say it's more like a lightbulb that flickers and burns out in 5 seconds. In which case it probably makes more sense to make the lightbulb not burn out first.
I don't think Edison (and his employees) got a momentary flash of light from a lightbulb and then said, "alright, it's almost working, now let's see if we can do it without the fragile glass bulb."
Your intuition is correct. Practical electric lights, in the form of arc lamps, existed before Edison was even born and were successfully commercialized many years before the incandescent light bulb.
Effective incandescent light bulbs were also around before Edison's. The main things Edison's team brought to the table were a better filament material, a better vacuum in the bulb, and supporting electrical infrastructure (generators, etc.)
It usually takes some time from the point "we have done all the basic research" to something that is successful in the market. Today, autonomous driving seems still to be in the basic research stage. If there was a research car, filled with ridiculously expensive hardware, able to perform at human level, I'd be more optimistic about the introduction of a marketable self driving car ten years from now. Take the mouse + graphical GUI example. It was demonstrated in the late 1960s but became only economically viable in the mid 1980s.
>It seems like the teams working on autonomous vehicles need all the crutches they can get, and it seems like a good idea to lean on superhuman sensors to make up for subhuman "cognition."
In a perfect world, yes. But cost always comes into the equation. Companies like comma.ai and Nexar are getting their devices into the hands of hundreds of thousands of people at this point. When you can get that level of training and autonomy from a simple camera-based device it becomes a really compelling product, even though you're nowhere near full Level 4 self-driving.
And neither is Tesla. Anybody operating either of these systems without paying attention to the road and being prepared to control the wheel at all times should have their drivers license confiscated.
Warning: Autosteer is a hands-on feature.
You must keep your hands on the steering wheel at all times.
Warning: Autosteer is intended for use only on highways and limited-
access roads with a fully attentive driver. When using
Autosteer, hold the steering wheel and be mindful of road
conditions and surrounding traffic.*
Warning: Autosteer is not designed to, and will not, steer Model 3
around objects partially or completely in the driving
lane. Always watch the road in front of you and stay prepared
to take appropriate action. It is the driver's responsibility
to be in control of Model 3 at all times.
It will stop for objects in front of you. Agree though that it's a nice assist, but as you get used to it, you can predict the situations where it will fail. Definitely would not trust it to drive in an unconstrained environment.
Warning: Traffic-Aware Cruise Control cannot detect all objects and,
especially in situations when you are driving over 50 mph
(80 km/h), may not brake/decelerate when a vehicle or object
is only partially in the driving lane or when a vehicle you
are following moves out of your driving path and a
stationary or slow-moving vehicle or object is in front of
you. Always pay attention to the road ahead and stay
prepared to take immediate corrective action. Depending on
Traffic-Aware Cruise Control to avoid a collision can result
in serious injury or death. In addition, Traffic-Aware
Cruise Control may react to vehicles or objects that either
do not exist or are not in the lane of travel, causing Model
3 to slow down unnecessarily or inappropriately.
Warning: Several factors can affect the performance of Automatic
Emergency Braking, causing either no braking or inappropriate
or untimely braking, such as when a vehicle is partially in
the path of travel or there is road debris. It is the
driver’s responsibility to drive safely and remain in control
of the vehicle at all times. Never depend on Automatic
Emergency Braking to avoid or reduce the impact of a collision.
The user manual tells a different tale than Elon Musk does (https://amp.businessinsider.com/images/5c0e83e40fefb34c4e3b4...) Elon Musk gravely misrepresents the capabilities of these systems to impress consumers, who are left with the mistaken impression that Tesla sells cars with Level 3 capabilities.
Mobile-readable versions of the owner's manual quotes from both your messages (don't use code formatting for block quotes):
Warning: Autosteer is a hands-on feature. You must keep your hands on the steering wheel at all times.
Warning: Autosteer is intended for use only on highways and limited-access roads with a fully attentive driver. When using Autosteer, hold the steering wheel and be mindful of road conditions and surrounding traffic.
Warning: Autosteer is not designed to, and will not, steer Model 3 around objects partially or completely in the driving lane. Always watch the road in front of you and stay prepared to take appropriate action. It is the driver's responsibility to be in control of Model 3 at all times.
Warning: Traffic-Aware Cruise Control cannot detect all objects and, especially in situations when you are driving over 50 mph (80 km/h), may not brake/decelerate when a vehicle or object is only partially in the driving lane or when a vehicle you are following moves out of your driving path and a stationary or slow-moving vehicle or object is in front of you. Always pay attention to the road ahead and stay prepared to take immediate corrective action. Depending on Traffic-Aware Cruise Control to avoid a collision can result in serious injury or death. In addition, Traffic-Aware Cruise Control may react to vehicles or objects that either do not exist or are not in the lane of travel, causing Model 3 to slow down unnecessarily or inappropriately.
Warning: Several factors can affect the performance of Automatic Emergency Braking, causing either no braking or inappropriate or untimely braking, such as when a vehicle is partially in the path of travel or there is road debris. It is the driver’s responsibility to drive safely and remain in control of the vehicle at all times. Never depend on Automatic Emergency Braking to avoid or reduce the impact of a collision.
I'm sure this is being considered, but I'm curious what the strategy is for handling signal/laser interference in dense traffic situations, in the future when all cars have 3d lidar installed.
For example, assuming 3 lidar units on the front of each car, in a large intersection, say two lanes of cars turning in front of 4 lanes of cars pointed at them, waiting at a red light. That would be hundreds of potential reflection/collisions per second.
The user manual for an older Hokuyo lidar with a single scanning plane (for example, mounted on a forklift to avoid hitting a wall) mentions to offset the mounting/angle slightly across the fleet to avoid interference.
current state of the art LIDAR systems have unbelievably tight windows in terms of space, time, frequency, and pulse coding which collectively allow for excellent background rejection. In practice, this actually isn't that much of an issue. Sunlight and retro reflectors are more difficult challenges to deal with than other LIDARS causing interference.
More specifically, each detector is looking through a soda straw at a tiny spot in space, and rejecting anything that doesn't arrive within a narrow range of time, which also has to have the right light frequency and something like a pseudorandom code sequence which gets match filtered against itself.
In practice, you'd have to send the right wavelength of light to the right 1cm circle in space, send it during just the right few hundred nanoseconds, send it with the right pulse sequence (which one could determine but would be unlikely to happen by chance). Even if it happens once, it would have to happen many times at the imaging rate of the sensor to cause major issues...
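Here's a toy illustration of that match-filtering idea in Python - not any vendor's actual pipeline, just the principle that a return carrying someone else's pseudorandom code barely correlates with your own, while your own echo produces a sharp peak. The code length, delays, and amplitudes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up parameters: a 127-chip pseudorandom code for "our" lidar and a
# different one for an interfering unit.
code_len = 127
our_code = rng.choice([-1.0, 1.0], size=code_len)
other_code = rng.choice([-1.0, 1.0], size=code_len)

# Simulated receiver samples: our echo starting at sample 40, a stronger
# interfering echo at sample 90, plus background noise.
rx = np.zeros(400)
rx[40:40 + code_len] += 0.8 * our_code
rx[90:90 + code_len] += 1.5 * other_code
rx += 0.3 * rng.standard_normal(rx.size)

# Matched filter: correlate the received samples against our own code.
corr = np.correlate(rx, our_code, mode="valid")
print("strongest correlation at sample", int(np.argmax(corr)))   # ~40: our echo
# The interferer at sample 90 is stronger in amplitude but carries the wrong
# code, so it produces no comparable correlation peak.
```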
Is there an equivalent to the Skolnik books[1] for LIDAR? I keep applying radar principles to LIDAR, but it seems like the function is significantly different if it includes things like a "pseudorandom code sequence".
AFAIK lidar uses a variant of the well-known radar technique of pseudorandom noise modulation to avoid some of the gotchas with fixed-frequency pulses.
Skolnik's books are good but they are pretty old and focus mostly on the hardware/transmission of radar. The code sequencing is on the baseband signal processing side and is a direct result of IQ modulation being a thing in vector signal processing. Lots of modern radar designs use the same style of pseudo-random encoding to prevent interference between units. It's a major push in automotive radar right now (70GHz band) for instance.
Think of it as encoding your mac address on each packet you transmit wirelessly so you can tell which signals come from which source.
I don't recall such a scheme being used on either system I worked on. However, skimming the paper in your sibling's post it would be something done in the RF front end, which is probably why I never encountered it. The closest I got to the front end was pseudo-raw IQ from the pulses. It was also probably tied up in the anti-jamming measures, so it wouldn't get wide circulation on the programs.
Yes it's predominately a base band technique, although it has also been written about a fair bit in literature as an efficient power spreading alternative to a classic chirp. In that case, it is predominantly done to help propagation leveraging the extensive work done in communications on "noise-like" signals.
In the case of Automotive radar, using it as an identifier to prevent interference is appealing because it allows each radar to use the full 6 GHz of channel bandwidth which greatly improves the performance of the radar despite the fact that there are tons of same channel independent devices operating. Each one would effectively be a jammer otherwise.
I've always wondered this about LIDAR. It's all peachy with one car wandering down the road on its own, but a busy freeway with half a dozen cars all busily scanning everything is a different situation. Not to mention issues with sun glare dazzling sensors etc. Nobody talks about it so maybe it's not an issue in practice, but my own experience with sensors of all sorts is that they're never as good as you'd hope they are.
There's a trivial solution to this kind of problem: frequency filtering. Essentially, you make sure that your light source beats at a particular frequency, and then you only look for signals that beat with the same frequency. So you're only competing with LIDARs beating at the same frequency. If you randomly select this frequency, you can have a lot of LIDARs operating simultaneously without affecting one another.
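A quick numerical sketch of that idea (lock-in style demodulation at your own randomly chosen modulation tone); all frequencies and amplitudes here are illustrative values I picked, not anything a real lidar uses:

```python
import numpy as np

rng = np.random.default_rng(1)

fs = 1_000_000                         # sample rate, Hz
t = np.arange(0, 0.01, 1 / fs)         # 10 ms of receiver samples
f_ours, f_theirs = 83_000, 61_000      # randomly assigned modulation tones (made up)

our_echo = 0.5 * np.sin(2 * np.pi * f_ours * t)
their_echo = 2.0 * np.sin(2 * np.pi * f_theirs * t)    # stronger interfering lidar
rx = our_echo + their_echo + 0.2 * rng.standard_normal(t.size)

# Lock-in detection: mix with our reference tone and average (low-pass filter).
i_part = np.mean(rx * np.sin(2 * np.pi * f_ours * t))
q_part = np.mean(rx * np.cos(2 * np.pi * f_ours * t))
recovered = 2 * np.hypot(i_part, q_part)

print(f"recovered amplitude ~= {recovered:.2f}")   # ~0.5: the stronger interferer averages out
```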
That Darpa reference in the article is weak. Because in 2007 the state of the art needed Lidar, that means that it's still a necessary crutch in 2019? Image recognition has made unbelievable strides in the interim.
As I watch Tesla's "Autopilot" do its thing, it seems its limitations are less about object recognition and more about what do with the data it has. Lidar won't help it intelligently handle a merge of two lanes into one, or seeing a turn signal and slowing down to let the person in, or seeing a red light and knowing to start slowing down before it recognizes the car in front of it slowing down, or knowing exactly what lane to enter on the other side of a tricky intersection without a car in front of it, or knowing that car X is actually turning right so you shouldn't start following it if you're going straight, or having the car see a clear path ahead and having it accelerate to the speed limit when all other cars are stopped or going much slower in adjacent lanes, or moving to the left to allow a motorcycle to lane split, etc., etc. It's still great, but there are a ton of things to learn.
Maybe once there is no human in the driver's seat, you'll need the extra precision that lidar provides, but there are big gaps before we even get there.
A lidar sensor will almost certainly _always_ be lower latency than any sort of image processing engine.
A lidar gives you a stream of 3D points on the order of megavertices a second.
The processing pipeline for any visual system adds at least one frame period of the camera (the faster the camera, the less light it can gather) plus the GPU transfer time (if you are using AI), then processing.
This means you are looking at ~200ms latency before you know where things are.
Lidar is a brilliant sensor. Maybe it will be supplanted by some sexy beamforming Thz radar, but not in the near future.
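For what it's worth, here's a rough latency budget that gets you into that ballpark. Every number in it is an assumption I'm making for illustration (frame period, PCIe copy, inference, tracking, handoff), not a measurement of any real stack:

```python
# All numbers below are assumptions for illustration, not measurements.
camera_fps = 30
frame_period = 1.0 / camera_fps     # ~33 ms: a full frame has to exist before processing
gpu_transfer = 0.010                # ~10 ms: copy the frame to the GPU
detection = 0.050                   # ~50 ms: object detection inference
depth_and_tracking = 0.060          # ~60 ms: stereo depth / association / filtering
handoff = 0.030                     # ~30 ms: publish results to planning

camera_latency = frame_period + gpu_transfer + detection + depth_and_tracking + handoff
print(f"camera pipeline: ~{camera_latency * 1000:.0f} ms")

# A scanning lidar emits a usable 3D point about a microsecond after the pulse
# (at 150 m), so its latency is dominated by the sweep period instead.
lidar_sweep_period = 0.100          # e.g. a 10 Hz rotation
print(f"lidar sweep period: ~{lidar_sweep_period * 1000:.0f} ms")
```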
Cameras built using dedicated hardware similar to what lidar systems are using would not have that sort of latency. You don't have to receive a full frame of data before it can be processed if you're not using generic COTS cameras.
Sure, you can have the thing spitting out scan lines as fast as you like; that's not the issue.
You are limited by shutter speed, as you know, even if the shutter isn't global (lidar is too, but we'll get to that in a bit).
Any kind of object/SLAM/3D-recovery system will almost certainly use descriptors. Something like ORB/SURF/SIFT/others requires a whole bunch of pixels before it can actually work.
Only once you have feature detection can you extract 3D information (either stereo or monocular).
A datum from a lidar has intrinsic value, as it's a 3D point. Much more importantly, it doesn't drift; even the best SLAM drifts horribly.
Lidar will be superior for at least 5 years, with major investment possibly 10.
> Lidar won't help it intelligently handle a merge of two lanes into one, or seeing a turn signal and slowing down to let the person in, or seeing a red light and knowing to start slowing down before it recognizes the car in front of it slowing down, ....
It certainly seems like it's still needed. Some of the high profile Tesla crashes seem to be caused by the camera system not properly recognizing the side/back of a truck or the divider on an off ramp and plowing right into them where lidar probably would have.
Todd Neff is the author of "The Laser That’s Changing the World", a history of lidar. It's a PR piece for his book.
The terahertz radar people are slowly making progress. Good resolution, almost as good as LIDAR. Better behavior in rain. Beam steering with no moving parts. Plus you can tell who's carrying a gun.
Musk is in trouble. No way is Tesla close to real automatic driving. Does Tesla even have reliable detection of stationary obstacles in a lane yet? Their track record on partial lane obstructions is terrible. So far, Teslas on autopilot have driven into a street sweeper, a stalled van, a fire truck, a crossing semitrailer, a lane divider....
What's wrong with hyperspectral imaging? Active IR systems work just fine at night. Passive night vision systems aren't exactly cheap but can probably allow optical navigation using ambient starlight. Throw thermal imaging in the mix and you have a ton of data to infer objects from.
Mr. Musk is wrong, and behind his error is, perhaps, his lack of technical aptitude.
> Musk and almost everyone else in the business recognize self-driving cars and trucks as the future of automotive transportation.
Those people, in their vast majority, think that some kind of "AI" program will somehow think over the inputs and tell the car where to drive. They all have too many expectations for the technology coming under the word "AI" these days.
They all do so without understanding what those "AIs" actually are, and without realising that such programs can't perform any "cognition."
Until that changes, any talk of "self-driving" is premature. This does not preclude, however, the coming improvements in cruise control and computerised collision avoidance.
I am, and he did not strike me as an especially smart person. The man began his career as a so-so game dev, and went from one random business to another (this is a part of his career he doesn't like to be public about) before PayPal "happened upon him."
He has no substantial CS or technical background. He dropped out of his physics masters.
Also, he has a startup on AI: https://en.m.wikipedia.org/wiki/OpenAI even if you think he's not proficient in technical aspects of AI, he definitely has a lot of ideas about social implications of AI.
He left in 1992 to study business and physics at the University of Pennsylvania, and graduated with an undergraduate degree in economics and stayed for a second bachelor's degree in physics. After leaving Penn, Elon Musk headed to Stanford University in California to pursue a PhD in energy physics.
Why do you believe that? Which is to say, is this something you've worked out with numbers, or is it a reflexive intuitive judgement? Because if it's the latter, you should recognize that your intuition is based on prices that include the labor of the person driving you, which won't be included in a self-driving car.
Brad Templeton did some math and thinks that self driving cars will be cheaper on a per-mile basis if you sometimes take smaller cars when you need to.
Why not? Suppose your commute occupies the car for 1/24th of the hours of the day. In theory you could pay for only that usage.
There would be some markup so the owning company has a profit margin, but the higher the price is, the greater the incentive for competitors to exist, which would exert downward pressure. So maybe you'd end up paying 1/20 the price of owning a car.
Eventually you will own the car and the costs will decrease dramatically. My wife's Corolla is 14 years old, gets 30 miles a gallon, and costs about $250 a year due to failing sensors and a MAF replacement.
If I used Uber every time I’ve used her corolla in the past 10 years it would far outweigh the cost of gas, insurance, and fixing the car.
You’ve completely left out the cost of the car though. Even over a 14 year period if you bought the car new for $15k (2005 MSRP Corolla LE) you’re still at roughly $90/month. Factor in insurance, gas, and continued maintenance and you’re at a couple hundred a month. Of course that number continues to fall but eventually you buy a new car. If ride sharing moves to a subscription model of just a straight $200-300/mo I think it completely obliterates the market for individually owned small cars used primarily for commuting.
I can take a 1.5-mile Uber to Waffle House and it costs me $7-8 each way. The Uber to Waffle House costs more than my meal. I give it as an example because I've done it often. Almost the same distance to Publix, Lowe's, more restaurants, and public transport.
Uber pool is cheaper but why would I inconvenience myself so much for a 2 miles radius.
No I don't agree. When I go to work in the morning there is nobody at work who wants to go elsewhere and there won't be until lunch time. Either the shared car is burning fuel driving empty to get back out to where people live who want to get to work, or it is sitting in a parking lot. The first makes the shared car more expensive, the second is just what we have today so I may as well own the car and get the convenience of being able to store things in the car (golf clubs for example).
Even in the best case, there are lot more people who want to get around during "rush hour" than the rest of the day, so most shared cars could at most work for 2-3 people each day. The types of people who are on the road during not rush hour are more likely to have a larger group (kids) with them so the shared car needed for them is a different size.
So that might be true for people who commute from a relatively remote location to another remote location. For someone like me who lives in an urban environment, autonomous ride sharing would absolutely be more efficient than owning a car.
Most of the cost of operating a car is directly proportional to the usage of said car. Some costs per unit distance go down, some go up. In the end, if you need a car daily there is almost no way a service will work out cheaper.
What would the monthly subscription have to cost to make it financially worthwhile? I drive a bought and paid for car and if they could move me to and from work for $200/month our household would happily become a one car household. That bought and paid for car still costs me $75/mo insurance, ~$40/mo gas, amortization of the purchase price is probably in the $150/mo range, and since it’s 8 years old increasing maintenance costs. Additionally it will eventually need to be replaced which costs many thousands.
I don’t think a subscription model is too far fetched at all. We definitely intend to at least consider being a one car household, even at current ride share prices, when the current secondary car reaches end of life.
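Plugging the numbers from the previous paragraph into a quick sketch (the maintenance figure and the $200/month subscription are hypothetical placeholders on my part, not real quotes from any service):

```python
# Monthly cost sketch with the figures mentioned above; the maintenance number
# and the $200 subscription price are hypothetical placeholders.
insurance = 75        # $/month
gas = 40              # $/month
amortization = 150    # $/month, purchase price spread over ownership
maintenance = 35      # $/month, assumed for an 8-year-old car

owning = insurance + gas + amortization + maintenance
subscription = 200    # hypothetical commute-subscription price, $/month

print(f"owning:       ~${owning}/month")
print(f"subscription: ~${subscription}/month")
print("subscription wins" if subscription < owning else "owning wins")
```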
I don't think most people think about it like you do. Americans tend to spend far more on cars than they need to. An awful lot of SUVs and expensive German cars are sold to people that could probably get by with an inexpensive sedan or hatchback.
I do live in Texas though, so my experience might not be typical.
And in many places, commuting is precisely the time when shared car-sharing would work best. It's more flexible than carpools, but a lot of people are headed from general-residential-areas to general-work-areas at generally-the-same-time (often with a little flexibility).
Tesla cars, with the sensor packages they currently have, do not have a path forward to Level 5, despite the marketing claims of Tesla. Claiming that their cars have all the hardware required for Level 5 is a flagrant lie, which they can get away with for now because they've given themselves plenty of outs to avoid ever having to deliver Level 5 capabilities (when they fail they can blame inadequate software or an unsuitable regulatory environment.)
A word on that software: As it currently stands, they are delivering software that enables Level 2 capabilities. This is sometimes called "hands on", as in your hands should remain on the steering wheel and your eyes on the road, ready to take control in an instant. According to Tesla, drivers are to keep their hands on the wheel and pay attention to the road; if the driver fails to do so then they are at fault for any accident. However Elon Musk contradicts this company policy and has promoted the system as hands off on national television. Why would he do something so irresponsible? Because misrepresenting the hardware and software capabilities of his cars helps him sell cars. He knows the hype for self driving cars is at a fever pitch, and stretching the truth helps him profit from that hype.
Well no one knows how to get to level 5 since it doesn't exist yet. But there doesn't seem to be any theoretical reason why maps+vision+radar can't scale to level 5.
Their radar doesn't have the angular resolution to do anything other than adaptive cruise control, and while they boast 360˚ camera coverage, whatever stereoscopic data they are getting from those cameras is evidently insufficient to detect a large firetruck parked in the middle of the street.
Now consider MobilEye, a subsidiary of Intel and a major player in the field of camera/radar driver assist technology. Tesla was using MobilEye tech, until MobilEye terminated that relationship because they believed Tesla was being irresponsible with how far they were pushing MobilEye's tech. MobilEye had a financial incentive to see Tesla succeed with a camera/radar only solution, and continues to have a financial incentive to downplay the necessity of LIDAR. But do they? No. Instead you've got MobilEye's CEO Amnon Shashua talking about the virtues of a combined LIDAR/radar and camera/radar solution, while trash talking Elon Musk for being irresponsible.
When you consider who is saying what and what their financial incentives are, it becomes clear that Amnon Shashua is an ethical person and Elon is a car salesman who is making technically unfalsifiable claims about the capabilities of the hardware in Tesla cars to profit from automation hype.
But you don't know this for sure since you can't use any of these cars, right?
The point being Tesla has actually shipped a product that's pretty good, and therefore, is currently the best in the market. Sure maybe Waymo's system is better but since I can't use a Waymo I don't think we should be counting it just yet.
I've not tried them all, in fact I've only driven a Tesla on autopilot for a few 100 miles.
However, with 1 billion miles driven on Autopilot and (IMO) relatively few problems, it seems like it must be pretty good. I hear about accidents involving Uber, Waymo, and the like just about as often as Tesla, despite Tesla shipping 5,000 cars a week or more. Granted, only a fraction of those are likely to have the $5,000 Enhanced Autopilot; still, that's radically more miles driven than anyone else.
Well, all of them are still in extensive testing with bug reports from small-event disengagement, not in customer bought cars involved in fatal crashes.
I'm interested in what anybody has to say about this. Tesla has been reckless in their release and marketing of their 'Autopilot' software, which has led to fatal crashes. Other companies (even Uber) are more responsible than that. The Uber fatal accident happened in a test vehicle, not a customer-bought car. I'm not saying what Tesla has done in the self-driving arena is trivial (it is not). But that does not mean their conduct is responsible.
Except for the lidar equipped uber that killed a woman you mean?
Sure, watching a Disney movie while a Tesla drives is stupid and got someone killed. It seems like a pretty poor proof that lidar is superior. Not to mention Tesla hardware and software has come pretty far since then.
How's this a good argument? The entire point in favor of lidar is that the hardware gives more data to software which can in turn be used to make decisions. There is no hardware/software distinction here. Nobody claims that lidar isn't capable of detecting objects. (Note that this is not a comment against lidar, I agree with you but I don't think your argument is sufficient)
The parent comment was opining that Uber's LIDAR based system was systemically inferior to Tesla's non-LIDAR based system. As this is a comment thread about LIDAR systems, the clear implication was that the LIDAR aspect failed.
My point was that the software failed, not the hardware, and I am making a hardware/software distinction.
LIDAR > camera-only systems. LIDAR provides positional data for objects before objection recognition/detection. Camera-only systems need to process an image and do object-detection before they can even figure out where objects are, meaning that they will always be slower than LIDAR for at least the amount of time it takes for that particular system to do image processing and object detection.
My point is that you're missing the point. This debate isn't about whether lidar works or not. Whether lidars are capable of capturing images is not the question we're trying to answer. The question is whether applying lidar to self-driving technology is worth the tradeoffs. Your last paragraph is a point about software not hardware. You're suggesting by using lidar, you give software more information than just using camera, which causes software to be faster. This is a valid point. But you cannot say, using lidar in a self-driving car is a good idea because it successfully captured the image even though software failed to process that image to make the correct decision. Well, if lidar doesn't help software (and it does) it'd just be useless crap. If Uber engineers don't know how to process lidar data, they really shouldn't put lidar in their cars... It's not about hardware or software, it's about the system as a whole.
But what is the tradeoff of using lidar? Higher prices that will be absorbed by the fleet owners? In exchange for a significant amount of crash-avoidance data? Lidar has a wider range of operation (especially at night), and software will catch up to squeeze every bit of data it can from both cameras and lidar. I don't understand this obsession with limiting bleeding-edge applications to a select few sensors (as has been echoed in this thread a few times).
Sure, but there are also cases of Teslas acting to save lives where lidar wouldn't have helped. For instance, reacting to brake lights, or slowing when a car in front of another car, not visible, had an accident.
Sure Lidar + camera would be ideal, but if picking one or another it's far from clear where would be safer.
Lidar has 3D of course, but a MUCH lower data rate, and of course no color.
I'm not sure that he meant lidar is a crutch in the way that this article represents. Those tiny wavelengths that allow lidar to provide tremendous detail also make it unreliable in rain and snow. Until that problem is solved, and I haven't heard that it has been, lidar dependent cars either 1) can't drive in the rain or 2) must be able to operate reliably in the rain using technology other than lidar.
Believe it or not, cars will need to be able to drive in the rain, so if you're going to eventually need to build a car that will work reliably without lidar then why not build that to begin with?
It looks like most SDC companies are going with lidar because it is faster and easier, but if it only covers 90% of use cases then that does sound like a "crutch".
Maybe Lidar is a crutch. For a lot of things, a crutch is an adequate substitute for a non-broken leg. The trouble with crutches is that if you want to dance, or play baseball, or ride a bike, you can't do it. I can tell you how I've gotten a much bigger annotated dataset, and if I keep collecting, soon I'll be as fast as a normal person at walking down the sidewalk, but people want to do more than just walk down the sidewalk.
Just because it's a crutch doesn't mean it shouldn't be used. It means you should use it until you have a tool that can do better (your healed leg?).
As far as I can tell, Musk insists walking unassisted with a broken leg is better than using a crutch because a good leg is better than a crutch. Sure, but he's not providing a good leg, he's providing a broken one. It's like telling people to drag their broken leg today because 3 months from now it will be better than a crutch. And he's not taking this route because today's cars do better with normal cameras than with LIDAR (future cars might, but what good is that today?) but because it's cheaper while still allowing big claims.
Use the right tool at the right time. In the meantime develop the next tool and start using it when it becomes appropriate.
Lidar can create a great 3d representation of an environment, but then you're back to the same problem as cameras. You need some sort of AI to identify objects in the data. So the question is how much of an advantage does lidar give you in object identification? It's hard enough to identify objects with confidence in 2d images. Where is the confidence level at for identifying and/or discerning 3d objects with AI?
Sure, lidar gives you way less resolution. Take the Velodyne Alpha Puck for instance: 300k points/sec. If you are moving at 60 mph, one second = 88 feet, so the puck spreads 300k points over 88 feet of highway.
The tesla system has 10 cameras, but I believe only two of them look forward. I believe they are 1280x960 @ 30 FPS or 36M pixels/sec. But there's two of them, 73M pixels/sec. Each pixel is in color (lidar is just a distance).
So the Tesla system has WAY more information about the environment; granted, distance has to be inferred, but it also has radar to help with that.
Additionally, being inherently more similar to eyesight, a car using cameras is likely to get along with human drivers better: slowing down when it's foggy or rainy, seeing at similar ranges to humans, and being able to use color for additional context. Is that a UPS truck or an ambulance? Is that a reflection of a police car with its light on or just a window reflection? Is that a boulder or a wind-filled trash bag?
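A quick sanity check of those rates in Python, using the figures quoted in this subthread (the per-point payload and the "two forward cameras" count are simplifications on my part):

```python
# Figures quoted in this subthread; payload per sample is simplified.
lidar_points_per_sec = 300_000                 # one range (+ intensity) value per point

cam_w, cam_h, cam_fps, forward_cams = 1280, 960, 30, 2
camera_pixels_per_sec = cam_w * cam_h * cam_fps * forward_cams   # ~73.7M pixels/s
camera_values_per_sec = camera_pixels_per_sec * 3                # RGB channels

print(f"lidar:   {lidar_points_per_sec:,} range samples/s")
print(f"cameras: {camera_pixels_per_sec:,} pixels/s ({camera_values_per_sec:,} color values/s)")
print(f"ratio:   ~{camera_pixels_per_sec / lidar_points_per_sec:.0f}x more raw samples from the cameras")
```

Roughly a 250x difference in raw sample rate, with the caveat (as above) that the camera samples contain no direct depth.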
> So the tesla system has WAY more information about the environment, granted distance has to be inferred, but it also had radar to help with that.
I'd argue the contrary. Intelligence is primarily not about the amount of data, but about the amount _and_ quality of the data you receive. If I had a magic sensor giving me obstacles in already-segmented form, that would be a couple of KB, and it would beat any other sensor on the market.
Inferring distance from stereo images has its own failure modes, and they are not as easy to account for as LiDAR's.
LiDAR also gives you reflectivity, so you will be able to differentiate between a UPS truck and an ambulance.
> Is that a reflection of a police car with its light on or just a window reflection?
Fun thing, to my knowledge reflections are a major unsolved problem for vision. It is easier for LiDAR, as you can rely on the distance measurement and will have an object somewhere outside of any reasonable position (e.g underground, behind a wall).
Depending on the lidar, the glass might even register as a second (actually primary) return.
Yes, you need cameras (likely color) to be able to recognise any light-based signalling (traffic lights, ambulance/police lights...), so LiDAR is not a panacea.
But having the lidar tell you that there is a window and that the police car is behind it is likely vastly more robust.
Also, the difficulty is that you have to see arbitrary objects on the road and possibly stop for them. As long as something is larger than maybe a couple of centimeters (or an inch), it will show up on the LiDAR; with stereo vision, you need a few pixels of texture to infer it.
I've worked with lidar data a fair bit in a VR environment. It can be quite hard to tell what's going on in any kind of complex environment. The data is so sparse, and the datasets I was working with were static.
300,000 points per second... if you are trying to figure out what's going on within 1/20th of a second, that's only 15,000 points. Assuming you're scanning 3 lanes (3.7 m each) out to 100 meters, say 3 meters high, that's about 3,330 cubic meters. So at best lidar gives you a handful of points per cubic meter. Not exactly going to be easy to tell a bicycle from a motorcycle, or an ambulance from a UPS truck.
From what I can tell, machine learning has reached near-human levels of object identification on images, but it's not nearly as competitive for things like sparse monochrome point clouds.
At 65 MPH, to be able to avoid something you need some lead time, which means distance. The lidar stuff I've seen is pretty sparse that far out. Of course the sexy almost real looking detailed landscapes from lidar are from tripod mounts and long sample times.
Which leads me to the relevant question: do you have any reason to think that machine learning will handle lidar at 180 feet of range (roughly 2 seconds at 65 mph) better than a pair of color cameras running at 1280x960 @ 30 FPS?
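Here's the same back-of-envelope as a small Python snippet, so the assumptions are explicit (all points landing in the forward volume is the generous best case; a 360° scan spends most of them elsewhere):

```python
# Generous best case: every lidar point lands in the forward volume we care about.
points_per_sec = 300_000           # e.g. a Velodyne Alpha Puck class unit
window_s = 1 / 20                  # "what changed in the last 1/20th of a second"
points_in_window = points_per_sec * window_s        # 15,000 points

lane_width_m, n_lanes = 3.7, 3
depth_m, height_m = 100, 3
volume_m3 = lane_width_m * n_lanes * depth_m * height_m    # ~3,330 m^3

print(f"{points_in_window:,.0f} points over ~{volume_m3:,.0f} m^3 "
      f"= {points_in_window / volume_m3:.1f} points per cubic meter, at best")
# A 360-degree scan spends most of its points outside this volume, so the real
# forward density is lower still.
```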
layman here, does lidar necessarily have to sample the whole environment uniformly?
I ask because, as I understand it, humans actually have quite poor visual acuity through most of our FOV, with a small very precise region near the center. the visual cortex does some nifty postprocessing to stitch together a detailed image, but it seems to me that human vision is mainly effective because we know what to pay attention to. when I'm driving, I'm not constantly swiveling my head 360 degrees and uniformly sampling my environment; instead, I'm looking mostly in the direction of travel, identifying objects of interest, and taking a closer look when I don't quite understand what something is.
is it possible for a lidar system to work this way? maybe start with a sparse pass of the whole environment at the start of a "cycle", and then more densely sample any objects of interest?
Lidars generally operate by rotating a laser that pulses at a fixed rate, using optics to sweep the beam up and down to get a reasonable vertical FoV 360 degrees around the car. They output a stream of samples - basically a continuous series of timestamped (theta-rotation, phi-rotation, distance) tuples - that software can reconstruct into a point cloud.
But! The lidar data is useless by itself since the car is moving through space at an unpredictable rate. Each sample has to be cross-referenced by timestamp with the best-estimate location of the car in order to be brought into a constant frame of reference. This location estimation is a complex problem (GPS and accelerometers get us most of the way there but aren't quite high-fidelity enough without software help) so it can't be done onboard the lidar.
So to do what you suggest, the lidar would need a control system that allows its operational parameters to be dynamically updated by the car. But what parameters? Since the laser is already pulsing at least hundreds of thousands of times per second, there's probably not much room for improvement there without driving up cost, and if we could go higher we'd just do that all the time anyway. The only other option would be to slow down the rotation of the unit while it sweeps over the field of view we've decided is interesting.
That way is a little more conceivable, but I doubt it would work out in practice. If the unit averages 10 rotations per second, it has to be subject to 20 acceleration/deceleration events per second, which would be a significant increase in wear and tear on the unit. It would also make it harder to reliably estimate the unit's rotation at any point in time, again driving up costs.
All this can't grant you much more than, say, a 100% increase in point density on the road (assuming 120 degrees of "interesting" FoV and a 1/6th sample rate on the uninteresting parts). If these things are to be produced at scale, I imagine it would be easier to increase point density by just buying a second lidar, which would also bring better coverage and some redundancy in case of failure.
It definitely works pretty well. I'm doing work currently with a robotic arm and hand doing random, unplanned tasks. I'm testing out different scenarios with lidar units placed on the ceiling and walls, and directly on the arm and hand, in combination with traditional camera based object detection.
Combining the real-time data together is straightforward, but figuring out how to optimally take advantage of it all is definitely a challenge. Having the extra dimension or sense definitely helps though.
I see it helping most in tasks that require a lot of dexterity, such as multiple digits working together, where a camera lens would be too close to the object or would be blocked by the item being handled (folding a blanket, etc.), cutting off its light.
I think the LIDAR usage in this context is not to identify objects but to detect them. There are many situations where a camera-based-only vision system falls down e.g. bad weather, glare from secondary light sources etc.
I'm looking forward to lidar in human-driven cars. Imagine how cool it would be to see the 3D point cloud on your dashboard or reflected on your windshield as a heads-up display.
Yes! I wish people would take car safety as seriously as they take high tech profits. Tens of thousands killed every year. Surely if lidar is a big boost to self-driving car safety it could also help human driven cars with the right interface.
Humans are really bad at scanning and interpreting large amounts of visual data. When driving a car, your brain is already nearly overwhelmed by the amount of information it needs to process and analyse.
I imagine it will be illegal for an actual human to drive on public roads by the time we get to this point, but it would be so cool to drive my car in third person camera like in racing games.
I think banning human drivers will be unpopular in most areas as long as we have a significant proportion of the population who has grown up in a society where the ability to drive a personal car is viewed almost as a constitutional right. Technology can change quickly, but people's attitudes change much slower.
What I expect in the next couple of decades is that we'll have certain individual roads and neighborhoods where human-drivers are banned, and maybe certain roads and neighborhoods where AI-driven cars are banned. Most roads will allow either, because it's not practical in most cases to maintain two separate road networks.
Being able to drive a windowless car by head-mounted-display is something we could accomplish with maybe some minor tweaks to current technology. I doubt such a system would be safe or legal, though, because a software failure could be fatal and we're really bad at making complex software that is also reliable.
> I don't think Lidar is needed to make self-driving cars.
The only sensor that can give mm-accurate, high-resolution, long-range 3D spatial data with low latency is lidar.
For a purely visual system to supplant lidar we need:
1) A self-cleaning, all-weather / all-light-condition camera with a ~170 degree field of view at ~30 megapixels
2) A structure-from-motion system with sub-20 ms latency that can work with spherical images at full resolution
3) A semantic object-recognition system that can classify any object class. It must be scale-, rotation-, colour- and occlusion-invariant. It must also update the world map generated by (2)
4) An object threat-management system that can take the semantic information from (3) and rank it in order of threat, to be passed to (5)
5) A world motion-prediction engine that takes the threats from (4) and works out whether each object is likely to collide with the car
All inside 70-100 ms.
That's not going to happen soon. Of the whole list, (2) is the closest to working.
Lidar allows you to cut through 90% of this, because it gives you an accurate point cloud. From there you can do clustering to make objects, and track those clusters to measure threats. All of this is doable on a small CPU now. Without AI.
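As a rough illustration of how cheap that clustering step can be, here is a toy voxel-grid flood fill over a point cloud; no learning involved, and the cell size is illustrative rather than tuned:

    from collections import defaultdict, deque

    def cluster_points(points, cell=0.3):
        # Hash 3D points into cell-metre voxels, then flood-fill over occupied
        # neighbouring voxels. Returns one list of points per cluster.
        grid = defaultdict(list)
        for p in points:
            grid[tuple(int(c // cell) for c in p)].append(p)

        seen, clusters = set(), []
        for start in grid:
            if start in seen:
                continue
            seen.add(start)
            queue, cluster = deque([start]), []
            while queue:
                k = queue.popleft()
                cluster.extend(grid[k])
                for dx in (-1, 0, 1):
                    for dy in (-1, 0, 1):
                        for dz in (-1, 0, 1):
                            n = (k[0] + dx, k[1] + dy, k[2] + dz)
                            if n in grid and n not in seen:
                                seen.add(n)
                                queue.append(n)
            clusters.append(cluster)
        return clusters

Track the cluster centroids from one scan to the next and you get a closing speed per object, which is the "track those clusters to measure threats" part.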
You make lidar sound great, much better than I've heard.
I thought lidar was quite sensitive to dust, rain, snow, and fog. This is particularly worrisome because lidar samples at a MUCH lower rate than a camera (around 30M samples per second for a cheap camera versus 300,000 or so for an expensive lidar).
While lidar is pretty impressive at short ranges, what about at 2 seconds away @ 65 MPH? Will it detect a decelerating car faster than a camera that can detect a brake light? Will it be better at detecting perpendicular cars vs parked cars at that range?
Will weather cause the lidar to decay in similar ways to human eyes?
65 MPH is about 30 m/s; your average lidar should be good up to about 200 meters, which is nearly 7 seconds at 65 mph.
> Will it detect a decelerating car faster than a camera that can detect a brake light?
Now this is an interesting question. Yes and no. A lidar on its own will not give you object recognition. It will tell you that a reflective surface of size _n_ is directly in your proposed path, and that since the last scan it has got closer. From that you can make a very accurate obstacle map.
I don't think that running a car solely on lidar is actually all that feasible or a clever idea. Not without a boatload of mapping and processing to create a machine-readable semantic map first.
Having a camera array _as well as_ lidar is a very good idea, as the lidar's information can be blended into the semantic map being generated by the camera and radar. Your example of brake lights is good, as it provides a cue as to what is likely to happen.
It also means that the high latency of a visual processing system is less of a problem, as the model can be updated by the lidar. The camera picks up a car, the model attaches it to the point cloud it thinks is that car from the lidar/radar, and when the lidar/radar updates it can create a prediction of where the visual system should look.
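A hedged sketch of that hand-off (all structures and thresholds hypothetical): attach each camera detection to the nearest lidar cluster, then use the cluster's scan-to-scan velocity to predict where the slower vision pipeline should look next.

    import numpy as np

    def associate(detection_xyz, clusters, max_dist=2.0):
        # Nearest cluster centroid within a gating distance, or None.
        best, best_d = None, max_dist
        for c in clusters:
            d = np.linalg.norm(np.asarray(c["centroid"]) - detection_xyz)
            if d < best_d:
                best, best_d = c, d
        return best

    def predict_lookahead(cluster, dt):
        # Where the tracked object should be after dt seconds, so the camera
        # pipeline knows which region to prioritise despite its higher latency.
        return np.asarray(cluster["centroid"]) + np.asarray(cluster["velocity"]) * dt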
> detecting perpendicular cars vs parked cars at that range?
You don't have to. Lidar gives you a point cloud, and those points can be roughly translated into hard surfaces. If a hard surface is in your path or predicted path, take action until it isn't. Dealing with point clouds for object avoidance is computationally much simpler than having to infer 3D from structure from motion, stereo, or both.
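In code, the "take action until it isn't" check really can be that blunt. A minimal sketch, assuming the points are already in the car's frame with x forward, y lateral, z up (all thresholds illustrative):

    def obstacle_in_path(points, half_width=1.5, max_range=60.0, min_height=0.15):
        # True if any point sits in the forward corridor: roughly car half-width
        # plus margin, within braking-relevant range, above the road surface.
        for x, y, z in points:
            if 0.0 < x < max_range and abs(y) < half_width and z > min_height:
                return True
        return False

    # if obstacle_in_path(world_points): brake or re-plan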
But as I said before, you need other sensors to get other data/corroborate world model.
>lidar was quite sensitive to dust, rain, snow, and fog.
It is indeed, like any other sensor.
> Will weather cause the lidar to decay in similar ways to human eyes?
Most lidars operate in the (near) infrared, so they'll degrade differently from human eyes. Depending on the wavelength, they'll either be sensitive to moisture or hardly affected at all.
I see this repeated over and over in this thread, elsewhere on HN, and in many other places on the web.
But no one explains what they mean by a safe driver.
There are other factors at work than mere human-ness. For instance, the US has more than double the number of fatalities per unit vehicle distance of the UK, and more than triple per head of population. In both countries the driver is a human, yet the disparity in outcome is huge.
Perhaps whatever it is that causes this sort of difference should be addressed rather than insisting on a new and inherently unproven technology.
Or, at least, give us some numbers so that we can tell what you mean by safe.
Even in the UK there are deaths. Yes the US should work on whatever it is that makes UK drivers safer, but UK drivers still kill people and are thus unsafe.
You have just given numbers: when autonomous cars beat those numbers they are better. Until then we should be careful.
I didn't get the Uber reference. Haven't they given up on driverless cars, at least for the near future, after the sensor/lidar failure that resulted in a fatal crash?
It was a management failure. The car had a perfectly ordinary radar-based emergency braking function factory-fitted by Volvo, but Uber disabled it, presumably to avoid interference with the car's autonomous driving functions (if I recall correctly).
He is right about lidar being a crutch, but he is wrong about the amount of time needed to develop machine vision software good enough to make lidar unnecessary.
> In daylight, cameras can do that, too, but not so much in the dark
Why not? We've been seeing TV footage of cameras seeing perfectly well in the dark for decades. Why is there a problem with using them for autonomous driving at night?