
The technology is impressive, but humans drive okay without LIDAR, radar, or GPS. Maybe someday self-driving cars will be able to drive using only two small visible-light sensors located above and behind the steering wheel. It may be purely an AI challenge to operate that way, and commercial systems will always employ advanced sensors for better safety. But I don't think true parity has been reached until they can drive like we do.



But people aren't very good at driving; it would be silly to handicap computers with the limitations we evolved. Every new car made already carries lots of advanced technology to make up for our two eyes.


Agreed, I don't think we should strive for parity. I'd much rather see driverless cars that are better than humans at driving, even if they need more tools to do it. The more poor drivers we can replace with driverless cars, the better.


Commercial self-driving cars should use every sensor modality that is cost-effective and improves safety. I would never argue we should hamstring our driverless cars; let's give them the best shot to be uber-safe and reliable, no question.

In the meantime, on a completely separate, non-commercial track, AI researchers should try to do more with less. Driving using only visible-light sensors is a challenge. AI is pushed forward by taking on challenges exactly like this; let's see the push continue.

These two tracks may in fact intersect. When your LIDAR and radar are caked with ice and mud, you'll want the car to be able to drive visually at least to a safe stopping point.


Don't forget that humans also have a pretty good accelerometer and sound processing built-in, which helps. But it's true, we are much better at object detection and recognition.


Two thoughts:

Cars are far and away the biggest killer of people in my age group. I sincerely hope that we are not going for true parity, and that they will not end up driving like we do.

Using a couple of stereoscopic images to get the lay of the land is a very, very error-prone process. Humans have to make do with it because of the limited sensory equipment available to us. Computers can and should make use of the vastly superior technologies at their disposal. One of the main points of this whole enterprise is to make better, safer cars by replacing the primary point of failure behind most collisions. Deliberately crippling the system in an effort to slavishly imitate the thing it's supposed to replace would be completely missing the point.
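To make the error-proneness concrete: in the standard pinhole stereo model, depth is inversely proportional to disparity, so a tiny disparity error at long range translates into a huge depth error. A toy sketch (focal length and baseline values here are made-up assumptions, not from any real system):

```python
# Depth from stereo disparity: z = f * b / d (pinhole camera model).
# Illustrates how a 1-pixel disparity error balloons into metres of
# depth error at range. All numbers are illustrative assumptions.

f = 800.0  # focal length in pixels (assumed)
b = 0.2    # baseline between the two cameras in metres (assumed)

def depth(disparity_px: float) -> float:
    """Depth in metres for a given pixel disparity."""
    return f * b / disparity_px

z_true = depth(4.0)  # a point at 40 m produces a 4-pixel disparity
z_off = depth(3.0)   # the same point if we misjudge disparity by just 1 px
```

Here a single pixel of disparity error moves the depth estimate from 40 m to over 53 m, which is why stereo-only ranging degrades badly with distance.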


> I sincerely hope that we are not going for true parity

Commercial driverless cars will blow humans out of the water in safety and reliability. That is the goal, and should be the goal. To achieve it, multiple sensors can and should be used.

But separately AI researchers will continue to improve and evolve their algorithms. One avenue for improvement is to operate with fewer sensors. A human driver can drive passably well on any road without LIDAR or radar or GPS. In time computers can and should be able to do the same. We will benefit from that capability, even if in general driverless cars make use of other sensors.


I would love to not drive. I have a 45-minute daily commute each way, and paying attention for 45 minutes of driving is not something I look forward to. I'd rather surf the web, stare out of the windows, or just about anything else. Some people love driving; good for them. But for me, driving is just the thing you have to do to get to point B, where the interesting thing is.


We already have an autonomous self-driving solution to that here.

It uses all-electric vehicles, and they even have a convoy mode where each car follows the lead vehicle closely to maximize the number of passengers and minimize wind resistance.

The guidance system is pretty primitive - it's all done in hardware, with steel wheels on steel rails - but that makes it relatively difficult for it to go off the rails.


Don't anthropomorphise the design. Computers are better at processing huge quantities of structured data. Humans are better at pattern recognition and adaptation. Trying to design an autonomous robot by emulating the way humans or animals do it is a recipe for bad design.


They are taking the right track today by using a kitchen sink of radar, LIDAR, GPS, and visual sensors. This is the way to deliver a self-driving vehicle soonest and safest.

But a self-driving car that uses only visual sensors is clearly possible in the long run. And having that technology would only benefit multi-sensor cars. What if one or more sensors breaks when you are doing 85 mph with the whole family asleep? I'd certainly welcome the resiliency to operate on less input.


Aren't you conflating two issues? Getting visual sensing to the same level as radar/lidar is a great aim. Having redundant multi-modal sensing is a great aim. Switching over to visual-only isn't.

There are too many situations where one type of sensing isn't good enough (e.g. lasers scatter off snow and can't penetrate fog/dust, radar can get saturated by multiple corner reflectors, visual sucks at night, IR sucks in bright sun, etc). To reduce cost visual-only might be a good way to go, but it won't be versatile enough to cover all the necessary scenarios.


I'm not advocating switching over to visual only, unless the other sensors are broken or unavailable as you describe.

I'm just advocating that we do the research: create some visual-only cars as proof of concept and solve those thorny AI problems. It's an artificial constraint, but one that will produce engineering innovations which can then be applied back to real-world products.


The problem is not the AI (driving is relatively simple), but the sensors. Cameras are bad compared to our eyes.


Artificial intelligence is a broad category -- it's not just the driving decisions to be made but also the particle filtering, car localization, policy search, object tracking, Kalman filtering, etc. The fact that the car can intelligently drive around bikers on the side of the road (and wait for an appropriate time to do so) is a significant AI challenge if it's actually broken down. It involves everything from raw, noisy sensor readings to high-level policy search.
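To give a flavour of one of those pieces, here is a minimal 1D Kalman filter that fuses a series of noisy position readings into a converging estimate. All the numbers (noise variances, measurements) are made up for illustration:

```python
# Minimal 1D Kalman filter: estimate a position from noisy readings.
# All parameter values below are illustrative assumptions.

def kalman_step(x, p, z, q=0.1, r=1.0):
    """One predict/update cycle for a stationary-state 1D Kalman filter.

    x: current state estimate (position)
    p: current estimate variance
    z: new noisy measurement
    q: process noise variance, r: measurement noise variance
    """
    # Predict: the state model is "unchanged", but uncertainty grows.
    p = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p / (p + r)
    x = x + k * (z - x)
    p = (1 - k) * p
    return x, p

x, p = 0.0, 100.0  # start from a vague prior
for z in [4.8, 5.2, 5.1, 4.9, 5.0]:  # noisy readings around a true position of 5
    x, p = kalman_step(x, p, z)
```

After a handful of readings the estimate settles near 5 and its variance shrinks well below the single-measurement noise. The real car runs far richer versions of this (multi-dimensional state, motion models, particle filters), but the fuse-noisy-evidence core is the same.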

Sure, our eyes have great resolution and batteries-included depth perception, but they can't see 360 degrees around the car at 15 fps. Pros & cons.


I wouldn't count particle filtering, localization, object tracking, and filtering as AI. They are enablers, but not the intelligence.

Policy search is AI. =)

Eyes not only have great resolution and depth perception, but are attached to an amazing pattern-processing machine that looks forward in time to estimate the next set of perceptions. They're also very environment-invariant - sun, snow, heavy rain, fog, etc. would screw up a camera, but human eyes handle them relatively gracefully.


> humans drive okay

Seriously?


So in the US we have about 8 fatalities per 1B kilometers driven. Yes, that's 8 too many, but that is a damn lot of km driven without incident. So yeah, we do okay. Machines will do better, but we do okay.
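That figure roughly checks out from the headline numbers. A quick sanity check, assuming ballpark US annual figures of ~33,000 traffic deaths and ~3 trillion vehicle-miles travelled (both approximations, not exact statistics):

```python
# Rough sanity check of the "~8 fatalities per 1B km" figure.
# Both inputs are approximate, order-of-magnitude US annual numbers.

deaths_per_year = 33_000   # approx. US traffic fatalities per year (assumed)
miles_per_year = 3.0e12    # approx. US vehicle-miles travelled per year (assumed)
km_per_mile = 1.609

km_per_year = miles_per_year * km_per_mile
rate_per_billion_km = deaths_per_year / (km_per_year / 1e9)
```

That comes out around 7 per billion km, i.e. the same order as the 8 quoted above: dying on any single kilometer is roughly a one-in-a-hundred-million event.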


Only counting the fatal accidents sounds like a mistake to me. I suspect that in many more instances, limb loss, paraplegia, or even just serious material and psychological damage could be avoided by highly reliable automated driving.


Absolutely, all manner of injury could be lessened or avoided by self-driving cars. And improvements made in property damage, absence from work, enforcing traffic laws, parking infrastructure, driver training, accident forensics, and on and on.

My point was just to defend humans from the hyperbole that we all drive horribly today. We have cumulatively driven trillions of kilometers. Our cities and entire economies function because, for the most part, when you get into a car you can expect to arrive at your destination. This is no small feat. But yes, I look forward to self-driving cars vastly improving on "okay".



