
Is anyone arguing for replacing vision with lidar and radar instead of combining them with it?



If the radar data conflicts with what vision is saying, which one do they trust? They have shown that their vision stack has surpassed what they can do with radar, so "fusing" in the radar data only makes things worse when it conflicts.


Sensors are obviously important, but it doesn't really matter if two sensors conflict; the important thing is which one is consistent with the running model.
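A minimal sketch of that idea: gate each sensor reading against the running model's prediction and keep the one that agrees with it. Everything here (the 1-D distance tracker, the `gate` threshold, the sensor values) is illustrative, not how any real autopilot stack works.

```python
def pick_measurement(predicted, readings, gate=5.0):
    """Return the reading closest to the model's prediction, or None if
    every reading falls outside the gate (treated as a sensor fault)."""
    in_gate = [r for r in readings if abs(r - predicted) <= gate]
    if not in_gate:
        return None  # no sensor agrees with the model; escalate as a failure
    return min(in_gate, key=lambda r: abs(r - predicted))

# Model predicts the lead car is 30.0 m ahead; vision says 30.2, radar
# returns a spurious 22.0. The outlier is rejected, not averaged in.
chosen = pick_measurement(30.0, [30.2, 22.0])
```

The point being: the tiebreaker isn't "which sensor is better in general" but "which reading is consistent with what the tracker already believes," and a reading no sensor can corroborate is handled as a fault rather than fused in.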

So far their biggest problem is that they allow flicker. That's why the software sometimes picks the wrong lines. (Of course this is a very hard problem. Our brain conveniently smooths over sensory changes for us, because that's how our everyday reality is. Things don't flicker in and out of existence, nor does a car suddenly appear as a different thing and then switch back.)
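One common way to suppress that kind of flicker is simple hysteresis: a track's label only changes after the new classification has persisted for several consecutive frames. This is a hypothetical sketch of the technique, not any vendor's actual implementation; the class name and `hold` parameter are made up.

```python
class StableLabel:
    """Debounce per-frame classifications so a one-frame misdetection
    can't flip a track's identity back and forth."""

    def __init__(self, hold=3):
        self.hold = hold       # frames a new label must persist to win
        self.label = None      # currently committed label
        self.candidate = None  # label waiting to take over
        self.count = 0         # consecutive frames the candidate was seen

    def update(self, observed):
        if observed == self.label:
            # Observation matches the committed label; drop any challenger.
            self.candidate, self.count = None, 0
        elif observed == self.candidate:
            self.count += 1
            if self.count >= self.hold:
                self.label, self.candidate, self.count = observed, None, 0
        else:
            # New challenger starts its streak from one frame.
            self.candidate, self.count = observed, 1
        return self.label
```

With `hold=3`, a car that is misclassified as a truck for one or two frames keeps reading as a car; only a sustained change of evidence flips the label. The cost is a few frames of latency on genuine changes, which is exactly the smoothing-over-time trade-off the comment describes our brains making.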

And if it turns out the sensor(s) failed, the system has to be able to handle that too.





