It's fascinating to see species selecting and then rejecting additional eyes, probably because they can combine functionality into a single organ pair and reduce the overall "cost" of achieving similar results. Additional eyes in spiders typically have different vision functions: near/far, narrow/peripheral, motion, color. Our mammal eyes package all of that into a single pair.
A fun related fact is that many reptiles have a third eye that’s like a “light sensor” used to track daylight. Its function is linked to their cold-bloodedness, and basically triggers their diurnal cycles.
Tesla could do this because, in the initial stages, they relied on Lidar, Radar and other high-precision, specialized sensors, which were used for mapping the terrain.
After millions of miles driven, these sensors no longer need to serve their mapping purpose, because the areas have already been mapped and safe passages have been created. Just like a real, experienced driver who knows the roads, all they need are motion sensors, GPS for correction, and depth-sensing color cameras for handling infrequent dynamic scenarios.
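Purely as an illustration of what "motion sensors plus GPS for correction" could mean (this is a made-up toy, not Tesla's actual stack; all sensor names, rates, and noise values are assumptions), here's a minimal Python sketch of dead reckoning nudged toward occasional GPS fixes:

```python
import numpy as np

def fuse_step(position, velocity, dt, gps_fix=None, gps_gain=0.2):
    """Propagate position by dead reckoning; blend in a GPS fix if one arrived."""
    position = position + velocity * dt          # motion-sensor prediction
    if gps_fix is not None:                      # GPS used only as a correction
        position = position + gps_gain * (gps_fix - position)
    return position

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_pos, est_pos, velocity, dt = 0.0, 0.0, 15.0, 0.1   # 15 m/s, 10 Hz loop
    for step in range(100):
        true_pos += velocity * dt
        noisy_velocity = velocity + rng.normal(0, 0.5)       # drifting IMU-style speed
        gps = true_pos + rng.normal(0, 2.0) if step % 10 == 0 else None  # ~1 Hz GPS
        est_pos = fuse_step(est_pos, noisy_velocity, dt, gps)
    print(f"true={true_pos:.1f} m, estimated={est_pos:.1f} m")
```

It's only a 1-D complementary filter, but it captures the claim being made: the motion sensors carry you between fixes, and GPS keeps the drift from accumulating.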
The point of self-driving is to eventually be better than humans at handling the unexpected because humans are LOUSY at handling the unexpected.
I want my car to be able to see through the fog, rain or snow because it has better "eyes" than I do. I want my car to notice that train I'm about to run into because the sun glare is preventing me from seeing it. I want my car to have noticed the quick moving motorcycle behind me that's overtaking in a lane to my right where it shouldn't be.
A self-driving car that is vision-only cannot get there.
I mean, a vision-only car could still be superhuman at driving through better processing and alertness, and having cameras pointed in all directions at all times. Remember that the bar is pretty low.
(But yeah, dumping LIDAR seems not to have worked well.)
Yeah, I remember Tesla's "who needs LIDAR" announcement coming in the middle of a pandemic supply chain crunch, especially for the chips needed in those LIDAR arrays.
You remember incorrectly. Tesla have never used LIDAR except for training on specific developer cars. Elon has said from day one that LIDAR is an expensive waste because of the compute requirements and latency. They did remove the parking ultrasonic sensors, which at one point were used as a close-proximity, low-speed auxiliary data source in FSD.
The mandate of FSD has always been that humans get away with driving using only their vision, so it must be possible for a computer to do the same. If you have used FSD lately, it's largely there.
Tesla cars do not have a database of mapped roads; every road is new to the car every time you drive down it. The Tesla self-driving system is incapable of learning on its own. They also don't produce a persistent 3D representation of the surroundings: on every loop of the vision system, the old representation is thrown away and rebuilt from scratch. This is why everything jitters when you look at the display showing what the car sees. It's also the system's fundamental flaw, and why Tesla will be left in the dust soon enough.
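To make the distinction concrete (this is a toy illustration of the general idea, not Tesla's actual pipeline; the grid size, noise rates, and decay factor are all made up), here's a small Python sketch contrasting a world model rebuilt from scratch every frame with one that accumulates evidence over time:

```python
import numpy as np

GRID = 50  # 1-D occupancy grid, just for illustration

def detect(true_obstacles, rng):
    """Fake single-frame detections: true cells with dropouts, plus false positives."""
    frame = np.zeros(GRID, dtype=float)
    for cell in true_obstacles:
        if rng.random() > 0.2:                  # 20% chance the detector misses it
            frame[cell] = 1.0
    frame[rng.integers(0, GRID, size=2)] = 1.0  # a couple of spurious detections
    return frame

rng = np.random.default_rng(1)
true_obstacles = [10, 25, 40]

stateless = np.zeros(GRID)    # thrown away and rebuilt every loop
persistent = np.zeros(GRID)   # evidence accumulated across loops

for _ in range(30):
    frame = detect(true_obstacles, rng)
    stateless = frame                              # rebuild from scratch -> jitter
    persistent = 0.9 * persistent + 0.1 * frame    # exponential evidence averaging

print("stateless frame:", np.flatnonzero(stateless > 0.5))
print("persistent map :", np.flatnonzero(persistent > 0.5))
```

The stateless version flickers with each frame's misses and false positives, while the persistent version settles on the cells that keep showing up, which is exactly the jitter-vs-stability trade-off being described.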
https://en.wikipedia.org/wiki/Parietal_eye
Reminds me of Tesla removing ultrasonic sensors (USS) from their cars and moving all sensory input to their camera systems instead.