So does work like this essentially mean there will always be strange edge cases in AI algorithms, so that they will never be 100% accurate? Which means AI-driven cars and robots will also never be perfect at navigating the world? What degree of accuracy would we ever be comfortable with? The current metrics for AI seem too simplistic; there need to be several new and better ways to understand how functional an AI is. 99% accuracy in classification sounds great until you realize that could mean your car misjudges something one time in a hundred. How can you objectively say an AI is a better driver than most humans? Is the self-driving vehicle industry developing standards, or is it all just trial and error judged by human observers?
I wonder what would happen legally if someone wearing one of these items gets run over by a car in self driving mode. I presume the pedestrian is still in the clear?
Haha, the next layer will be a sweater for pedestrians that tricks computer vision systems into thinking you're a stop sign. Probably wouldn't be that hard even.
I'm also not sure what would happen if a car with an automatic speed limiter, doing 40 in dense traffic, suddenly spots a 20 sign on an adjacent road, drops a gear to slow down, and gets rear-ended.
Technically the car behind needs to keep a safe distance, but the speed-limited car has also functionally brake-checked it, and without any brake lights coming on.
Bit of a non-sequitur, but the easiest way to implement such a speed limiter is to make the car simply refuse to accelerate once it's over the limited speed (and either warn the driver, or reduce power gradually so it slows down gently), with a manual override available, rather than jamming on the brakes.
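To make that concrete, here's a minimal sketch of the "cut power, never brake" logic. Everything here is hypothetical (the function name, the 5 km/h taper band, the unit choices); a real limiter lives inside the engine/motor controller, not application code:

```python
def limited_throttle(driver_throttle: float, current_speed: float,
                     speed_limit: float, override: bool = False) -> float:
    """Clamp engine power above the limit instead of braking.

    driver_throttle: pedal position, 0.0 to 1.0
    current_speed / speed_limit: same units (say, km/h)
    override: a manual kick-down by the driver bypasses the limiter
    """
    if override or current_speed <= speed_limit:
        return driver_throttle            # no intervention needed
    # Over the limit: fade power out over a (hypothetical) 5 km/h band
    # so the car coasts down gently -- no downshift, no brakes.
    excess = current_speed - speed_limit
    fade = max(0.0, 1.0 - excess / 5.0)
    return driver_throttle * fade
```

The point of the taper is that the car decelerates only as fast as rolling resistance and engine drag allow, so following traffic sees normal coasting behaviour rather than an invisible brake-check.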
That's how my old car does it, but the new one will downshift if it wants to slow down a lot (either when you reduce the cruise speed sharply, or when it imagines it saw a lower speed limit sign). Seems dangerous.