Google, too, uses plenty of formal logic in its autonomous driving OS; it's a Frankenstein of various machine learning techniques. Narrow pattern recognition alone, while powerful, is unlikely ever to be suitable for things like negotiating busy four-way stops.

There is a presumption that all you have to do is gather enough training data and shazam!, you've got a self-driving car, but that's bollocks. There is a tremendous amount of elbow grease that goes into developing and validating an AV to six-sigma reliability. Six sigma might not even be good enough for something as safety-critical as a driverless car.




It doesn't need six sigma; it just needs to be significantly better than humans, and the manufacturer has to take on the liability.


In strictly rational terms, it has to be better than 1 fatality per 100,000,000 miles driven (roughly the human baseline in the US), but public opinion isn't exactly rational, so it probably needs to be even better than that.
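
(For a sense of scale, here's a rough back-of-the-envelope sketch, not from the comment above, of how many failure-free test miles it would take to demonstrate a rate below 1 per 100,000,000 miles at 95% confidence, using the standard zero-failure "rule of three" bound.)

    import math

    # Target: fewer than 1 fatality per 100,000,000 miles driven.
    target_rate = 1 / 100_000_000   # fatalities per mile
    confidence = 0.95

    # Zero-failure bound: observing no fatalities over M miles only lets you
    # claim rate < -ln(1 - confidence) / M at the chosen confidence level,
    # so demonstrating the target rate requires M = -ln(1 - confidence) / rate.
    miles_needed = -math.log(1 - confidence) / target_rate

    print(f"Failure-free miles needed: {miles_needed:,.0f}")  # ~300 million

That works out to roughly 300 million fatality-free miles just to support the claim statistically, which is part of why the validation effort mentioned upthread is so large.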


It'll be that way at first, yeah, but once it becomes obvious how much better the AI drivers are, it will start to become irresponsible not to use them.



