
The most interesting thing about this is that it's not interesting. None of these cases are very bad. The self-driving car accident is the most severe, and even that was a minor accident in which the car wasn't at fault.



I mostly agree. However, regarding the bus accident, the problem is that an AI has to be far better than a human out of the box in order to gain widespread acceptance. That means it needs to be able to make tough calls, and make them better than the humans it is meant to replace (or complement). The description of this incident in particular shows why self-driving AIs still have such a long way to go.


This may not be so. I expect many people would see the good in a technology that prevents accidents and deaths even if its first release isn't perfect, and that those who don't feel this way at the start could be brought around by good arguments.

Speaking of which, there's a study by RAND Corporation researchers (described on RAND's blog [0] and published here [1]) arguing that it is likely better to get the technology out even before it is perfected, not only to save lives now but also to speed up its improvement -- the rubber needs to hit the road, so to speak.

[0] https://www.rand.org/blog/articles/2017/11/why-waiting-for-p...

[1] https://www.rand.org/pubs/research_reports/RR2150.html


The Apple Face ID hack could have long-term ramifications, no? Especially since anybody who doesn't read these articles will go on believing the system is completely secure.


Eh, it requires a lot of effort, technical skill, and money to break, as well as good photos of the target. Oh, and physical access to the device. It's very, very far from the worst security vulnerability ever. Remember the days when this worked? http://i.imgur.com/rG0p0b2.gif



