
A big difference is accountability. If you get into an accident and kill someone while playing a mobile game or taking a medication that says, "do not operate heavy machinery while taking this," it's clear who is at fault, and it is relatively easy for the state and injured parties to seek sanction and redress through the justice system.

So what about an algorithm that has been given the regulatory green light and fails under some edge case or adversarial attack? How about something like the Tesla situation, where they advertise "self-driving" capabilities while tucking in liability-limiting fine print about how the driver should be alert and ready to take over control at all times?




Google says they want to be on the hook for their self-driving cars. (And they don't want humans to be able to take over in the first place.)


Sure. That way they can partner with insurers to offer lower rates to people who buy their tech. Pretty sound business strategy.

I understand that this is a pretty cynical view, and, to be fair, all the Googlers I've spoken with are true believers in their tech (perhaps a little too much). But don't underestimate the power of ego and financial incentives to constrain ethical/moral calculus.


Oh, it makes perfect business sense for Google to ask to be on the hook: less hassle for the customers means a bigger market for the product.


This is an inaccurate description of the Tesla system. When you use it, the car repeatedly tells you that you have to stay alert, and it forces you to have your hands on the wheel most of the time.


Then perhaps it should not be advertised as "autopilot".



