A big difference is accountability. If you get into an accident and kill someone while playing a mobile game or taking a medication that says, "do not operate heavy machinery while taking this," it's clear who is at fault, and it is relatively easy for the state and injured parties to seek sanction and redress through the justice system.
So what about an algorithm that has been given the regulatory green light and fails under some edge case or adversarial attack? How about something like the Tesla situation, where they advertise "self-driving" capabilities while tucking in some liability-limiting fine print about how the driver should be alert and ready to take over control at all times?
Sure. That way they can partner with insurers to offer lower rates to people who buy their tech. Pretty sound business strategy.
I understand that this is a pretty cynical view, and to be fair, all the Googlers I've spoken with are true believers in their tech (perhaps a little too much so).
But don't underestimate the power of ego and financial incentives to constrain ethical/moral calculus.
This is an inaccurate description of the Tesla system. When you use it, the car repeatedly tells you that you have to stay alert, and it forces you to have your hands on the wheel most of the time.