There's some psychology at work as well. Suppose self-driving cars cut road accident and fatality rates in half, but in the remaining incidents, half are due to driver error and the rest to hardware and software failures.
I think a lot of people would have a problem if half of all fatalities were due to system glitches, even if the net result is fewer deaths, and I don't think that's entirely unreasonable.
It will also be somewhat new ground. As I've noted before, there are relatively few examples of consumer-facing technologies or products where properly maintained and used (or sometimes even misused) products kill and maim, and where people are just fine with that because "stuff happens" and the cost/benefit tradeoff is reasonable.
Drug interactions are one of the relatively few such areas today, and even those often lead to lawsuits. (And the US effectively indemnifies drug companies against side effects from standard childhood vaccines.)
My big concern is that we've seen how reckless these companies have been (Samsung with Note 7 batteries, Facebook/Twitter with the election, AirBNB(?) with reviews of unsafe places, Uber with basically everything...). I don't want those companies making self-driving cars without strong oversight. 'Move fast and break things' has done a lot of damage over the last few years.
I don't want them to be allowed to apply that strategy to cars. We couldn't trust their judgement on simpler things, and the risk is far too high with cars.
They're an example, although I would argue that, in this case, "properly used" equates to not used at all. Cigarettes were also the subject of long, drawn-out legal cases and a substantial settlement, so they're something of a unique product.
> I think a lot of people would have a problem if half of all fatalities were due to system glitches, even if the net result is fewer deaths, and I don't think that's entirely unreasonable.
It's still a huge net reduction in deaths any way you look at it, and there are significant gains in other areas (speed, convenience). I remain entirely unmoved by that argument.
I'd rather be hit by software than by a human. At least then I'd know we did as much as we could to alleviate the problem.
I'm not arguing it's perfectly logical. If people behaved logically, the sunk cost fallacy wouldn't exist.
I think it would be a real problem. If the death count from machines becomes non-trivial before the death count from human drivers becomes relatively trivial, I think there will be a big mess in the media.