Making AI Better by Making It Slower (medium.com/bellmar)
14 points by mbellotti on June 19, 2020 | 5 comments
I finished "Blink" recently, and one of the little revelations in that book is about the police and why officers don't have partners anymore. Some friends and I used to joke that it was because the city was trying to save on labor costs (an extra car being a lot cheaper than extra cops, we reasoned).

Turns out that a cop on their own is more conservative. They have to think about whether to engage: they have no backup, so in any situation they get themselves into, they can't entertain the fantasy that a partner will dig them out of it.

It slows them down, makes them assess the situation and reason about it instead of reacting. It improves both citizen and officer safety.

Using computers to audit human decisions instead of circumventing them just sounds like a more realistic option. Send the questionable X-rays for a second opinion (or have the same tech look at them a second time on a 'good day' instead of at 4:30 on a Friday). File code review comments instead of blocking a merge. File PRs to upgrade dependencies that appear to pass the test automation.

The human still consciously chooses in these situations.

In the old days we had some UX luminaries who talked about the importance of having systems (especially where customer relationships are involved) whose business logic can be overridden by a human operator: waive the fee, exempt the tax, what have you. It's in many ways the same kind of problem, just magnified.


> Turns out that a cop on their own is more conservative.

They are? It's plausible to me that a cop on their own is more scared and more likely to take drastic action if they feel threatened.


I thought so too, but the statistics seem to disagree. Wait for backup.

It seems like having a partner was intended to keep cops out of trouble (safe and honest), and that doesn't appear to be working out that great either.


I would imagine it would be a lot less likely for two officers' body cams to both fail at the same time. But I'm sure that's not related to the change at all.


I find blog posts like this interesting in that they seem to justify a certain set of ethics (in this case, why it's okay to make machines that ultimately result in people dying) by basically ignoring the ethical question and replacing it with a completely different problem (Type 1 versus Type 2 thinking, both of which can be used ethically or unethically). It really seems like a form of self-delusion to me.



