Hacker News

Auditing is a complicated process; if you don’t want to do the work, you’ll just have to take their word for it.



The problem is the feasibility of auditing, not whether the motivation exists. And no, taking their word for it is not adequate.


Then what do you propose be done? If you accuse someone of having an extremely racist AI you need a way to prove they set out to make it that way. You don’t just get to say “It’s racist” and shut it down.


You look at the outcome distribution.
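As a minimal sketch of what "look at the outcome distribution" could mean in practice: compare the rate of favorable model outcomes across groups (a demographic-parity style check). The group labels, predictions, and function below are made-up illustration data, not any specific auditing tool.

```python
# Hypothetical outcome-distribution audit: compare favorable-outcome
# rates per group. All data here is invented for illustration.
from collections import Counter

def outcome_rates(groups, predictions):
    """Return the favorable-outcome (prediction == 1) rate per group."""
    totals = Counter(groups)
    favorable = Counter(g for g, p in zip(groups, predictions) if p == 1)
    return {g: favorable[g] / totals[g] for g in totals}

groups      = ["A", "A", "A", "B", "B", "B", "B", "B"]
predictions = [ 1,   1,   0,   1,   0,   0,   0,   1 ]

rates = outcome_rates(groups, predictions)   # A: 2/3, B: 2/5
disparity = max(rates.values()) - min(rates.values())
```

A large `disparity` flags a skewed outcome distribution, though, as the replies below note, interpreting that gap still requires deciding which correlations count as meaningful.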


The danger here is of assuming the conclusion, in this case, that race doesn't correlate with anything meaningful. This is still a hotly disputed point.

Ultimately, transparency and reproducibility are, I think, the key criteria, as they are in science.


And then what? Shut it down if it doesn’t conform to your expectations or world view?



