The requirement isn't to avoid discrimination; it's to not get caught. That somewhat opaque "AI" layer is great for that.



And if it's actually systematically charging e.g. a specific minority more than other people, it will get caught in a hurry and end up being hugely costly for the company.

This kind of stuff is easy to catch. A single person typing some different parameters into an insurance quote webpage can catch it.
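As a rough sketch of that kind of probe (paired testing): submit otherwise-identical quote requests that differ in a single attribute and compare the returned prices. Everything here is hypothetical, i.e. the URL, the form fields, and the response format are stand-ins, not any real insurer's API:

    # Paired-testing sketch: vary one attribute, hold everything else fixed.
    # The endpoint and field names below are made up for illustration.
    import requests

    BASE_QUOTE = {
        "age": 35,
        "vehicle": "2018 Honda Civic",
        "postcode": "90210",
    }

    def get_quote(**overrides):
        """Request a quote using the base profile plus one varied attribute."""
        payload = {**BASE_QUOTE, **overrides}
        resp = requests.post("https://insurer.example/api/quote", json=payload)
        resp.raise_for_status()
        return resp.json()["monthly_premium"]

    # If only the name changes but the premium does, something is off.
    for name in ("Emily", "Lakisha"):  # name pairs like these are common in audit studies
        print(name, get_quote(first_name=name))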


It's one thing for people to know what you are doing; it's another to prove it in a court of law, or even to get law enforcement interested in the first place.


Until you are forced to explain exactly how your helper algorithm came to that conclusion, walking through the logic and calculations step by step in easy-to-understand language... (with "I can't" not being an acceptable answer).


Nah, there are some nice metrics for capturing disparate impact between categories. Check out Microsoft's fairlearn library sometime.
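For instance, here's a minimal sketch of one such metric, demographic parity difference: the gap in selection rates (fraction of positive predictions) between groups defined by a sensitive feature. The toy data is made up purely for illustration; assumes a reasonably recent fairlearn install:

    import numpy as np
    from fairlearn.metrics import (
        MetricFrame,
        demographic_parity_difference,
        selection_rate,
    )

    # Toy labels, predictions, and a hypothetical sensitive attribute.
    y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
    y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
    group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

    # Per-group selection rates.
    frame = MetricFrame(
        metrics=selection_rate,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(frame.by_group)  # selection rate for groups "a" and "b"

    # One-number summary: max gap in selection rate across groups.
    # Near 0 suggests demographic parity; large values flag disparity.
    print(demographic_parity_difference(y_true, y_pred, sensitive_features=group))

The point is that none of this needs the model to explain itself: the metric only looks at outcomes per group, so an opaque model doesn't hide the disparity.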



