
The usual definition of bias in ML papers is E[theta_hat - theta], the expected deviation of the estimator theta_hat from the true value theta. That is explicitly a systematic error, not zero-mean noise.
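As a quick check of that definition, here is a minimal Python sketch (all names and numbers are illustrative, not from the paper) that estimates E[theta_hat - theta] by Monte Carlo for the classic biased variance estimator:

    import random

    # Estimate E[theta_hat - theta] by Monte Carlo for the MLE variance
    # estimator (1/n) * sum((x - mean)^2). Its bias is known to be
    # -sigma^2 / n, so the printed number should sit near -0.2 here.
    def mle_variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    true_var = 1.0          # theta: variance of a standard normal
    n, trials = 5, 100_000
    random.seed(0)

    bias = sum(
        mle_variance([random.gauss(0, 1) for _ in range(n)]) - true_var
        for _ in range(trials)
    ) / trials

    print(bias)  # systematically negative, not noise that averages out

The point of the sketch: the error does not vanish over repeated samples, which is exactly what "systematically wrong" means here.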

In any case, the paper suggests that this "bias" or "prejudice" is better described as "truths I don't like". I'm asking if the author knows of any cases where they are actually not truthful. The paper does not suggest any, but maybe there are some?



Again, per the article, "bias refers generally to prior information, a necessary prerequisite for intelligent action (4)." That sentence cites a well-known ML text, and the sense it uses is broader than the statistical definition you cite.

Think, for example, of an inductive bias. If I see a couple of white swans, I may conclude that all swans are white, and we all know this is wrong. Similarly, I may conclude that the sun rises every day, and for all practical purposes this is correct. This kind of bias is neither right nor wrong in itself but is, in the words of the article, "a necessary prerequisite for intelligent action", because no induction or generalization would be possible without it.
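To make that concrete, a toy Python sketch (purely illustrative, not from the article) of a learner whose inductive bias is "regularities seen in every observation hold universally":

    # The bias: project a regularity in a finite sample onto all cases.
    # Without some assumption like this, no finite sample would license
    # any generalization at all.
    def induce_color_rule(observed_swans):
        colors = {swan["color"] for swan in observed_swans}
        return colors.pop() if len(colors) == 1 else None

    european_sample = [{"color": "white"}, {"color": "white"}]
    print(f"All swans are {induce_color_rule(european_sample)}")
    # Confidently induced from the evidence, and famously false.

The same inducer applied to sunrise observations yields a rule that happens to be practically true; the bias itself is identical in both cases.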

There are undoubtedly examples where the prejudiced kind of bias leads to both truthful and untruthful predictions, but that seems beside the point, which is to design a system with the biases you want and without the ones you don't.
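One concrete reading of "design a system with the biases you want" is regularization. Ridge regression, for instance, deliberately biases coefficients toward zero in exchange for lower variance. A sketch in NumPy (synthetic data and a made-up penalty, just to show the mechanism):

    import numpy as np

    # Ridge regression as a bias chosen on purpose: the penalty lam
    # shrinks coefficients toward zero, trading a known bias for lower
    # variance on small, noisy samples.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(20, 3))
    w_true = np.array([2.0, -1.0, 0.5])
    y = X @ w_true + rng.normal(scale=0.5, size=20)

    def ridge(X, y, lam):
        # Closed form: (X'X + lam * I)^{-1} X'y
        return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

    print(ridge(X, y, lam=0.0))   # OLS: unbiased, higher variance
    print(ridge(X, y, lam=10.0))  # shrunk toward zero: biased by design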



