

If you think about ethical bias in a system that has no concept of ethics you are going to have a bad time.

You're essentially trying to fix reality by altering datasets which is a completely erroneous way of solving the problem.

It's playing God: deciding what is right and what isn't, alone and without the input of society.


These systems are created for some intent by humans. Studying the ethics of AI is really thinking about the ethical issues surrounding the practice of implementing AI systems.

Ethical reflection on the intent and impact of the systems one builds is not mandatory in our field (it is for other professions) but probably still a good thing to consider if you want your contribution to society to be a positive one. Taking time to think about this stuff in a MOOC sounds like one way of avoiding doing that thinking alone and without the input of society.


I disagree. Take one of the examples mentioned in said MOOC: the bias in word embeddings that makes vector arithmetic go from "doctor" to "nurse" when you replace "male" with "female".

I agree that it would be nice if the returned vector were "doctor" in both cases, but neither the embedding code (the implementation) nor the embedding algorithm (theory) has any concept of gender, ethics, or morality.
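
To make the arithmetic concrete, here is a rough sketch of that example using gensim's KeyedVectors. The vectors file path is a placeholder, and the exact neighbours returned depend entirely on the corpus the embeddings were trained on:

    # Sketch only: load pretrained word2vec-format embeddings and run the
    # analogy "doctor" - "man" + "woman". On many corpora the top result is
    # "nurse", which is the bias the MOOC example points at.
    from gensim.models import KeyedVectors

    kv = KeyedVectors.load_word2vec_format("vectors.bin", binary=True)  # placeholder path
    print(kv.most_similar(positive=["doctor", "woman"], negative=["man"], topn=5))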

Here the bias comes from the datasets the AI trained on.

The bias of those datasets comes from society writing texts in a biased way.

So fixing this "bias" means fixing the language used in society, which is neither an AI problem nor a dataset problem.


I have been wondering if it would be possible to collect examples of bias, the same way we collect other datasets, and teach NNs to de-bias themselves. The reason this is hard is that bias is kind of the opposite of relevant information: the data would be patterns to avoid rather than follow.

Assembling a database for the purpose of de-biasing might also prove unfeasible because of inductive bias.
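
One concrete idea in this direction from the word-embedding literature is "hard debiasing" (Bolukbasi et al.): estimate a bias direction from a handful of example word pairs and project it out of words that are supposed to be neutral. A rough numpy sketch, with all names and vectors illustrative rather than a real implementation:

    import numpy as np

    def bias_direction(pairs, emb):
        # Average the differences of definitional pairs, e.g. ("he", "she").
        diffs = [emb[a] - emb[b] for a, b in pairs]
        d = np.mean(diffs, axis=0)
        return d / np.linalg.norm(d)

    def debias(v, direction):
        # Remove the component of v that lies along the bias direction.
        return v - np.dot(v, direction) * direction

    # Assuming `emb` maps words to numpy vectors:
    # d = bias_direction([("he", "she"), ("man", "woman")], emb)
    # emb["doctor"] = debias(emb["doctor"], d)

Whether the result counts as "de-biased" still depends on someone choosing the definitional pairs and the list of neutral words, so it doesn't escape the subjectivity problem.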


The fundamental problem is that deciding what's biased is extremely subjective and context dependent. If an AI says "crime is often a problem in lower income neighborhoods", is it delivering a statistical fact or expressing bias against the poor? Depends entirely on how we think people are going to use the results.


Good point. Inductive bias takes many forms.


Or, accept that many women enjoy being nurses and doctors, such that man->doctor | woman->nurse/doctor isn't weird.

It's not a competence thing, or it stopped being one once women doctors became common; it's a motivation thing. Not toward a "better" place, but a different one.


>You're essentially trying to fix reality by altering datasets which is a completely erroneous way of solving the problem.

I would argue this is a decidedly effective way to solve the problem.


No. This policing of “toxicity” is exactly why so many institutions are suffering from progressives’ excesses.

Learn to take criticism before it’s too late and you crash and burn.


I was pointing out unnecessary negativity and disinformation. You guys are looking for excuses to be angry; I don't get the extrapolation to progressives' excesses in institutions (I don't even know what this refers to). I got outsmarted in true HackerNews fashion. Hope you won all the Christmas dinner debates this year.



