>No one is ever going to use word vectors to figure out what genders are capable of what jobs.
How can you possibly make this claim?
Biased word embeddings can bias inference in any downstream system that consumes them, whether that's the next layer of a deep neural network or some other ML model. And it's not clear how to disentangle these (presumably) undesirable biases from distributed representations like word vectors: the bias isn't stored in any one dimension you could simply drop, it's smeared across the whole representation.
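To make this concrete, here's a minimal sketch of measuring one such bias, assuming gensim and the publicly available 50-dimensional GloVe vectors (the word list and the "she minus he" direction are just illustrative choices, not a rigorous method):

```python
import numpy as np
import gensim.downloader as api

# Loads pretrained GloVe vectors (downloads ~66 MB on first run).
vectors = api.load("glove-wiki-gigaword-50")

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# A crude gender direction: the difference between one gendered word pair.
gender_direction = vectors["she"] - vectors["he"]

# Project occupation words onto that direction; positive leans "she",
# negative leans "he". A downstream model consuming these vectors is
# free to pick up on whatever skew shows up here.
for word in ["doctor", "nurse", "engineer", "homemaker", "programmer"]:
    print(f"{word:12s} {cosine(vectors[word], gender_direction):+.3f}")
```

On standard pretrained vectors, words like "nurse" and "homemaker" tend to project toward the "she" side while "engineer" and "programmer" lean toward "he", which is exactly the kind of signal a downstream classifier can exploit without anyone having asked it to.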