What oppression? How are word vectors oppressing anyone? What a ridiculous claim.
>Have you noticed how automation often vastly amplifies things?
No, not at all. I've heard this claim in similar discussions, but I've yet to see a convincing example, particularly with word2vec. I find it very implausible that word vectors will somehow discriminate against female doctors or whatever.
>It's a short step from saying 'this model accurately reflects the bias in society' to 'that's how things are, the computer says women aren't cut out to be doctors.'
No, it's not a short step at all. No one is ever going to use word vectors to figure out what genders are capable of what jobs. At worst, your auto-correct might be slightly less likely to suggest "doctor" for a misspelled word occurring in a female context. And on net it will still make more accurate corrections than the alternative.
>No one is ever going to use word vectors to figure out what genders are capable of what jobs.
Directly, no. Nobody is going to go 'ah, word2vec - a new tool with which to perpetuate patriarchal capitalism, mwuhahaha'...probably. People are weird that way.
But indirectly they certainly will. How about NPC character generators in MMORPGs? Or chatbots on social networks? Stock characters in auto-generated romance novels? The possibilities are endless.
No doubt you will say these examples are ridiculous, because you seem like a rigorous, scientifically minded person who would be careful not to use data in inappropriate contexts, and who would try to discount cultural or emotional factors when making strategic decisions. But you are only as good at this as your own self-awareness and willingness to acknowledge the existence of implicit bias.
And many people are quite different from you, and more easily or willingly allow their judgment to be shaped by representational stereotypes. Marketers aim to mirror their audience's worldview as closely as possible, so that consumers will identify with the pitch when it arrives. Politicians and yellow journalists routinely abuse statistics to grab people's attention. And so on.
I urge you to think more about this, and in a more imaginative fashion. People are often surprised by the unexpected applications others find for a technology.
>No one is ever going to use word vectors to figure out what genders are capable of what jobs.
How can you possibly make this claim?
Biased word embeddings have the potential to bias inference in downstream systems (whether it's another layer in a deep neural network or some other ML model).
It is not clear how to disentangle (probably) undesirable biases like these from distributed representations like word vectors.
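To make that concrete, here's a minimal sketch (assuming Python with gensim; "word2vec-google-news-300" is gensim's downloader key for the pretrained Google News vectors, and exact rankings will vary by corpus and training run) of how these associations sit in the vector geometry and get handed wholesale to whatever consumes the embeddings:

    # Probe occupational gender associations in pretrained word vectors.
    # Assumes gensim is installed; the download is large (~1.6 GB).
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")

    # The classic analogy probe: man : doctor :: woman : ?
    # On the Google News vectors this famously surfaces terms like
    # "nurse" and "gynecologist" near the top of the ranking.
    print(vectors.most_similar(positive=["woman", "doctor"],
                               negative=["man"], topn=5))

    # A downstream model never has to run an analogy to inherit the
    # bias: it sees the same geometry as raw features. Compare each
    # occupation's cosine similarity to gendered pronouns.
    for word in ["doctor", "nurse", "engineer"]:
        print(word,
              vectors.similarity(word, "he"),
              vectors.similarity(word, "she"))

And attempts to debias (e.g. projecting out a "gender direction") only go so far, because the associations are smeared across many dimensions rather than stored in one neat coordinate you could delete.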