I agree that if your goal is to build a machine that decides who gets to become a doctor, you need to do more than just let it loose on a bunch of text.
But I don't think preventing it from learning the current state of the world is a good strategy. Adding a separate "morality system" seems like a more robust solution.
What do you think of Bolukbasi's approach that's mentioned in the article? In short, you let a system learn the "current state of the world" (as reflected by your corpus), then put it through an algebraic transformation that subtracts known biases.
Do you consider that algebraic transformation enough of a "morality system"?
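For concreteness, the kind of transformation Bolukbasi et al. describe (often called "hard debiasing") estimates a bias direction from definitional word pairs and projects it out of words that shouldn't carry that bias. This is only a rough sketch under my own assumptions: the numpy setup, the variable names, and the skipped mean-centering/equalization steps are mine, not the paper's.

```python
import numpy as np

def bias_direction(embeddings, definitional_pairs):
    """Estimate a 1-D bias direction from pairs like ("he", "she")."""
    diffs = [embeddings[a] - embeddings[b] for a, b in definitional_pairs]
    # Dominant direction of the pair differences (a simplification of the
    # paper's PCA step, which also centers each pair first).
    _, _, vt = np.linalg.svd(np.array(diffs))
    return vt[0] / np.linalg.norm(vt[0])

def neutralize(vector, direction):
    """Remove the component of `vector` that lies along `direction`."""
    return vector - np.dot(vector, direction) * direction

# Toy usage with random vectors standing in for a trained embedding.
embeddings = {w: np.random.randn(50) for w in ["he", "she", "man", "woman", "doctor"]}
g = bias_direction(embeddings, [("he", "she"), ("man", "woman")])
embeddings["doctor"] = neutralize(embeddings["doctor"], g)
```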
I hope you're not saying we shouldn't work on this problem until we have AGI that has an actual representation of "morality", because that would be a setback of decades at least.
> put it through an algebraic transformation that subtracts known biases
> Do you consider that algebraic transformation enough of a "morality system"?
I would consider it a sort of morality, yes. But keep in mind that the list of "known biases" would itself be biased toward a particular goal, be it political correctness or something else.
Yes, every step of machine learning has potential bias, we know that, that's what this whole discussion is about. Nobody would responsibly claim that they have solved bias. But they should be able to do something about it without their progress being denied by facile moral relativism.
If we can't agree that one can improve a system that automatically thinks "terrorist" when it sees the word "Arab" by making it not do that, we don't have much to talk about.
A black box neural network attempting to draw inferences from a human-biased dataset - potentially even more biased because it can't understand subtexts - and then having those conclusions verified by an ad-hoc set of "morality checks" entirely independent of how it reached them sounds like a recipe for disaster.
That's even before the marketing people get involved and start claiming the system is free from human biases...
I'm not sure what your objection is with regards to the independence aspect. Why would having the morality checks integrated into the "learning about the world" part be better?
If you had an unwavering moral code which dictated that men and women should be treated equally, for example, why would it matter which facts are presented to you, in what order, or how you process them? Your morality would always prevent you from making a prejudiced choice, in that regard.
Frankly, I'm not sure the "men and women should be treated equally" instruction is even possible if the data isn't processed in a way which specifically controls for the effects of gender (some of which may not be discernible from the raw inputs).
Sure, it's theoretically possible that an algorithm parsing text about medics' credentials that (e.g.) positively weights male names and references to all-boys' schools, and negatively weights female names and references to Girl Guides, will be fair on average after an ad hoc re-ranking of all its candidates to take into account the instruction to treat male and female candidates equally[1]. It's just unlikely to achieve this without completely reorganizing its underlying predictive model.
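To make the "ad hoc re-ranking" concrete, here's roughly what such a post-hoc fix looks like: the underlying model scores candidates however it likes, and a wrapper re-orders the output to balance the genders. Everything here (the Candidate type, the interleaving rule, the field names) is illustrative on my part, not something from the article.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    gender: str   # "m" or "f" in this toy example
    score: float  # produced by the (possibly biased) underlying model

def rerank_interleaved(candidates):
    """Post-hoc fix: interleave the top-scoring men and women,
    regardless of how skewed the raw scores are."""
    men = sorted((c for c in candidates if c.gender == "m"),
                 key=lambda c: c.score, reverse=True)
    women = sorted((c for c in candidates if c.gender == "f"),
                   key=lambda c: c.score, reverse=True)
    out = []
    for pair in zip(men, women):
        out.extend(pair)
    # Whichever group is larger spills over at the end, where the
    # ranking is no longer balanced -- the model itself is unchanged.
    longer = men if len(men) > len(women) else women
    out.extend(longer[min(len(men), len(women)):])
    return out
```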
[1] There's an interesting parallel to ongoing human arguments about how a machine following its "morality checks" should do this: does it ensure the subjects are "treated equally" in terms of achieving a 50/50 gender ratio irrespective of the candidate pool (thus potentially skewing it massively in favour of the side with the weaker applicants)? Does it try to weight results so the gender balance reflects historic norms (thus permanently entrenching the minority)? Or does it try to be "gender blind" by testing all its inputs for whether they're gender biased and normalising for or discarding those which are, which is basically learning everything again from scratch...
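Those three readings of the same "treat the genders equally" instruction really are different algorithms. A toy sketch of my own (plain score/gender tuples, hypothetical policy names) of what each one would actually compute:

```python
def select(pool, k, policy, historic_female_share=0.2):
    """pool: list of (score, gender) tuples; returns up to k selections
    under one of three contested readings of "treat genders equally"."""
    ranked = sorted(pool, key=lambda x: x[0], reverse=True)
    men = [c for c in ranked if c[1] == "m"]
    women = [c for c in ranked if c[1] == "f"]

    if policy == "fifty_fifty":
        # Hard 50/50 quota, regardless of the candidate pool's makeup.
        return men[:k // 2] + women[:k - k // 2]
    if policy == "historic_norms":
        # Reproduce the historical balance, entrenching the minority share.
        n_women = round(k * historic_female_share)
        return women[:n_women] + men[:k - n_women]
    if policy == "gender_blind":
        # The third option: audit every input feature for gender signal and
        # re-score without it -- not something a wrapper over the existing
        # scores can do, which is the point being made above.
        raise NotImplementedError("requires rebuilding the underlying model")
    raise ValueError(policy)
```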