> if these algorithms were better trained on a diverse set of faces, they might not have made that mistake.
Is this assumption based on something specific? That is, are there good reasons to believe that these algorithms weren't trained on a diverse set of people, and that such training would have prevented this error?