
> to algorithms that notice true but politically inconvenient

I don't know that I agree that there exists an algorithm that can determine what is factually true, so I'm not sure I agree that an algorithm can "notice a true thing".

Do you have an example of when an algorithm noticed something that was objectively true but was shut down? Can you explain how the algorithm took notice of the fact that was objectively true (in such a way that all parties agree with the truth of the fact)?

I can't think of a single example of an algorithm determining or taking notice of an objective fact that was rejected in this way. But there are lots of controversies I'm not aware of, so it could have slipped by me.

For example, gender stereotyping for jobs or personal traits, which is politically incorrect but nevertheless reflects the corpus of training data. (He is smart. She is beautiful. He is a doctor. She is a homemaker.)
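The "reflects the corpus" part is easy to see with a toy sketch: a trivial co-occurrence model trained on a deliberately skewed corpus will reproduce that corpus's stereotypes in its completions. The corpus below is made up for illustration; it is not real data or any particular model's training set.

```python
from collections import Counter

# Hypothetical, deliberately skewed mini-corpus (illustrative only).
corpus = [
    "he is smart", "he is a doctor", "he is smart",
    "she is beautiful", "she is a homemaker", "she is beautiful",
]

def trait_counts(pronoun):
    # Count the final word of every sentence starting with the pronoun.
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if words[0] == pronoun:
            counts[words[-1]] += 1
    return counts

def complete(pronoun):
    # The "model" simply predicts the most frequent completion.
    return trait_counts(pronoun).most_common(1)[0][0]

print(complete("he"))   # -> smart
print(complete("she"))  # -> beautiful
```

The model faithfully reports a statistical fact *about the corpus*; whether that fact is also true of the world is a separate question.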


> nevertheless reflects the corpus of training data

I don't think "reflects the corpus of training data" entails "is therefore an objectively true fact". In fact, I think a lot of people who complain about AI gender stereotyping for jobs or personal traits would _exactly describe the problem_ as the AI "reflecting the corpus of training data".

I don't think anyone disagrees that the AI/ML learns things reflected in the corpus of training data.

My request was for an example where the AI/ML reflects an "objective truth" about reality and people then object to that output. "The AI/ML is reflecting an objective truth about the training data" fails to satisfy that request, because it falls short of demonstrating that the training data was an accurate and objective reflection of reality.


"Handsome lady/woman/girl" vs "Beautiful gentleman/man/boy". Try in Google Search.

We're just using different adjectives based on gender.


I think you're assuming that if it's in the data, it's "factually true" as the OP puts it. It doesn't work that way. There is such a thing as sampling error, for example.


