
> Youtube has recommended that I watch Nazi propaganda, because I had just recently watched videos debunking those exact propaganda films.

When you're interested in this field and watched documentaries about it, it is not too far off that you want to watch the source material.

Now, it depends a bit on whether we're talking about "modern" Nazi propaganda or 1930s Nazi propaganda - the latter can be pretty clearly categorized as historical interest, while the former might be a bit more problematic. I don't want to say that showing countering viewpoints is generally a bad thing, but in this case it's clearly crossing a line.

It also totally fits your analogy, however: if someone is very interested in politics, it makes sense that they might want to read the works of dictators and the like. Not to get radicalized, but to know what happened and to be able to prevent history from repeating itself.

I think the point you're trying to make is that the recommendation algorithm should check videos for violations, but then the GP's point still stands: if YT were able to determine the policy violation, the video would probably have been taken down long before it reached recommendations.




> Now, it depends a bit on whether we're talking about "modern" Nazi propaganda or 1930s Nazi propaganda - the latter can be pretty clearly categorized as historical interest, while the former might be a bit more problematic. I don't want to say that showing countering viewpoints is generally a bad thing, but in this case it's clearly crossing a line.

In this case, it was modern Nazi propaganda. This particular incident was a few years ago, but the title implied that the video would go through the history of a particular dogwhistle. I naively assumed that the history of that dogwhistle would be used as an example of the early steps in the dehumanization of Jewish people, how to recognize those early signs of racial hatred, and what can be done to best combat that hatred. Instead, from the outline in the first 30 seconds of the video, the speaker went through different atrocities committed and lamented how limited each atrocity was. It was pure Nazi propaganda, disguised as a lecture, and should not have been recommended to anyone.

A human could have recognized it as such, had a human been within the decision-making loop for recommendations.

> If YT were able to determine the policy violation, the video would probably have been taken down long before it reached recommendations.

I think systems tend to proceed based on the incentives and rules that are set up within them. YouTube's recommendations are designed to increase engagement, regardless of societal cost. I agree that if a human had seen the video, it would have been taken down. But that will rarely be the case, because there are minimal incentives for having good recommendations, as compared to the incentives for having engaging recommendations.

Much of this is due to the push for automating everything, even if that automation results in poor results. I am of the opinion that if something cannot be done correctly at large scale, then it ought to be done only at small scale.



