What do you expect? It's not possible to write an algorithm that won't misfire at YouTube's scale. If the misfires aren't intentional, I fail to see what the big deal is.
It's an adversarial proceeding where both sides get to present their case and a takedown has to be for a violation of a law that complies with the First Amendment.
Notice that this is different from what you're promoting. Search results are sorted by an algorithm; that's unavoidable, and excessively simple algorithms like "alphabetical order" are useless. Ranking is a genuinely hard problem, and the best solution is probably multiple independent search engines with different algorithms.
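To make that concrete, here's a toy sketch (Python, with invented documents and a crude term-frequency score, nothing like a real engine) of why alphabetical order fails as ranking and why relevance scoring is where the hard problem lives:

```python
# Toy illustration: alphabetical "ranking" vs. a crude relevance score.
# Documents and scoring are invented for this example.
docs = [
    "Zebra migration patterns explained",
    "A beginner's guide to zebras",
    "Cooking pasta in ten minutes",
]

def alphabetical(query, documents):
    # Ignores the query entirely -- useless as a search ranking.
    return sorted(documents)

def term_frequency(query, documents):
    # Ranks by raw query-term counts; real engines combine far more
    # signals, which is exactly why ranking is a hard problem.
    terms = query.lower().split()
    return sorted(documents,
                  key=lambda d: sum(d.lower().count(t) for t in terms),
                  reverse=True)

print(alphabetical("zebra", docs))    # pasta sorts above the zebra article
print(term_frequency("zebra", docs))  # zebra documents float to the top
```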
But removing the thing altogether? Court order or GTFO.
> It's an adversarial proceeding where both sides get to present their case and a takedown has to be for a violation of a law that complies with the First Amendment.
Why should this be the standard by which YouTube can remove videos? Why shouldn't they be legally allowed to remove content via algorithms even if it creates a bad user experience?
Because they're de facto not just a public space but a vital part of the public discourse. Court-like processes are the appropriate standard for decisions about the use of something that affects so many people.
Could you explain what you mean by "public space"?
> vital part of the public discourse
I don't agree. In my view, YouTube is one of the lowest forms of public discourse: its fundamental purpose is to generate ad revenue, and the nature of the content hosted there reflects this pernicious incentive. Yes, YouTube does have some great things on it, but it's just one tiny facet of "the public discourse".
> Court-like processes are the appropriate standard for decisions about the use of something that affects so many people.
Can you quantify "affects" and "so many people"? YouTube is mostly an entertainment triviality, even (especially) in the realm of politics. I don't see how videos being removed from the site is important.
> In my view, YouTube is one of the lowest forms of public discourse: its fundamental purpose is to generate ad revenue, and the nature of the content hosted there reflects this pernicious incentive.
You could say the same for all media. Television is advertising-driven, radio is advertising-driven, even the most respectable of literary magazines is advertising-driven. Most content is pandering to someone or other.
> Can you quantify "affects" and "so many people"? YouTube is mostly an entertainment triviality, even (especially) in the realm of politics
Again, you could (and many would) say the same for television or what have you. It would still be corrosive to society if a single entity controlled the majority of television, or the majority of radio, or... And that's much the situation YouTube is in.
> You could say the same for all media. Television is advertising-driven, radio is advertising-driven, even the most respectable of literary magazines is advertising-driven. Most content is pandering to someone or other.
I don't necessarily disagree, but what's your point?
> It would still be corrosive to society if a single entity controlled the majority of television, or the majority of radio, or... And that's much the situation YouTube is in.
This is already the case for television and radio. Additionally, I don't think you've demonstrated that YouTube's video moderation policy is "corrosive to society".
You can refuse to remove content without a court order even if you're legally allowed to remove it without one. Why should they choose to create a bad user experience by removing content via algorithms? They could just not.
> Why should they choose to create a bad user experience by removing content via algorithms
Removing obscene, explicit, and even illegal content is only practical at YouTube's scale using algorithms; the idea that they should just abandon algorithmic removal of videos entirely is not a realistic suggestion.
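For concreteness, here's a minimal sketch of what threshold-based algorithmic removal might look like. The classifier, scores, and thresholds are all invented for illustration; this is not YouTube's actual pipeline:

```python
# Hypothetical moderation gate: the model, scores, and thresholds are
# illustrative assumptions, not a real API or real operating numbers.
REMOVE_THRESHOLD = 0.98   # act automatically only when very confident
REVIEW_THRESHOLD = 0.80   # below that, route to a human queue

def moderate(video, classifier) -> str:
    score = classifier.predict(video)  # assumed: P(policy violation)
    if score >= REMOVE_THRESHOLD:
        return "remove"        # automatic takedown; misfires live here
    if score >= REVIEW_THRESHOLD:
        return "human_review"  # scale forces triage, not full review
    return "keep"
```

The two thresholds are the whole point: at this scale, human review is the scarce resource, so triage has to be automated, and the automatic band is exactly where the misfires come from.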
Why do algorithms get a free pass? We generally don't allow those kinds of cop-outs in other areas of life. If a physical tool causes harm, it's a pretty weak defense to say its effects were too complicated to understand in detail.
What do you mean? YouTube successfully serves billions of users every month; the users who suffer from the fraction of videos temporarily removed by an overfit model are a rounding error, and the results are essentially harmless. We regularly accept much higher rates of catastrophic failure for things that are actually harmful, e.g. traffic accident rates or medical error rates.
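As a back-of-envelope illustration (every number below is an assumption chosen only to show the scale argument, not a real figure):

```python
# Back-of-envelope only; every input is an assumption, not a real figure.
monthly_users = 2_000_000_000   # "billions of users every month"
videos_screened = 500_000_000   # assumed videos screened per month
false_positive_rate = 0.001     # assume the model misfires on 0.1%

wrong_takedowns = videos_screened * false_positive_rate
print(f"{wrong_takedowns:,.0f} wrongful takedowns")        # 500,000
print(f"{wrong_takedowns / monthly_users:.3%} of users, "
      "even if every misfire hits a distinct user")        # 0.025%
```

Under those assumptions the affected share of users is tiny, which is the "rounding error" claim in numbers.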
I'm not exactly sure what you're getting at. Of course the creators of YouTube are responsible for the behavior of YouTube. I'm just confused why a forum for programmers finds it so hard to accept that the program sometimes does things that the creators do not intend.