YouTube algorithm recommends videos that violate its own policies: study (foundation.mozilla.org)
263 points by Liriel on July 7, 2021 | 208 comments



Here is a link to the original announcement by mozilla, which is a better source (https://foundation.mozilla.org/en/blog/mozilla-investigation...) (@dang should this be updated?)

Some insights from the report:

* The study uses donated data from mozilla's browser plugin [1]. This means it almost surely has self-selection bias and does not offer representative usage data. But that's a fair tradeoff to get realistic measurements of recommendations.

* The focus is on harmful content, so we don't know what share of people's total exposure this represents. (But there are a couple of representative studies out there.)

* Out of all reported videos, 12% were considered actually harmful. That's not a terrible rate in my opinion.

* 70% of harmful videos came from recommendations - that's why the authors demand changes to the YT recommender system.

* Data quality: Reports came from ~37k users of the browser extension, but only ~1600 submitted a report. Total N of reports is ~3300, so pretty low. The actual analysis was performed on a sample of ~1100 out of these (why?). Harmfulness was analyzed by student assistants which is ok.

* Non-english languages seem to have higher prevalence of harmful videos (which is to be expected given the attention and resources spent).

[1] https://foundation.mozilla.org/en/campaigns/regrets-reporter...


It certainly should, since https://techcrunch.com/2021/07/07/youtubes-recommender-ai-st... (the submitted URL) does nothing but copy the Mozilla report and sex it up with gaudy language. We've changed it now.

Submitters: please read the site guidelines. Note this one: "Please submit the original source. If a post reports on something found on another site, submit the latter."

https://news.ycombinator.com/newsguidelines.html


The problem is that the study's design makes it impossible to measure the magnitude of the issue. Any recommendation system operating on the scale of YouTube's is going to have a non-zero error rate, and this study effectively just confirmed this known fact. The relative uselessness of the design makes me think this is more about ammo for advocacy efforts than genuine interest in studying this issue.

The question I'd be interested in is: how frequently does a typical YouTube viewer have actually harmful content recommended to them? Looks like it's pretty rare, if even the self-selected group that was interested in this effort and reported harmful content saw an average of (0.12 * 3300)/1600 = 0.2475 harmful videos over the entire duration of the study. And that's a very generous upper bound, assuming that all of the people who didn't submit reports just forgot about the extension rather than failing to see any potentially harmful content.
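For anyone who wants to sanity-check that back-of-the-envelope figure, here's the same arithmetic spelled out; it only uses the numbers quoted from the report above, nothing else:

    // Upper bound on harmful recommendations per participant, using only the
    // figures quoted above: ~3300 reports, ~12% judged actually harmful,
    // ~1600 users who submitted at least one report.
    const totalReports = 3300;
    const harmfulShare = 0.12;
    const reportingUsers = 1600;

    // Harmful videos per reporting user over the whole study period. Spreading
    // the same total over all ~37k extension users instead would push the
    // per-user average far lower still.
    const harmfulPerReporter = (harmfulShare * totalReports) / reportingUsers;
    console.log(harmfulPerReporter.toFixed(4)); // ~0.2475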


Right, that's the issue here: We don't get any meaningful base rate.

But the problem is: it's incredibly hard to get representative measurements of personalized recommendations. How would you collect those? You'd need a representative sample in the upper four digits (costing five to six figures), and you'd have to track their browsing, including on mobile phones.

This is expensive and technically difficult, because you need to essentially MITM the connection (SNI doesn't give you the videos, only the domain).

Of course, Youtube knows the precise extent of the issue, but they can't/won't/don't give precise data to anyone (problematic but understandable, given that every totalitarian government in the world would immediately send a wishlist).


YT started reporting Violative View rate a few months ago: https://blog.youtube/inside-youtube/building-greater-transpa...


It would be more expensive, but I don't think it would be infeasible for an organization like Mozilla. Focus just on the desktop use to start with (extracting recommendations from the YouTube app would likely be challenging) and pay a representative set of users to install a browser extension that records the videos they are recommended. Then go through some subset of the videos and evaluate if they are harmful or not. It would definitely be more work, but at least that would give some useful new information on this issue, rather than confirming what was previously known.
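For illustration only, here is a rough sketch of the kind of content script such a recording extension might use. The DOM selectors and the study endpoint are my assumptions, not anything Mozilla ships or YouTube documents, and they would need constant maintenance as the markup changes:

    // content-script.ts: record which videos the sidebar recommends on a watch page.
    // STUDY_ENDPOINT is a hypothetical placeholder for the study's backend.
    const STUDY_ENDPOINT = "https://example.org/api/recommendations";

    function collectRecommendedIds(): string[] {
      // Assumption: recommendations are rendered as <ytd-compact-video-renderer> elements.
      const anchors = document.querySelectorAll<HTMLAnchorElement>(
        "ytd-compact-video-renderer a#thumbnail"
      );
      const ids: string[] = [];
      anchors.forEach((a) => {
        const id = new URL(a.href, location.origin).searchParams.get("v");
        if (id) ids.push(id);
      });
      return ids;
    }

    // Send the currently watched video plus its recommendations to the backend.
    // A real study would also need consent handling, participant IDs and retries.
    function report(): void {
      const watched = new URL(location.href).searchParams.get("v");
      const recommended = collectRecommendedIds();
      if (!watched || recommended.length === 0) return;
      void fetch(STUDY_ENDPOINT, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ watched, recommended, ts: Date.now() }),
      });
    }

    // YouTube is a single-page app, so watch for URL changes rather than page loads,
    // and give the recommendation sidebar a few seconds to render.
    let lastUrl = location.href;
    setInterval(() => {
      if (location.href !== lastUrl) {
        lastUrl = location.href;
        setTimeout(report, 5000);
      }
    }, 1000);

The recording part really is the easy bit; the hard and expensive parts are recruiting a sample that is actually representative, handling consent and privacy, and paying humans to evaluate whatever gets collected.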


Well, I've done some of these things in other contexts.

Market rate compensation for participation in such studies is ~$1 per day. You'd need at least 10'000 users to have meaningful confidence intervals. Say you're tracking for one month, then you pay 300k for the sample alone.

If you have a single multi-purpose researcher doing study design, programming, logistics, analysis, outreach and all, that's perhaps 50k-100k a year (highly dependent on country and institution).

Next you hire a bunch of people to do the content analysis. Either Mechanical Turk with a 2-3 factor overprovisioning, or actually trained people who are 2-3 times as expensive. At roughly three minutes per video, that's about 20 videos per hour, i.e. approximately 50 cents to 1 dollar (ballpark) per video.

Say you argue that the long tail is irrelevant, and you want to code the top 100k videos (danger, arbitrary cutoffs are bad), then that's another 100k.
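Putting those ballpark figures together (all of them are the rough estimates above, nothing more precise than that):

    // Rough budget for a representative recommendation study.
    const participants = 10_000;          // minimum for meaningful confidence intervals
    const usdPerParticipantPerDay = 1;    // market-rate compensation
    const studyDays = 30;                 // one month of tracking
    const sampleCost = participants * usdPerParticipantPerDay * studyDays;  // 300,000

    const researcherCost = 75_000;        // one multi-purpose researcher, middle of the 50k-100k range

    const videosToCode = 100_000;         // arbitrary cutoff at the top of the distribution
    const usdPerVideoCoded = 1;           // upper end of the 50-cents-to-1-dollar ballpark
    const codingCost = videosToCode * usdPerVideoCoded;                     // 100,000

    console.log(sampleCost + researcherCost + codingCost);                  // ~475,000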

That's half a million for something that isn't mozilla's core mission and is already being done by publicly funded researchers. Would be nice if they did it, but it sounds like a bad investment to me!


Looking at their examples of harmful content, about half are pro-Republican videos about the US 2020 election. If that is representative of their study, then it seems very much self-selected in both nationality and politics.

It should be noted that those who classified whether content was harmful or not were a team of 41 research assistants employed by the University of Exeter.


YouTube can't win with these people.

A few years ago there was a hit piece in the New York Times implying that if you watch YouTube videos of Milton Friedman or Ben Shapiro, the YouTube algorithm will decide you like right-wing content and gradually feed you more and more extreme right-wing content until you become a nazi. This was probably sensationalized and overstated, but there was this type of feedback loop in the algorithm.

One obvious corrective to this is to make sure the algorithm also recommends videos that don't already agree with the viewer's inferred biases. As a result, if you're somewhere left of center and the videos you choose to watch on YouTube are influenced by that viewpoint, YouTube will occasionally recommend videos of "Ben Shapiro DESTROYING libs with FACTS and LOGIC" just so you're exposed to the other side.


Well, not sure what to make of that 70%, since 70% of all YouTube video watches are due to recommendations in the first place:

https://qz.com/1178125/youtubes-recommendations-drive-70-of-...


That's a good summary. I was trying to get a handle on what 'harmful' could mean, and why this isn't higher than 70% from recommendations. What are the other sources, search results, word-of-mouth, etc? Anyway this part cleared up the context.

> New research published today by Mozilla backs that notion up, suggesting YouTube’s AI continues to puff up piles of ‘bottom-feeding’/low grade/divisive/disinforming content — stuff that tries to grab eyeballs by [...]

It seems the best way to play this is to not interact with (downvote, comment on) content you don't want to see, and to stop playing such content as soon as you realize what it is.


> browser plugin

Didn't Mozilla drop plugin support completely in Jan?


They most likely meant a web extension (or "add-on", as Firefox refers to it), not an NPAPI plugin, which is what was removed in January.


Why would someone expect the recommendation algorithm to really know anything regarding whether videos violate their own policies? If the platform already knew the videos violated their policies, presumably they'd already be blocked/removed in the first place. It's not magic.


I think you're missing the point. According to TFA, the YouTube algorithm likes content that breaks their policies very much:

    Recommended videos were 40% more likely to be regretted than videos searched for.

    In 43.6% of cases where Mozilla had data about videos a volunteer watched before a regret, the recommendation was completely unrelated to the previous videos that the volunteer watched.

    YouTube Regrets tend to perform extremely well on the platform, with reported videos acquiring 70% more views per day than other videos watched by volunteers.

So it's not that the algorithm is just "blind" to content that breaks their policies; it seems to prefer such content over other content.


Reading the report, this wasn't actually a random sampling of typical users. They used opt-in reporting from a self-selected group of volunteers who were interested in reporting "regrettable" YouTube recommendations.

If some of these users were deliberately scouring YouTube for objectionable content, it follows that their recommendations would start matching similarly objectionable content for them.

After reading the report, I don't see how any of this is representative of a typical user's YouTube experience. It certainly doesn't resemble my own experience at all.


There are some pretty obvious explanations to all of these points:

* People don't search for content they don't want to watch.

* Videos that a specific person doesn't want to watch are often unrelated to videos that the same person does want to watch.

* If a video receives a high number of views, that is probably a signal to the algorithm that the video is popular enough to recommend more often.

If anything, those numbers are lower than what I would expect.


> it seems to prefer them over other content.

Isn't it likely that it prefers them because they're more likely to be fully watched/upvoted/commented on/otherwise engaging? Presumably, that fact together with objections to the content are the reason to have a policy...


> Why would someone expect the recommendation algorithm to really know anything regarding whether videos violate their own policies?

There's a big push to blame recommendation engines in the wake of social media misinformation about elections and COVID-19. There's a growing perception, verging on conspiracy theory, that big social media companies are deliberately tuning their recommendation engines to promote this misinformation while publicly claiming to do the opposite.

This Mozilla report feels like a strangely clumsy attack piece. They relied on self-selected volunteers to proactively report videos that they called "regrets" using a Mozilla plugin. Users were given little direction about what to report as a "regret" but according to the Mozilla report they marked as many as 1 in 8 videos as "should not be on YouTube".

I have no idea how these self-reporters were going down a YouTube recommendations rabbit hole that ended up with 1 in 8 recommendations being policy-breaking or inappropriate, but that couldn't be farther from my own experience. I stream YouTube in the background while working on projects around the house and I can't recall the last time I was recommended a strange video that was somehow policy-breaking.

The YouTube algorithm is clearly very self-reinforcing. I suppose if the volunteers were actively searching YouTube for "regret" videos that broke policy then it's likely that the recommendation engine would be triggered into serving up more, similar videos. That's obviously not a valid representation of the typical user experience, though.


> There's a growing perception, verging on conspiracy theory, that big social media companies are deliberately tuning their recommendation engines to promote this misinformation while publicly claiming to do the opposite.

I don't think that this is the perception of many people. I think that many people think (and I think so too) that social networks maximise for engagement quite deliberately. Content that is very engaging can be very good (amazing educational resources) or relatively harmless (cat videos). But, due to human nature, "engaging" videos also include divisive content, propaganda, conspiracy theories, group-think or even just stuff that is highly addictive.

The complaint, therefore, is not that YouTube etc. are actively pushing conspiracy theories—it wouldn't be in their interest to do so necessarily. The issue is that, by trying to maximise engagement, they very deliberately push all the subconscious buttons that may make us behave in irrational ways without caring for the psychological and social cost that this implies, and that any attempt of those networks to curtail the spread of problematic content will never be adequate as long as those underlying mechanisms are still the same.


>I stream YouTube in the background while working on projects around the house and I can't recall the last time I was recommended a strange video that was somehow policy-breaking.

I have a fairly curated stream of videos due to years of Youtube use. I never see objectionable content in my recommended segment. I mostly have videos related to my career area and a few creators who focus on games I play.

However, I've logged in on other computers, or via VPNs on different devices, and found that if I just started with a random folksy video (let's say a bandsaw restoration), I would be recommended objectionable content (overtly anti-black or anti-Semitic material) very rapidly. What's more, this random walk would then seed my recommended videos with a high proportion of similar videos. I'm not sure how much directed viewing would need to occur to re-adjust the recommendations. This was such a large issue that at certain offices I worked at, I'd pre-seed my YouTube history with a few 'chill cafe background music' videos first, to prevent a recurrence of the time when I walked away from my computer after logging onto a new terminal and opening up what I figured was some decent background noise, then came back to a pro-Hitler historical documentary.

This may be because of my location, demographic, activity on the network or the selection of initial videos. My experience is nothing more than an anecdote, but it leads me to think that extreme content does well with respect to engagement, and accordingly is promoted by an engagement-focused process. Is this correct? I don't know. I just know it isn't far-fetched.


Say I go over to Fred, and ask him for a book recommendation. I've known Fred for a while, and he's given me good book recommendations in the past, so I usually listen to what he says. Except this time, Fred says that the book I'm most likely to enjoy is Mein Kampf. I'm flabbergasted, because this doesn't sound like the Fred I know. Fred says that he has no idea what the book is about, but there's a small number of people who vigorously recommend it, and they're just so gosh-darned enthusiastic about it that now he's recommending it to anybody with even the slightest amount of interest in politics. He knows that I just read Elie Wiesel's Night, so to Fred it seemed right up my alley.

Swap out "Youtube" for "Fred", and maybe that helps to show the issue. I'm not making an exaggeration here. Youtube has recommended that I watch Nazi propaganda, because I had just recently watched videos debunking those exact propaganda films.

When "recommendations" become "recommendation engines", and the human steps out of the loop, the recommender still has a moral responsibility for the things they recommend. Without a human in the loop, it's a lot easier to close one's eyes to that responsibility. Maybe the solution is to go back to things that don't scale, and keep humans in the loop. Maybe the solution is to spend just as much effort on the quality of the recommendations as the addictiveness of the recommendations. But the current state is not sustainable.


I think your analogy supports the GP's point. The recommendation engine doesn't understand that the recommendation is a violation; it makes a heuristic error because there is an insurmountable mountain of content to classify and it lacks the sophistication to accurately do so. It's certainly logical to hold YouTube accountable for the content it recommends, but it's also naive to expect it to achieve the impossible.


> Fred says that he has no idea what the book is about…

And that’s the thing people don’t understand when they blame YouTube. If Fred is YouTube, and Fred doesn’t know that Mein Kampf is propaganda, how is YouTube supposed to remove it?

You’re just reconfirming OP’s point: YouTube doesn’t know that a video violates its policies.


Fred has neglected his duty to know what he is recommending, in the same way that YouTube has neglected their duty. The key part I was trying to get across with the analogy wasn't just that Fred didn't know what he was recommending, but that he didn't do his due diligence before giving a recommendation.


I feel everyone replying to my comment is missing the overall picture.

YouTube has content guidelines... that doesn't mean they are obligated to enforce them, it only means they have easy reasons to moderate content where they feel like it. Their only real interest is profit, generally through showing you ads (by keeping you glued to the screen, sometimes by showing you controversial content) and by harvesting data. I'm not saying that's "right" in a moral or ethical way, it is simply the truth. If you recognize that, then it doesn't matter that they are recommending videos that ostensibly violate their own terms, and arguing otherwise seems to make you just willfully ignorant. YouTube has no incentive to incorporate "regrettability flags" into their algorithm, except to the degree that it serves them.

I'd go further to say that YouTube probably could do a better, more thorough job of cleansing content on their platform, without any additional cost. But having a certain amount of content that is controversial, which panders to a vocal minority, etc actually serves their platform. They use the 1st amendment sometimes to justify this, but they actually don't care one bit about any of that. All they care about is money, and at this scale the dynamics get kinda interesting.


So, your original post was a question, and I answered it.

But maybe your question was rhetorical and masked an overall point you are now trying to clarify. But I still don't understand the point you're making!

Are you trying to absolve YouTube of any moral responsibility, while personifying them with agency ("they have reasons", "they feel like it", "the degree that serves them", "they care") and granting them rights ("1st amendment")?

As for what really motivates corporations - I believe survival, not profit, is the essential goal. Profit is just one strategy - but another is not poisoning its own environment.


> Youtube has recommended that I watch Nazi propaganda, because I had just recently watched videos debunking those exact propaganda films.

When you're interested in this field and have watched documentaries about it, it is not too far-fetched that you'd want to watch the source material.

Now, it depends a bit on whether we're talking about "modern" Nazi propaganda or 1930s Nazi propaganda - the latter can be pretty clearly categorized as historical interest, while the former might be a bit more problematic. I don't want to say that showing countering viewpoints is generally a bad thing, but in this case it's clearly crossing a line.

It, however, also totally fits in your analogy: If someone is very interested in politics, it makes sense for them to possibly want to read the works of dictators and the like. Not to get radicalized, but to know what happened and be able to prevent history from repeating itself.

I think the point you're trying to make is that the recommendation algorithm should check videos for violations, but then GP's point still stands: if YT were able to determine the policy violation, the video would probably already have been taken down long before it reached recommendations.


> Now, it depends a bit on whether we're talking about "modern" Nazi propaganda or 1930s Nazi propaganda - the latter can be pretty clearly categorized as historical interest, while the former might be a bit more problematic. I don't want to say that showing countering viewpoints is generally a bad thing, but in this case it's clearly crossing a line.

In this case, it was modern Nazi propaganda. This particular incident was a few years ago, but the title implied that it would be going through the history of a particular dogwhistle. I naively assumed that the history of that dogwhistle would be used as an example of the early steps for de-humanization of Jewish people, how to recognize those early signs of racial hatred, and what can be done to best combat that hatred. Instead, from the outline in the first 30 seconds of the video, the speaker was going through different atrocities committed, and lamenting how limited each atrocity was. It was pure Nazi propaganda, disguised as a lecture, and should not have been recommended to anyone.

A human could have recognized it as such, had a human been within the decision-making loop for recommendations.

> If YT were able to determine the policy violation, the video would probably already have been taken down long before it reached recommendations.

I think systems tend to proceed based on the incentives and rules that are set up within that system. Youtube's recommendations are designed to increase engagement, regardless of societal cost. I agree that if a human had seen the video, it would have been taken down. But that will rarely be the case, because there are minimal incentives for having good recommendations, as compared to the incentives for having engaging recommendations.

Much of this is due to the push for automating everything, even if that automation results in poor results. I am of the opinion that if something cannot be done correctly at large scale, then it ought to be done only at small scale.


It seems like a reasonable assumption that, as of 2021, the majority of people who read Mein Kampf or watch Triumph des Willens are not Nazis, but rather people who are curious about the history of Nazi Germany for normal, good-faith reasons.

If your personal attitude towards the Nazis is "never again", as it should be, you ought to be interested in how it happened the first time. Part of how it happened the first time is that a lot of Germans actually voted for them and many of the others just sort of went along with the program. Why would they do such a thing? How do I make sure I'm not fooled the same way they were? These are some haunting questions.

I, myself, have watched this sort of primary source material on YouTube, e.g. subtitled videos of some of Hitler's speeches. You might assume that Hitler spent most of his speeches ranting and raving, but he spent a lot of time hiding behind the mask of a responsible politician promising to reduce unemployment. I think watching that stuff helps to answer those questions. And, as I recall, I don't think I actually went out of my way to search for it, either.


Sure, that's plausible, but it seems equally likely that Youtube _can_ detect videos that violate their policy and yet chooses to wait until they get some threshold of reports/media attention because those videos are also very engaging. Having the policy allows them to react when they get pressure without impacting the bottom line.


I wouldn't say equally likely. They both have a non-zero probability of being true but a pattern of malfeasance such as you describe would be detected pretty quickly. Imagine this or thousands of similar conversations: "Brad, have you noticed that our porn videos get removed only when they get over a certain view count?" "I had noticed that, Bob! I wonder if that observation would be interesting to the media?!"

It also has minimal upside because YouTube makes more money on high view count videos by far, so removing only when a certain level of fame has hit a video cuts off the part that is economically interesting to YouTube. The tail is a loss leader.

Your scenario is not very plausible to me.


"Brad, we can't deploy your conspiracy theory classifier because it would reduce the number of high value viewer engagements by 2.7%. Let's just stick with the version that depends on accumulating reports. It's not a big deal because we can always intervene if something embarrassing slips through."


I'm unaware of any evidence consistent with this version of how things work there. By all accounts, YouTube are unlenient with bans and blacklistings. Can you please cite a source?


The original comment I was replying to was speculating about how they imagined youtube worked without evidence. All I did was provide an alternate explanation. I wouldn't even go so far as to suggest it's a bad thing that social networks avoid using more aggressive classifiers, just that I'm skeptical of the "we don't have the ability" explanation.

By all accounts YouTube are arbitrary with bans and blockings. That's not the same as unlenient.

The standard example of the phenomenon I'm describing is from Twitter. They successfully keep nazi content out of German twitter, but not the rest of the world. That suggests that Twitter's moderation is partially constrained by their own policies, not just technical ability. I don't think this is a far fetched understanding of how any social network works.

https://www.snopes.com/fact-check/twitter-germany-nazis/


One might expect a publisher to pay particular attention to the content they promote.


Too bad they exist in a quantum superposition of publisher and platform states simultaneously, while also being neither. Pretty nice gig if you can get it!


Platform and publisher are meaningless terms born of latter day propaganda. They are about as meaningful legally as asking if they are a unicorn or a leprechaun.


I get recommendations of people that literally make me angry. Like people I totally can't stand, and because they make kinda similar content to people I watch, they get recommended all the fucking time. I tried disliking (which is not fair to begin with, because I would not watch them if my TV didn't simply start playing them) but it changed nothing.

So I am stuck with getting the same few shitty YouTubers pulled in my autoplay every single day.

I tried making a new account from scratch. But nope, recommendations seem to be based on the channels you follow, with no personal preferences pulled in.


> I tried disliking

Don't do that! The algorithm counts that as an engagement action.

What you need to do is click on the three dots symbol next to the video title and select "don't recommend this channel". This absolutely fixes it. You may have to do this a lot. There is also a general "don't recommend this" option that, anecdotally, doesn't work as well.

Also, disable autoplay. It doesn't seem like YT can distinguish between clicking on a video and it just playing on autoqueue.

The YT algorithm is not fundamentally different from the other social media algorithms out there: it tries to serve you controversial content that "engages" you. You are most likely to be "engaged" by content you either directly disagree with, or by second-source channels that report on content you disagree with. As a fallback, the algorithm will also try to serve you content based on your cohorts, which can in and of itself be pretty radical and disagreeable.

Get a YT Premium account if you can, sending a very small message that you vote against ads and all the algorithmic baggage that comes with them.


> Get a YT Premium account if you can, sending a very small message that you vote against ads and all the algorithmic baggage that comes with them.

YT premium users are still presented with the same recommendations as normal users, are they not? That's kinda my problem with the service. Sure you don't actually see the ads, but you're still using an advertising-based platform with all the user-hostile design that model encourages.


Oh absolutely, I'm recommending this because of the second part of my sentence above. I believe the ad model is bad for both consumers and companies, in ways that are not immediately obvious. Ad models tend to generate perverse incentives in my opinion. I also think it's important to signal a willingness to be a reasonable consumer, and to me it's also about ethics: I watch a lot of Youtube, I can afford to pay for YT Premium, and I run an adblocker.


I'm actually not sure that it works that way.

I watch a lot of YouTube content that's outright demonetized, and demonetization consistently correlates with recommendations and views plummeting as well. Yet this same content still shows up in my recommendations. My theory is that for free users, YouTube prefers to recommend videos that are monetized, because obviously they have a vested interest in doing so. For premium users, YouTube doesn't care about ad revenue--in fact, they may even have a slight incentive to recommend demonetized videos because then they don't have to pay a share of my premium subscription revenue to the creator. Instead, they're optimizing for keeping me interested enough to maintain my subscription.


I see this even with Netflix: the movies look like they are optimized for the longest viewing time, instead of also taking into account people who are older and willing to pay for shorter but well-written and well-produced content and storylines. The latest example was Brave New World: my girlfriend had also read the book, and even she wasn't able to explain the series to me, because it just didn't make sense (although the visuals were beautiful).


Yes, a view is a view and a dislike is engagement. If you really don't want to see people or content, use the triple dot menu and click 'not recommended' or 'don't recommend this channel' (which actually straight filters it out).


> Don't do that! The algorithm counts that as an engagement action.

Do you have a source for that? Though it does sound plausible in a world where "we want to keep people viewing our ads for as long as possible" is the _only_ goal of the developers, I find it hard to be _so_ cynical about Google that I believe that "let's show people more videos they have told us they dislike!" would get past the idea stage.


> they have told us they dislike!

If they feel passionate enough about it to watch it and then dislike it, the video will probably trigger other people into more engagement as well.

The opposite of like is not hate - it's apathy. You want to be apathetic towards clickbait. You want to be apathetic towards those sorts of videos.


> Do you have a source for that?

https://www.youtube.com/watch?v=wx9Jv5uxfac

Here you go. :)


The "don't recommend this" notably has two reasons on the "tell us more" dialog: "I've already seen this video", and "I don't like this video".

I often use the former, after watching stuff via mpv (chromium has tearing, but more notably, high enough CPU usage to annoy me with a half-broken fan that's quite loud).


A few years ago I completely stopped using a YouTube account and instead switched to the Android app NewPipe (I was primarily watching on my Android phone).

This has completely changed and improved the experience for me. NewPipe lets you create multiple feeds of groups of channels. So for example I can have feeds for podcasts, music and tech news.

This not only makes it easier to actually get to the content you want to consume at any given time without distracting yourself but also helps you get better recommendations in these grouped feeds (the recommendations still work).

NewPipe is free and open source and can be downloaded on F-Droid.


Thanks for that. Going to look into it. (In the hope it works on TV and also blocks ads)


Seconding newpipe, it's great if you just want to watch specific videos or channels. Should work on Android tv, unfortunately not available for LG WebOS TVs.


Third for newpipe. The modern internet is practically unusable without FLOSS.


I wish there was a build of newpipe that works on a computer rather than a mobile device.


I have tried to run it within anbox on my Pinephone devkit before, which runs linux. The app itself worked, although none of the videos would play. Maybe it has gotten better since, though.


I have the same problem. I watch a lot of science and history channels, but damn near every time I watch videos on World War II I start getting suggestions for crazy far-right channels. And I don't mean mainstream 'conservative' channels like Fox News, I mean crazy stuff like Nick Fuentes. Even if I tell it to not recommend a certain channel, it will just pop back into the recommendations after a few weeks. It would be different if the rest of the recommendations were good, but they're just crap too.


I am personally abusing the block feature for this. I haven't tested if it is actually true, but I began blocking channels that I am not interested in and haven't noticed anything being recommended yet from any channel I remember blocking.


Is this abusing the function? Block is for things that you never want to see. It sounds fairly appropriate.


That sounds like it could work. Pretty sure I never saw that function on my Android TV tho.

Kudos, gonna try!


I spend time blocking/curating videos in a desktop browser so that my Apple TV YouTube app sucks less from a recommendations perspective..

It's lame that it has to be done that way, but it's the best way I've been able to do it.


> It's lame that it has to be done that way, but it's the best way I've been able to do it.

How else do you think they can figure out what you hate vs. what you don't like but don't hate either?


Oh yeah I get that part.. what I mean is that it's lame that I have to sit at my desktop computer in a browser and do this manual work just so my Apple TV experience - which is where I primarily consume YouTube - doesn't give me garbage recommendations.


https://github.com/TeamNewPipe/NewPipe/

Turn off recommendations, comments, autoplay etc. I only see exactly who I've subscribed to and nothing more.


Not sure what client you're using, but on the web and mobile you can click/tap the kebab menu and select "Don't recommend this channel". Youtube has a nasty habit of recommending rage-inducing channels because they generate engagement. If "Don't recommend this channel" was a button on my keyboard it would be worn down to a nub by now.


I did this to Cocomelon as my daughters watched it a few times and the algorithm thought I wanted to watch nursery rhymes all day long.


Google just really hates their Android TV users. I realized I can simply block them on my phone, which actually works!


turn off autoplay to protect your recommendations.


Whatever happened to “let people be their own judge?” Or are we really that elitist, and do we really have so little faith in individuals' ability to think for themselves? Why is Mozilla, of all entities, seemingly seeking to influence YouTube, and to what end? Should we just abolish the idea of a free market of ideas? Is it bad to be radical? Are you automatically wrong if you are? And are YouTube's own policies flawless? Just some questions that come to mind.


>Or are we really that elitist, and do we really have so little faith in individuals' ability to think for themselves?

>Should we just abolish the idea of a free market of ideas?

Not all ideas are the same. The free market of ideas is a concept that promotes exploration and competition between ideas. However, some ideas are, on their face, intentionally manipulative. If we look up the 14 words or other rhetorical devices typically deployed in disinformation and propaganda efforts, we can identify a lot of soundly anti-social noise in the general pro-social marketplace of ideas.

Someone refusing to pay ad revenue dollars to monetize a flat earth, anti-vax, or white supremacist message isn't saying 'ideas are bad and we should not have them'. They're saying 'we explored these ideas, and I have judged them. They didn't win the competition'.

This IS the competition inherent in the marketplace of ideas. To remove this does not promote competition in the marketplace; it banally removes it, replacing it with a nihilist 'no idea is better than any other' position.


"Social justice" movements are also manipilative and are doing much more harm than good at this point, but companies and a lot of people keep pushing them. Should we start banning them too?


What concrete evidence do you have that social justice movements are "doing more harm than good" or that whatever harm they are doing is equivalent to that of flat earthers, anti-vaxxers and white supremacists?


Well, to be fair, these companies are only woke in some markets. This isn’t a global problem.

Show me the major corporation’s Middle East division with a pride logo. I’ll wait right here… skeleton.

So clearly the solution to overly woke social justice is just for everyone to become the one religion that no one is allowed to criticize and who is still “allowed” to have different opinions.

Modern solutions and problems and all that.

My plan has no downsides unless it does. You’re welcome /s.


Social justice movements are almost without exception extremely critical of countries like Saudi Arabia but despite this fantasy conjured up by conservatives that it's the woke crowd that shy away from criticism of Islamic theocracies, it's amusing how it's always the former that love multi-billion dollar arms deals with them (though to be clear, both neoliberals and neocons are guilty of this).

I guess hijabs in American streets are a larger problem than enabling the mass surveillance and extermination of undesirables in their eyes.


>"Social justice" movements are also manipilative and are doing much more harm than good at this point

Seems like a completely separate discussion, but I'd disagree emphatically with a premise that attempts to equate content that is misleading, false or intentionally promoting genocide with whatever you'd define as falling under the umbrella of social justice.


The "free market of ideas" sounds like a great ideal. But it gets really messy in practice. What should such a "marketplace" look like? I'm happy to have a free and vigorous exchange of ideas among my friends and me gathered around a dinner table; we're bound by mutual trust and respect to operate in good faith with one another, even in disagreement. How should we try to scale that to a global communications plaform? Is that even feasible?


It scales up just fine, but individuals have to take personal responsibility for their feelings.

As long as we allow folks to flip out and go crazy over bits of text, or even images, appearing on their screen that they don't like or are offended by, nothing will work.

You see, in your situation, if you don't like what somebody is saying you uninvite them from the table. But on the internet, folks demand the offender be removed from the internet -- this won't scale.

What nearly all social media needs to do is simply hide users or content from somebody when they "flag" or "report" it. This scales nicely.
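To make the "scales nicely" point concrete, here's a tiny sketch (mine, not any platform's actual code): a report only touches the reporter's own hidden sets, so filtering their feed is a constant-time check per item and no global moderation decision is needed.

    interface Post { id: string; authorId: string; }

    // Hiding is per-viewer state: reporting a post updates only the reporter's own sets.
    function report(hiddenPosts: Set<string>, mutedAuthors: Set<string>, post: Post): void {
      hiddenPosts.add(post.id);
      mutedAuthors.add(post.authorId);
    }

    // Rendering a feed then just skips anything in those sets; nobody else's feed changes.
    function visibleFeed(feed: Post[], hiddenPosts: Set<string>, mutedAuthors: Set<string>): Post[] {
      return feed.filter(p => !hiddenPosts.has(p.id) && !mutedAuthors.has(p.authorId));
    }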

But no! We demand they remove it from the internet and ban them from contributing anything further. This is akin to getting into an argument at the dinner table, being offended, and then killing the offender so they can't offend you again, versus just ending future interactions with this person. Yeah, you might see them at the store, and yeah, you might have to look away passing them on the street, but the majority of the content you don't like coming from them is removed from your life.

But that is not good enough, you want this person to not be able to share the ideas, topic, or crude joke you did not like with anybody else.

I don't know what it is about the internet, but folks seem to think they should have way more control over what others do than they normally would in the real world.


>I don't know what it is about the internet

It's the scale, speed, and reach that makes it a different beast. It's the difference between a party popper and a flashbang; similar principles, but completely different implications.

You're taking an overly simplified view of the power of internet speech. It's not that people are merely "offended." It's that an individual can be on the wrong end of an internet mob for which there's no accountability. By your standards, swatting is a perfectly acceptable outcome as long as the victim had the opportunity to mute their harassers. Likewise, poisoning someone's reputation and making them unemployable is fine too. It's just words, right?


> By your standards, swatting is a perfectly acceptable outcome as long as the victim had the opportunity to mute their harassers.

Folks don't get swatted via tweet, meme, or fb post. They have to pick up the phone and commit a real crime.

> Likewise, poisoning someone's reputation and making them unemployable is fine too. It's just words, right?

We have had laws against this since long before the internet. Maybe they need to be strengthened.

Edit:

We are clearly talking about two different cases. We have things that are already illegal, and should continue to be illegal. Your examples are both something covered by existing laws.

On the other hand, we have folks who want to ban things that are legal, simply because they are offended. I can name a few, but in this climate, it would not be wise.

Oh, look at that. Self moderation.


As somebody with less to lose, I guess it's my duty to take the fall for you. Examples are the idea that women are underrepresented in prestigious jobs because they're biologically determined to not want to do them or not be as capable of doing them. Another is that blacks are poor because they're genetically inferior at doing things that make money (ie. professional work).


Swatting is only a problem because the police are incompetent. They absolutely should never shoot somebody when the only information that they're dangerous is an anonymous tip. It should obviously still be a crime, but one against the police, not the target.

As for becoming unemployable. If there was truly free speech, nearly everyone would get it and it would be obviously unreliable so employers would disregard that signal. Even if that didn't work, stricter labor laws might prohibit hiring discrimination based on internet rumors, same as we do for race/sex/etc. discrimination.


I think it's fundamentally a fear that political ideas you don't like will become popular and end up having actual power. If they spread enough, people might vote for them, get laws changed for them, etc. or they might just change their culture in a large scale way that affects your daily life because these "wrong" people might be all around you.

Of course that's pure arrogance and complete disregard for the concept of democracy and respect for other people's ability to think.


> change their culture in a large scale way that affects your daily life because these "wrong" people might be all around you.

and the last time that happened in the USA, it led to the Civil War.


I think that what's different about the Internet is scale. On the Internet, you can find a group of people to validate whatever fringe theory you want - and with social networks, you can do this _extremely_ easily. When you do find these people, you treat it as validation ("See, I'm not the only one who thinks this!").

For most of human history, if you had a fringe idea, your community would be extremely skeptical until you came up with something to convince them - or, more likely, concluded "huh, no-one else thinks this, I must be wrong" and veered back into the mainstream.

Even in the days _just before_ the internet, with pervasive newspapers and TV, your 'circle of validation' would've been _much_ smaller.


I think these are great questions, because there is a way it touches on fundamental values such as the freedom of speech.

Human attention/willpower/judgement etc. are exhaustible resources, and they don't scale like a machine does. Even if the machine is conveying mere human utterances and likeness, the recommendation itself is ultimately the "speech" of a machine, which was conjured by thousands of highly paid engineers and data scientists to do the "magic" of keeping you talking to it. The attractiveness of a machine's recommendations can outlast the willpower of the human on the receiving end, especially considering the ubiquitousness of the machine, be it on your TV, phone, car, videos, music, news or social media.

So I think it is possible to frame the reaction you see as a defense against building machines that abuse organic human attention. Going back to free speech: there would be no defending the machine's right to free speech if it could denial-of-service the recipients' judgement with all the engagement tricks in the book, at scale.

And the spirit of the objection is not about the radicalness of the content, but rather the frequency with which such content is brought to the user by the machine. In a free marketplace of ideas among humans, such content would a) have a lower incidence than on YouTube, b) have a much different propagation/decay function than on YouTube, and c) allow you to give direct, high-fidelity feedback to those humans. People's ability to collectively filter out bullshit in a real social setting is much stronger than in our 1:1 encounters with YouTube. There is no talking back to YouTube, and there are no friends with whom you can vet the ideas you encounter on YouTube.


The problem is that many people, especially advertisers, don't want to be part of a marketplace with ideas that will cause great backlash. Parents don't want their kids in a marketplace with ideas they don't want to expose them to.

The principles of a marketplace also apply to marketplaces.


This explains why some videos are demonetized but not why they're banned.


Demonetizing them may not be sufficient for the parents or advertisers, particularly the former.


> especially advertisers

Not true. Advertisers wouldn't give a damn if they themselves weren't targeted.


YouTube is being its own judge of what is on its platform. And Mozilla is participating in the "market of ideas" by holding them accountable for those decisions.


The feedback loop the recommendation algorithm generates is really powerful. I personally have to treat youtube cautiously the way I treat any addictive drug otherwise I end up watching it long after I would have liked to.


YouTube has never been a free market as long as they've had editorial discretion at removing/promoting any videos they like. An unbiased YouTube would be a directory of videos organized by topic, like old Yahoo categories, that was shown exactly the same to every visitor. We don't have free, unfettered activity in any domain of life because the fabric of society would break down. Then you get Somalia.


Like it or not there are young children watching YouTube. YouTube doesn’t give any useful way of controlling the content that children see beyond its default recommendation engine. You can’t as a user easily block a channel (just report it, or dislike, and hope the recommendation engine notices).

So, if you accept that children should at least be somewhat monitored/guided in what they watch, this is a problem.

Of course, someone might say that parents should be monitoring the content anyway. That may be true, but doesn’t always happen in practice. And it’s a constant fight against the recommendation engine in any case.

YouTube has become the default way to share video content online, there is no way to “opt out” of the recommendation engine, and they do their best to push low value content at children. So in my view, yes this is a problem.


> YouTube doesn’t give any useful way of controlling the content that children see beyond its default recommendation engine.

Youtube has Youtube Kids, with two different content sets for younger and older kids.

> You can’t as a user easily block a channel (just report it, or dislike, and hope the recommendation engine notices).

You can block videos and channels in Youtube Kids: https://support.google.com/youtubekids/answer/7178746?hl=en


YouTube kids is kind of clearly designed for preschool children (5 or less), and you lose a lot of functionality.

It also blocks a huge amount of content. I just tried searching for biotechnology-related content and most of it is not available, I'd guess because it's not been marked as "targeted at children".

There’s no way of saving or sharing content.

So, no it’s not a great solution in my opinion. It would be better to introduce these features into the main app.


> YouTube kids is kind of clearly designed for preschool children (5 or less),

Since their two age settings are for under 7 and 7-12, that's manifestly false.

It's for children as protected under COPPA, to wit, under 13.

> There’s no way of saving or sharing content

Also because COPPA (at least, the sharing part.)


So they’re more interested in compliance rather than building something useful.

What I’m saying, is that they should give everyone the ability to more effectively control content in the main app, website.


> You can’t as a user easily block a channel (just report it, or dislike, and hope the recommendation engine notices)

You absolutely can! Go on the three dot menu next to a video title in the recommendations and click on "do not recommend videos from this channel"


Right that’s true. To be honest I’m not sure that always factors into recommendation. I also can’t block classes of content (like short videos). And I can’t go to a video and just set is it or the channel as blocked.

“don’t recommend” and “block” are also different concepts. I’d like the ability to block content so it doesn’t appear at all if searched for, linked etc.

The fact is, YouTube could easily provide much of this functionality. Particularly the ability to block short content. But they’re interested in pushing this content on users.


Youtube Kids is a thing


> Should we just abolish the idea of a free market of ideas?

Yes, please! This has never existed in the past, and continues to not exist in the present, and seems to me logically impossible to ever exist in the future.

Just like many would not consider Fox News to be "fair and balanced", but instead consider this slogan to be a dishonest attempt to obscure Fox News' bias, the sooner we give up on the "free market of ideas"-style sloganeering the sooner we can approach problems like those discussed here with any sort of honesty.


"X has never existed perfectly/completely, therefore X should be abolished."

Do I understand you correctly?

And what are you suggesting it be replaced with?


> Do I understand you correctly?

Nope, the original question was if we should "abolish the idea", not the thing itself.

I'm saying that the idea is a myth in the same way that "unbiased news" is a myth. Both sound like a nice idea on the surface, but with both eventually it all boils down to humans making decisions that elevate one viewpoint (bias) over another, and this holds true whether we are publishing newspapers or publishing HTML documents.

Adding another step (e.g. "human writes code; code publishes HTML document") unfortunately doesn't fix this either.


It’s hard to argue that a platform that’s algorithmically selecting the next video based on ML models is anything close to a “free market of ideas”, banned content or not.


Why not? It just amplifies videos that are more likely to be viewed. If there are no explicit biases, then it's a free market.

Penalizing the unsavory opinions makes it not a free market.


It’s not a free market because the owner of the platform is sitting on the scales. It’s hard to argue that the best videos or ideas win out when a large chunk of what’s watched next is actually controlled by the platform itself.

Honestly, that’s probably fine, but it does mean that any analysis along the lines of “the free market of ideas” is of questionable value, given that YT is heavily manipulated by YT itself.


Some others: Should YouTube develop and use algorithms which lead people down the path of radicalism for the purpose of engagement? Why are people so radically and violently opposed to conflicting thoughts?


Should the village smith make pruning hooks and knives which can cut throats? Should the carpenter make direction signs which may be posted anywhere to mislead anybody?


> Whatever happened to “let people be their own judge?

This. In the end, doesn't it come right down to this? The sad thing I've noticed about these discussions when they come up is that they quickly degrade imho into arguments about the merits of points of view rather than whether or not society or corporations should be making the decision on what a person sees or doesn't see on something ostensibly called "YOU"tube. Obvious caveats about lawful/unlawful content aside.

> Why is Mozilla, of all entities, seemingly seeking to influence YouTube, and to what end?

Asking the right questions. To what end indeed


Big Tech went from being the shiny new thing to the McDonald's of content.

They serve crap in mass produced quantities because it sells.

Remember that content moderation stops scaling after a point. No matter what good engineers think they will achieve in the Trust and Safety / Risk divisions.

Reading this article it seems pretty clear that training parameters weigh engagement at a rate that makes junk content a guarantee.

The troubling part is that this IS creating another educated/uneducated divide. The kind of conspiracy theories I see gaining traction around me differ entirely based on education (and therefore income).

And if you are reading this far - almost all the data needed to have a society level discussion is under an NDA.

It gets worse the moment you leave the English language. Want to find a sentiment analyzer that works well on code-mixed subcontinental/Portuguese/Dominican hate speech? Good luck with that.


> No matter what good engineers think they will achieve in the Trust and Safety / Risk divisions.

You can have all the "Trust and Safety" effort you want, but when the core architecture of the system is built to reward the craziest stuff, you're still going to get a ton of crazy.

It's like building a paintball park inside a nuclear reactor, in the middle of a WW2 minefield. No matter how many "Safety" staff you have running around in orange vests and hard hats, people are still going to get hurt.


I enjoy watching metal machining content, and trying to find it in a language I can at least semi-follow along with is way harder than it has any right to be.

On the other hand I really, really dislike most extremely popular YT shows that are focused on personalities and/or talking heads... yet YT has no problem relentlessly suggesting those to me.


If YT ever suggests something you don't like, you have to avoid watching it for a long time to make it go away.

Unless it's television news, something seems to artificially inject that into the feed.


metal machining content and trying to find it, in a language I can at least semi-follow along with

What language are you looking for, and what channels do you like so far? I'm curious how much overlap there is with what I've been watching in this area, and can recommend some English language channels.


Most of what I see is in English but I also speak German and can understand some Russian (at least enough to be a danger to myself).

Some of what I subscribe to: Tom Lipton, Adam Booth, Chris Maj, Cutting Edge Engineering, Dragonfly Engineering, Blondihacks, CNC4XR7, Edge Precision (a favourite), JamesPark_85 Machining TV, JohnSL, Keith Rucker, Mateusz Doniec, Max Grant, The Swan Valley Machine Shop, MrPragmaticLee, myfordboy, outsidescrewball, PracticalMachinist, Rustinox, Special Instrument Designs, shaper andi, Stef van Itterzon, Steve Summers, Topper Machine, AlwaysSunnyintheShop, An Engineer's Findings, AndysMachines, Assolo, Bart Harkema Metalworks, BAXEDM, clemwyo, CompEdgeX, Cà Lem, David Wilks, FactoryDragon87, Fireball Tool, H&W Machine Repair and Rebuilding, Halligan142, Henrik Andrén, igorkawa, IronHead Machine, James T. Kilroy Jr., Jan Sverre Haugjord , Joerg Beigang, Mechanical Advantage, Piotr Fox Wysocki, Solid Rock Machine Shop Inc. (another favourite), Stefan Gotteswinter (really good), THE IRONWORKER, Владимир Алексеенко, TheMetalRaymond (now I want a horizontal boring mill).

I guess that's just over half. A lot of these channels have either stopped uploading videos or upload very irregularly, so YouTube's algorithm hides their content from me, even though I am subscribed to the channel.


I wonder how many of us there are out there. It seems like I bump into folks that are into the same thing all the time, but where are the numbers? Tom Lipton (for example) is one of the GOATs of course, but he has 140k subs? Peter at Edge Precision, consistently delivering excellent tutorials and great camera work on a huge machine...60k subs. Stef van Itterzon, building one of the most ridiculous DIY CNCs I've ever seen, less than 10k subs. It's odd.


I think part of the problem is that what makes for an honestly competent machinist or metal worker very often conflicts with what makes for the sort of person who creates videos exactly the way YouTube wants people to create videos so that YouTube can make them "become" popular.

A couple of years ago YouTube changed something, but not being a content creator I don't know exactly what they changed. What I do know is that a great many of the machinists on YouTube who were releasing regular long-format video series abruptly stopped. Most of them were people that we know by name... for example, Tom Lipton never showed the completion of the larger etching press he was building for his wife. This is the reason why I subscribe to a couple dozen channels which are presented in Russian and Spanish... suddenly all the machining content dried up (because YouTube refuses to help me find that content on their site) and I had to get my fix somehow.

Also, feel sorta obliged to note that I have unsubscribed from a number of active machining/CNC focused channels because either a) the projected affect the presenter put on really got under my skin (I have some neurological / hearing problems, so that's unfortunately an issue I run into on occasion) or b) the presenter has habitually expressed some sort of unpleasant jingoistic and/or hostile attitudes... fortunately that's not the norm but it does happen enough that it does bother and discourage me.


I watch a fair bit of this sort of thing on YouTube, but I don't use the subscribe or like functions on anything.


"This Old Tony" is english spoken with just the right humour for me. Anyone got any more channels of similar style, not neccesairly just about metal machining?


AvE's worth a look if you like This Old Tony. The drunk uncle to Tony's sober one, if you like.


Breaking Taps has some interesting stuff including machining but without the humor, Rainfall Projects is metalworking rather than machining, Wintergatan is building a crazy machine, Ron Covell shows some crazy sheet metal bending techniques in a very dry style, Machine Thinking goes into some of the history of machining, and Clickspring makes clocks and an Antikythera mechanism out of brass.

Not all of those upload regularly, and as someone who also uploads occasionally (though not machining related) I don't blame them, because good video is hard.


I can't think of any similar channels that are in English. I guess Tony is kinda unique in that way.

However, I will say that a really surprising number of the YT channels that I watch and subscribe to make it obvious that they too watch This Old Tony (by swiping his karate segue style). Including one guy who's making wooden woodworking planes by hand with limited use of electric tools.


Abom79 is one I watch regularly.


My case: my brother committed suicide early this year. I got recommended Reckful's last minutes of streaming [1], and I didn't know what was going on; only after reading the description and comments did I understand the whole thing. And guess what, I got invested and searched some more, and I found out that his brother had also committed suicide. Imagine that going through my thoughts.

I didn't take the "recommendation" very well, couldn't sleep properly after.

[1] https://www.youtube.com/watch?v=UiER4tMnbJA


That's horrendous, sorry for your loss.


> Another person watched a video about software rights, and was then recommended a video about gun rights.

Do all videos about gun rights violate YouTube policy? Maybe the content was actually problematic, but presented like this, I'm wondering what the problem was with this one.


From the report: "Mozilla has intentionally steered away from strictly defining what counts as a YouTube Regret, in order to allow people to define the full spectrum of bad experiences that they have on YouTube." So seeing a recommendation for a video that disagrees with your politics is a "regret", but it's unlikely to be harmful. Mozilla is really watering down their argument here.

Oh wait, "Our people-powered approach centres the lived experiences of people, and particularly the experiences of vulnerable and/or marginalized people and communities". It's just another front in the culture war.


I literally never get such content and I use YouTube constantly. I'm not sure how much I'd blame youtube for showing similar content to whatever people are already watching.


I think the trick is also to watch potentially extreme videos linked from someplace else in private mode so that your feed doesn't get messed up.

But yeah I'd second that. My feed is mostly video games, music & crafting which fits my interests pretty well. I don't get any political or violent content suggested at all.

I also wonder how much this gets influenced by tracking data gathered outside of YouTube.


>I also wonder how much this gets influenced by tracking data gathered outside of YouTube.

I think not at all, given what I see in incognito/on a secondary Google account, and that I don't see any of my interests recommended that I haven't watched much about on YouTube - e.g. no programming stuff, and I'm sure there's a ton of tracking data outside of YouTube suggesting I'm interested in that.


As far as I can tell, simply watching news is enough to get entangled with the loonies.


On the contrary, the loonies are the ones who refuse to watch the news because they believe it's controlled by evil Marxist puppetmasters, while believing anything Youtube, Twitter or Facebook tell them.

As bad as the news gets, it isn't going to tell you Bill Gates is putting mind-control chips into COVID vaccines, or that the Presidential election was stolen by a conspiracy of Chinese communists and Democratic party pedophile cultists, or that Jewish space lasers are responsible for forest fires. It may report on those beliefs and their effects elsewhere, but unlike the internet, won't assert them as fact, or let you form a comfortable bubble of alternate reality around yourself where those beliefs are only ever reinforced, but never questioned.


They are also the ones brigading the comment section on news segments.


Yeah, Hacker News is starting to become as allergic to nuance as it is to humor.


I prefer to use YouTube in private mode so the ML doesn't pigeonhole me. It only takes a few videos for it to "decide what you like", so you can pretty easily convince it to feed you weird stuff like this if you want.


I turned autoplay off by default on YouTube. Recently I noticed that sometimes autoplay was happening anyway. It seems that YouTube has introduced videos called "Mixes" that override my choice. Whoever came up with that is an evil genius. Now I have to be really careful to click the actual video I want to watch rather than a version of it with the same thumbnail that ignores my settings.

Fuck YouTube.


It's not harmful to watch Alex Jones videos, lizard people videos, deep state conspiracy videos. This premise should be rejected outright. It is more harmful to be watching videos about religion, astrology, etc. In some cases, these are mainstream belief systems that have caused countless death, destruction, financial ruin for thousands of years. This sudden interest in harm reduction is disingenuous and conveniently selective.

If you are not treating these videos as harmful, do not waste my time with blatant politics disguised as sanctimonious harm reduction.

As a new YouTube user why am I recommended 100% of comedians and channels who have identical politics? What are the odds?

All extreme ideology and ideologues - left, right, up down, are poisonous and divisive. They should be treated equally, not selectively.


It is not more harmful to watch videos about Astrology than videos about how the Sandy Hook Elementary School shooting was a false flag operation.


I have used financial ruin as one of the criteria for harm. There are far more examples of people who have been reduced to financial ruin because of astrology, so I am correct with regards to the quantitative comparison.


You are not correct under any lens in any context


They are though. You just don't agree with their viewpoint and seem to want this kind of censoring of content.


> It's not harmful to watch Alex Jones videos, lizard people videos, deep state conspiracy videos. This premise should be rejected outright.

That's an open question - whether conspiracy videos have a negative effect. Given the current state of research in psychology, political science and communication research, it seems plausible that they do have a negative effect, albeit a small one.

The defining feature of conspiracy theories is mistrust in state institutions and elites. Such attitudes - while legally protected - can lead to people rejecting sensible and helpful behavior, including vaccinations [1].

So given the current state of research, I do not think the premise should be rejected outright.

[1] https://journals.sagepub.com/doi/full/10.1177/19485506209346...


You are selectively highlighting part of the comment.

The full context is that it should be rejected outright IF other forms of harmful information are allowed to remain.

Any selective enforcement of this nature is insincere because it blatantly ignores more harmful examples.

If there is a comprehensive enforcement that treats all harmful content equally, it can be considered sincere, else it is simply politics. These are not minor examples, they are far more dangerous than the examples being cited in this study.


I agree with you, but I think I and most people wouldn't view astrology videos anywhere near as harmful as any of the other videos mentioned. Could you link one or two videos that you consider to be presenting information as harmful as the other things you listed?

For things like homeopathy and other medical pseudoscience, I think a lot of those things do get banned, depending on the claims they're presenting.

And for something like a video recommending you invest in the video creator's company because [astrology gibberish], I think in that case it's just a matter of magnitude. Of course such a video causes harm, but YouTube can't be expected to be able to prevent all levels of harm. Your argument is sound when you're comparing against things that many would agree are at least as harmful as the other examples you give, rather than just meeting the criteria of any level of harmfulness.


You are using the term "homeopathy", which includes osteopathy, acupuncture, chiropractic treatments, and other "holistic" areas of treatment.

Homeopathy is a very broad area, and some of it isn't just pseudoscience, although we may not yet know the mechanism of operation for some of the 'treatments' (and some are almost certainly actively harmful).

It's probably a weak analogy, but homeopathy is to medicine as supplements are to FDA-approved pharmaceuticals. There's always going to be some crazies, but there will also be some good things there, too, and by banning it, we will miss out on some really good innovation and ideas.


I've never heard it referring to all pseudo-medicines. Wikipedia seems to agree on a narrow definition about just heavily diluted shits.


Please review the sidebar on the homeopathy page for "fringe medicines", which includes the other examples I cited.

https://en.wikipedia.org/wiki/Homeopathy


The third sentence:

>Its practitioners, called homeopaths, believe that a substance that causes symptoms of a disease in healthy people can cure similar symptoms in sick people; this doctrine is called similia similibus curentur, or "like cures like".

I think homeopathy just applies to the pseudoscientific/fake "like cures like" dilution method. The broad category is just known as "alternative medicine", I believe. Some other forms of alternative medicine aren't necessarily quite as baseless.


> You are using the term "homeopathy", which includes osteopathy, acupuncture, chiropractic treatments, and other "holistic" areas of treatment.

> Homeopathy is a very broad area …

Homeopathy is a specific system of “alternative medicine” that involves treating “like with like” – albeit in dilutions that no longer contain the active ingredient. It has little in common with acupuncture or chiropractic which have very different origins and belief systems – other than they are all pseudo-scientific and are broadly categorised as complementary or alternatives to mainstream, evidence-based medicine.


The only reason we have somewhat responsible state institutions and elites is due to a long history of well deserved mistrust of said powers.


You argue that belief systems are both equal and not equal.

Pick a side.


No that is not accurate, I am arguing that if the goal is to tweak an algorithm to reduce "harmful" information then the criteria for what is considered harmful should not be narrowed down on the basis of one's personal politics.

Any sincere effort to minimize harmful information would start with astrology, superstition, religion, homeopathy and other such content - not some local USA "problems".


Then make that argument.

That our tools should empower us. That all existing recommenders disempower, or worse. That people should be in control of their own eyeballs.

In our attention economy, the choice of what to show and not show are equally important. It's trivial to crowd out opposing opinions, no censorship needed.

Who controls that algorithm? Certainly not us.

Most of the pearl clutching over censorship, "cancel culture", blah blah blah would be resolved today by restoring the fairness doctrine (equal time).

(This obvious and correct remediation will not happen so long as the outrage machine is fed by advertising.)

((Chomsky, McLuhan, Postman, many others, all made all these points, much more eloquently, repeatedly, much better than I ever could.))

> ...minimize harmful information would start with astrology...

Encourage you to omit your own opinions of any particular belief system. No one cares. It's a distraction (food fight).


It's not harmful to watch videos, if you don't do so uncritically. If you watch uncritically, almost any media is harmful, down to Teletubbies. People need to take responsibility for their minds: Boogaloo bois and Antifa didn't radicalize you, you radicalized yourself by believing what they said, either rationally or by default — but own your decision-making.


> It's not harmful to watch Alex Jones videos, lizard people videos, deep state conspiracy videos. This premise should be rejected outright. It is more harmful to ...

Sure we can go down the whataboutism-road to find examples of things that are _more_ harmful, but you do realize there are people who believe in deep state and other insane conspiracy theories?


There are concepts like equal enforcement and equal justice which by definition require what you are terming as "whataboutism" to highlight.

If there is going to be selective enforcement of content, then call it what it is - politics.

No sincere effort to minimize harm would ignore the harm caused by religion, astrology, homeopathy and other ideas like this.


I fully agree!


I think a better analogy would be Flat Earth, moon landing hoax, aliens on the dark side of the moon vs. astrology and mysticism and religion.

I doubt astrology causes much financial distress for the vast majority of believers, even if it results in some people making poor financial decisions. Same for any other kind of distress.

The problem with lizard people, Deep State, and much of Alex Jones' stuff is that the claims are about specific people and groups of people and their intentions and actions. They cause many people to genuinely, truly believe that certain people are the most abominable evil imaginable. That's inevitably going to increase the risk of direct and indirect harm a lot more than woo-woo fortune-scrying.

One could say religion has done the same, and that's not necessarily wrong, but it's misleading. You'd have to compare specific things like religious extremist videos rather than merely religious/spiritual videos. Lizard people / Deep State YouTube content is very often extreme, while astrology and religion YouTube content is very rarely extreme. Pizzagate and QAnon adherents are earnestly trying to bring about the arrest and/or execution of people they believe to be child rapists and murderers.

Even stuff like 9/11 conspiracy theory videos, if believed, lead you to the unavoidable conclusion that some, much, most, or all of the US government and/or Israel and/or Jews are diabolical mass murderers. You're not going to come away with a conclusion like that after watching a crystal healing or reiki video.

>As a new YouTube user why am I recommended 100% of comedians and channels who have identical politics? What are the odds?

Probably because something in your Chrome or Google search history, or the history or video activity of other people sharing your IP address or device, led YouTube to believe that may be something you like. Or, as this article points out, maybe you clicked one or two things that were tangentially related in some way and their data set indicated a lot of people who watched those also watched those other things. So, the odds are probably very high.


The Alex Jones crowd recently tried to stage a coup. Thousands of Asians were assaulted in the last 16 months or so, often by people indoctrinated by middle-aged men talking rather loudly directly into a camera, in their car with ugly sunglasses.


Such a coup that to date, despite the most far reaching federal investigation in U.S. history, the only sentenced conviction has been to a grandmother who spent 10 minutes sightseeing through the (otherwise publicly accessible) area of the Capitol building, and given a whopping 120 days of community service and probation. Your rhetoric is more dangerous than this.


Alex Jones was literally telling people not to enter the Capitol: https://twitter.com/CardinalConserv/status/14037682301083033...

> Thousands of Asians were assaulted in the last 16 months or so

Mostly by people who aren't Trump's demographic. I'm pretty sure you don't want to get into the statistics.


A riot is not a "coup", especially by people with no weapons. The people attacking Asians are mostly not from the right-wing crowd.


[flagged]



> The non-English speaking world is most affected, with the rate of regrettable videos being 60% higher in countries that do not have English as a primary language

My first assumption was that YouTube trains better on English-language stuff and so the recommendations are better. However, downloading the paper and scrolling to the actual data on page 22, it turns out: yes, some of the top bad countries (Brazil, Germany, France) are worse than all English-speaking countries, but also, all English speaking countries are worse than the lowest two (Denmark, Austria) and only Canada does better than Bangladesh and Czechia. So it's a bit more nuanced than "non-english is 60% worse", hidden behind an "on average".


YouTube is like Reddit or Twitter: a dumpster fire unless you carefully curate what you do on it. I remove everything from my YouTube history unless I specifically want it to be used for recommendations.

I mostly watch videos about bass playing and use YouTube to listen to music, and even with curating my history I still get the occasional insane political rant recommended out of nowhere.


> recommender AI

So, am I correct to understand that the recommender AI is just learning from youtube users' actual behavior and shows what content people who have watched a given set of videos tend to find "engaging" (watch for a significant amount of time, leave comments, like, etc)? That, in effect, the AI is just holding a mirror to youtube users, and some journalists don't like what they see in it?


There is a known problem with Youtube both excessive recommending in the same category and driving people to more extreme videos.

If you watch a surfing video, even for a few seconds, expect a lot of surfing videos to pop up in your recommendations for a long time. This has gotten slightly better over time in my experience.

The larger problem is extremism. Watch a video about a vegetarian dish, get recommended vegetarian dish and lifestyle videos, watch those, get recommended veganism videos, watch those, get recommended PETA and animal activism. It kinda works like a funnel.

Might not seem that dangerous for your favourite topic, but now replace the topic with conspiracy theories, religious extremism, radical feminism, anti-government armed resistance, self-harm and suicide, anorexia, etc.


I believe it's been attributed to the fact that more "extremist" content gets more engagement from users in the form of comments or longer watch time.


All of this reminds me of the early days of the TiVo. It had a recommendation system that it would use to fill your DVR with shows you would like. If you recorded a show in a new category (which gave the show an implicit positive rating), it would often go off the rails and find everything in that category. The first time I recorded "The Simpsons", it filled up my DVR with cartoons over the course of the next week before I noticed what it was doing.

We seem to be basically no better than that sort of logic.
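
For illustration, here's a toy sketch of that kind of category logic. The show names and category table are made up, and this is an assumed behavior rather than TiVo's actual code: one recording acts as an implicit positive rating for its category, and the recorder then grabs everything else in that category.

    # Toy sketch of naive category-based filling (assumed behavior, not TiVo's real code).
    CATALOG = {
        "The Simpsons": "cartoon",
        "Futurama": "cartoon",
        "King of the Hill": "cartoon",
        "Nova": "documentary",
    }

    def fill_dvr(recorded_shows, capacity=3):
        """Fill free DVR slots with anything that shares a category with a recorded show."""
        liked_categories = {CATALOG[s] for s in recorded_shows if s in CATALOG}
        return [s for s, cat in CATALOG.items()
                if cat in liked_categories and s not in recorded_shows][:capacity]

    # Recording a single cartoon is enough to flood the DVR with cartoons.
    print(fill_dvr(["The Simpsons"]))  # ['Futurama', 'King of the Hill']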


> There is a known problem with Youtube both excessive recommending in the same category and driving people to more extreme videos.

There is a belief that that problem exists with YouTube but whether that actually exists requires further investigation. For example [1] suggests the opposite.

[1] https://firstmonday.org/ojs/index.php/fm/article/view/10419


Not quite.

a) we don't know how the recommender works, and it changes over time, and some problematic recommendations have been found even in absence of actively seeking out problematic content, and

b) it's not journalists but researchers from mozilla and scientists from the University of Exeter.


> some problematic recommendations have been found even in absence of actively seeking out problematic content

I don't know how to put it in proper terms, but isn't it possible that the user who has watched videos A and B ("non-problematic") is classified the same as a subset of users who have watched A, B and C (C being "problematic"), and thus is shown C?

Asking because personally, I've never seen my "safe for work" youtube account ever recommend me anything "problematic". At the moment, it's just feeding me music videos of the same genre, which I am already bored by :-)
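
For what it's worth, here's a minimal sketch of the co-occurrence effect you describe (hypothetical watch histories and video names; this is not YouTube's actual recommender): if enough other users watched A, B and C together, a user who has only watched A and B gets C recommended purely from the overlap.

    # Minimal item-to-item co-occurrence sketch (illustrative only, not YouTube's system).
    from collections import Counter
    from itertools import combinations

    # Hypothetical watch histories; "C" stands in for the "problematic" video.
    histories = [
        {"A", "B", "C"},
        {"A", "B", "C"},
        {"A", "B"},
        {"B", "C"},
    ]

    # Count how often two videos appear in the same user's history.
    co_counts = Counter()
    for watched in histories:
        for x, y in combinations(sorted(watched), 2):
            co_counts[(x, y)] += 1
            co_counts[(y, x)] += 1

    def recommend(user_history, k=1):
        """Score unseen videos by how often they co-occur with the user's videos."""
        scores = Counter()
        for seen in user_history:
            for (a, b), n in co_counts.items():
                if a == seen and b not in user_history:
                    scores[b] += n
        return [video for video, _ in scores.most_common(k)]

    # A user who has only watched the "harmless" A and B still gets C recommended.
    print(recommend({"A", "B"}))  # ['C']

Real systems are obviously far more complicated (learned embeddings, engagement weighting, etc.), but the basic "people who watched X also watched Y" signal can produce exactly the pattern you describe.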


Both happen.

Some studies such as [1] measure recommendations without past browsing history, and they still find recommendations for harmful content.

Other studies (such as this from Berkman Klein Center [2]) simulate user behavior and subsequent recommendations and find additional harmful effects of personalized recommendations.

[1] Faddoul, M., Chaslot, G., & Farid, H. (2020). A Longitudinal Analysis of YouTube’s Promotion of Conspiracy Videos. ArXiv:2003.03318 [Cs]. http://arxiv.org/abs/2003.03318

[2] https://cyber.harvard.edu/story/2019-06/youtubes-digital-pla...


Think more like junk food when it was first invented.

A culture/society with no understanding of McDonald's will immediately see McDonald's proliferate.

The food is cheap, fast and tasty. It hits all the buttons.

Same thing here. Except this is the phase before health and safety regulations, and poisonous/radical content is distributed alongside clickbait.


> Mozilla conducted this research using RegretsReporter, an open-source browser extension that converted thousands of YouTube users into YouTube watchdogs.

> Research volunteers encountered a range of regrettable videos, reporting everything from COVID fear-mongering to political misinformation.

Like videos discussing the lab leak theory?


The lab leak may have been in there (you can probably check in the appendix to the report).

But to be honest, it's pretty irrelevant. Content analysis is usually performed using the current state of knowledge. It is to be expected that known facts and what is perceived as truth changes after the fact. This does not invalidate the analysis itself.

Here's the explanation from the report, which I find pretty reasonable:

> Over the course of three months, the working group developed a conceptual framework for classifying some of the videos, based on YouTube’s Community Guidelines. The working group decided to use YouTube’s Community Guidelines to guide the qualitative analysis because it provides a useful taxonomy of problematic video content and also represents a commitment from YouTube as to what sort of content should be included on their platform.


I keep telling YouTube that I'm "Not interested" in COVID-19 videos, but every other day they shove it in my face anyway.


Isn't there a problem with the sampling here? What if people who are likely to install RegretsReporter are also people who are more likely to view tangentially related problematic content in the first place, and are then shown more of it? Also, the dividing line on what's considered problematic is difficult, as we saw recently with Right Wing Watch.

And the proposed regulation seems even more problematic. If the AI model were public, then it would be far easier for people to create adversarial content to game it. This is a problem with any models built on public data, including OpenAI's GPT stuff, or GitHub's Copilot. Detailed knowledge of how it works allows more efficient adversarial attacks, and the more these services become important public infrastructure, the more valuable such attacks will be.

Imagine a co-pilot attack that inserts hard-to-discover Heartbleed-esque bugs into code for example.

It seems way way too early to be discussing regulation of algorithms of a field so young and changing so rapidly. 5 years from now, the network architectures may be unrecognizably different in both form and function.

It might be better to have some industry-wide group like the W3C or IETF set these standards, and have tech companies publish reports and audits for compliance, but done in a way that prevents attacks.


> Isn't there a problem with the sampling here? What if people who are likely to install RegretsReporter are also people who are more likely to view tangentially related problematic content in the first place, and are then shown more of it? Also, the dividing line on what's considered problematic is difficult, as we saw recently with Right Wing Watch.

As I commented above, sampling is a problem, but the sample was only used to gather candidate videos which were then manually analyzed following Youtube's own content guidelines.

So the takeaway is: According to the study's authors, Youtube recommends videos despite them violating their own guidelines.


Does Right Wing Watch violate the guidelines? It’s difficult enough to make AI that could enforce these guidelines, but even with a staff of tens of thousands of human evaluators I doubt you could avoid even 10% false positives given how much people argue over classification already.

This kind of filtering, some say censoring, is super problematic both to pull off and to please all stakeholders, which is why a one-size-fits-all regulation is likely to fail and create a new class of problems.


> Like videos discussing the lab leak theory?

Which lab leak theory?

There are widely varying claims on this topic, ranging from narrow and nuanced scientific discussions to all-out nonsensical conjecture. Labelling all of these claims with a broad brush and equating them is simply a straw man.


And yet, the “study” itself appears to imply that a video is extremist just because it has a title asserting that there is a left-leaning bias in some of the media.

Seems false equivalencies, lack of nuances, and straw man arguments only matter some of the time.


I wasn't defending the study. I doubt crowdsourced measurements of "regret" are good at measuring the accuracy of a video's content.


Fair enough, and your point is perfectly valid taken on its own.


When Donald Trump talks about the lab leak it is clearly a racist conspiracy and a harmful video; otherwise it's okay.

https://comicallyincorrect.com/wp-content/uploads/2021/05/04...


By the way, regarding these algorithm studies:

It's important to do them correctly to prevent bad studies, not (just) to get valid information.

Or, as Andrew Gelman [1] said:

> The utilitarian motivation for the social sciences is they can protect us from bad social-science reasoning.

[1] https://statmodeling.stat.columbia.edu/2021/03/12/the-social...


Why wouldn't it? Until YT has a perfect algo to detect violating videos, this will obviously happen.

I keep seeing pieces like this and I feel like I'm taking crazy pills. I'm no expert at tech or social media. But this is a really obvious fact isn't it? So obvious that doing a "study" to "discover" it seems actively dishonest.

What am I missing here?


They will never make the algo perfect... that way they always have an excuse to remove whatever they don't like.


So this is where all the Mozilla donations go: not to its floundering web browser or development of other open web technology, but to 10-month long YouTube “investigations” of the exact same sort a dozen other media outlets are doing.

Waste of resources aside, this is missing the forest for the trees. They claim that YouTube’s algorithm is the problem, but in regards to the goal of an open and user-empowering web, YouTube is the problem. The implicit argument here, again sad to see coming from Mozilla, is that one siloed website completely dominating web video through predatory pricing and network effects would be totally OK if a bunch of people they liked were put in charge of its recommender systems.


That's true, but the donations are so small (proportionately) that they wouldn't matter to Firefox. Firefox needs better strategy, not more money spent on the current failing strategy.


This. Mozilla has the power to make a BitTorrent client instantly available to tons of web users, which would make self-hosting video trivial and make YouTube's recommendation algorithm mostly irrelevant.


I found my YT sticks to mostly what I like, but if I accidentally select a video about some old vintage weird gun from some guy that occasionally appears in my feed, I guess through some other inference, then my channel is suddenly all guns, nothing but guns, guns, guns.


I don't get that, and I subscribe to Forgotten Weapons and a fair number of military-themed channels (The Tank Museum, etc.).


How do you then un-gun your feed? Select a recommendation from a different topic? Or, how long does it take for the gun recs to go away if you don't pick any? It's interesting that there'd be a burst of recommendations around a new topic.


Going into your history and removing the video that caused it usually fixes it for me.


I think last time I just avoided all the gun ones and they eventually went away. I've since clicked on another annoying video topic, and a faster way to get rid of an annoying category is to actively go and dislike the videos.


I agree it sucks, but so do Netflix, Amazon, Twitter, Facebook, etc. It's almost like social media is just a marketing tool and your promotions aren't even recommendations at all, but paid placements.


They all do it: influencers, big advertisers, and cable news networks on social media sites like Facebook, Twitter, YouTube, Instagram, Snap, TikTok, etc. all game the recommendation algorithms of each of these platforms.

As long as the advertisers are happy paying them, they won't change. Not only do these algorithms govern what is seen and unseen on those platforms, they are also a privacy issue, since they operate by tracking the user's usage information.

The small users and 'content creators' lose either way. They work for the recommendation algorithm, which can change at any time, and they end up earning less. Or they get removed from the platform / programme because of their low number of views and that's that, whilst the big players game the system.


You can't compare Netflix, though. With the small amount of content they have, it's enough to browse the new section maybe once a week to not miss anything. Facebook and Twitter are not there to recommend content but to follow content you already care about.

We have 'illegal' streaming sites with better discovery than YouTube has.


I stream music from YouTube because of its superior recommendation algorithm (in comparison to, e.g., Spotify). The recommendations that I didn't know prior to them being suggested to me are spot-on.

This must be entirely based on my views and non-specific data from my Google account, as I do not engage (like, dislike, comment).


This might be the best critique of big tech, if these results were to be framed this way:

- Algorithms are designed to give people more of what they want

- YouTube's policies are designed to protect people from bad things

- People want bad things, therefore YouTube's technology and policies will always be in conflict


Youtube plays at social engineering. Censorship and propaganda, that is. Sells it as a service.

Their policies exist to preemptively justify their shenanigans. Youtube "did it for our own protection" etc.

A blind man can see this.


Why is the first question to this stuff not "maybe their policies are shit?"


Sounds like right wing content is being equated with extremism.


“Art Garfunkel music video, and was then recommended a highly-sensationalised political video titled “Trump Debate Moderator EXPOSED as having Deep Democrat Ties, Media Bias Reaches BREAKING Point.”

It was much harder to figure out the connection on that one. But seriously, most of this is just demographics. It’s not like there’s a conspiracy to control information or anything crazy like that. I remember the panic about the creepy kids cartoons. It’s just algorithms doing their thing. Get used to it.


> It’s just algorithms doing their thing. Get used to it.

Given how much influence some of these platforms have on people, how about the companies running them fix their stuff? We could also introduce legislation to make them fix it. Not sure why we have to get used to it.


That influence rhetoric is so pervasive despite the transparent dishonesty that is an elephant in the room. So when are we going to shut down the BBC and stop best-selling authors from writing because they have "too much" influence? Sorry Steven Spielberg - you had too much influence on film already, so it is retirement for you! That would be considered unthinkable and horribly dystopian.

Yet it is okay if standard special pleadings #1 and #7 are applied (in this case "But it is people I don't like so it is okay!" and "It is new and scary so it is okay!").


Honestly I wonder how much of it is just peoples own personalized recommendations and how much of it is general recommendations.

I have a hard rule not to use YouTube for any sort of news or politics, and the algorithm accommodates that very well. I’m sure if I created a new account, it would start off recommending me whatever BS is popular across YouTube at the time of creation, but if I used it exactly like I use my current account, I think the algorithm would accommodate that. I actually like YouTube, which seems to be an unpopular opinion in some places.

The only thing I can’t seem to get rid of, permanently, are the stupid banners of videos that YouTube adds in about COVID, or at the time of the election, that crap, and some other nonsense. I mean if I keep clicking the X to get that out of my face, isn’t that a strong signal I never want to see that crap or any of those topics again?


Mozilla should stick to making good open software

Supporting censorship, which is what this is doing, is not a way to support the free and open web.



