Hacker News

Semi-related: there's a browser plugin[1] available to "de-mainstream" YouTube. It removes the well-known channels of big-name media brands[2] from recommendations. You can add more channels to the list.

--

[1] https://demainstream.com/

[2] https://github.com/miscavage/De-Mainstream-YouTube-Extension...




This works pretty well (I use it) because it removes recommendations and search results (which are really recommendations) that are artificially inserted/ranked at the top every single time.
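For context, the core of an extension like this is just a blocklist check: compare each recommended video's channel name against a list of mainstream channels and drop the matches. A minimal sketch of that filtering logic (the blocklist entries and function names here are illustrative, not the extension's actual code):

```javascript
// Hypothetical blocklist; the real extension ships its own list of
// big-name media channels and lets users extend it.
const BLOCKLIST = new Set(["cnn", "fox news", "bbc news"]);

// Decide whether a recommended video should be hidden, based on a
// case-insensitive match of its channel name against the blocklist.
function isMainstream(channelName) {
  return BLOCKLIST.has(channelName.trim().toLowerCase());
}

// Filter a list of recommendation objects down to non-mainstream ones.
function filterRecommendations(videos) {
  return videos.filter((v) => !isMainstream(v.channel));
}
```

In the browser, the same check would run in a content script that hides the matching DOM elements on the recommendations and search-results pages, which is why it also catches the "search results that are really recommendations" mentioned above.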

That said, I don't think it's what Mozilla and many commenters here are looking for; they're looking to ban content they disagree with (i.e. censorship).


> they're looking for banning content they disagree with (i.e. censorship).

That's an extremely unfair read of the situation. The extension doesn't even ban anything. It is a research project attempting to understand why the YouTube algorithm recommends the videos it does. Whether or not you think Plandemic (a nonsense conspiracy theory video) should be banned, you can still see research into the YouTube algorithm as a worthwhile endeavour.


Literally the first sentence of the article is about dangerous recommendations. And the second is about harmful content.

This is not just an open-ended inquiry into what YouTube is recommending. It's about finding the bad and getting rid of it.

EDIT: I also want to remind people about how Mozilla used its push notifications to call for a Facebook boycott over objectionable content.


> YouTube recommendations can be delightful, but they can also be dangerous. The platform has a history of recommending harmful content — from pandemic conspiracies to political disinformation — to its users, even if they’ve previously viewed harmless content.

What part of this is untrue?

And again, you're saying "getting rid of it" without evidence. It's a study; it does not mention banning anything, anywhere. If the argument is that they should not study this because it might lead to bans in the future, then you're being pro-censorship in order to promote anti-censorship, which doesn't make a whole lot of sense.


Having tried to get rid of things in the past is evidence. The very loaded choice of words they use is another.

I'm not saying they should not study this thing. I am just discussing their aims, because I think it's interesting, and it may inform others' choice of whether to participate or not.


> What part of this is untrue?

The characterization of pandemic conspiracies and political disinformation as harmful.

It's like kids and allergies. You never hear about kids who grew up on farms being allergic to animals - it's always those whose parents didn't have any when they were growing up.

If people aren't subject to misinformation, they'll never develop the sense of who's lying and who isn't.

It used to be that we gave common-sense advice - "don't believe everything you read on the Internet". Now, it's the other way around - "we must cleanse the Internet of harmful content".

Being exposed to misinformation is good for you, and it's good for democracy.


"I never heard of it happening, so it is universally true that it never happens."

Meanwhile, the actual state of affairs is[0] that there's one allergic farmer child per three allergic non-farmer children. Don't believe everything you read online.

[0]: https://pubmed.ncbi.nlm.nih.gov/11048766/



