
"out of filter bubbles" == "into other filter bubbles". This isn't empowerment, it's hijacking.

Even in 2022, the world wide web is still a hell of a lot more diverse than one single organization. Especially Mozilla.

Should you trust any one website? No. But which websites you frequent is your choice, and it's one of the few dimensions of freedom remaining to the internet. The only job of the browser is to facilitate this freedom, and any divergence from that goal will only diminish what free choice remains.




> "out of filter bubbles" == "into other filter bubbles". This isn't empowerment, it's hijacking.

It's hardly hijacking when they placed you in those bubbles in the first place. I'm quite happy to break algorithmic filter bubbles given the rise in both extremism and lunatic conspiracy theories we've seen in recent years.

> Should you trust any one website? No. But which websites you frequent is your choice, and it's one of the few dimensions of freedom remaining to the internet. The only job of the browser is to facilitate this freedom, and any divergence from that goal will only diminish what free choice remains.

It's a fantastic thing that Mozilla isn't suggesting that any of these changes be made to your browser then, isn't it?


> It's a fantastic thing that Mozilla isn't suggesting that any of these changes be made to your browser then, isn't it?

When a browser vendor advocates for "amplifying factual voices", in my mind's eye I see a browser tab filled entirely with a well-styled notification: "This site may contain misleading or harmful information. It has been blocked for your safety."

But I'm sure there'll be an about:config setting to bypass it and it only gets reset every two or three updates.

Mozilla should stay in their lane. No, that's not accurate: Mozilla should get back in their lane. Maybe then they can finally find the time to finish implementing all the APIs that they removed without replacement in the Quantum rewrite.


> When a browser vendor advocates for "amplifying factual voices", in my mind's eye I see a browser tab filled entirely with a well-styled notification: "This site may contain misleading or harmful information. It has been blocked for your safety."

The browser vendor is explicitly suggesting that platforms like Facebook "amplify factual voices". That quote is literally a link to a discussion of Facebook's work in that area. There's no suggestion whatsoever in the article that they're planning to have the browser do this.

What you're picturing in your mind's eye could be concerning if it were at all close to the point of the article or being suggested as a solution. There's no indication that there's even a plan to do so.


Hey, maybe I'm worrying over nothing.


> It's hardly hijacking when they placed you in those bubbles in the first place.

Don't people create their own feeds? I hardly ever go on these sites, but I just checked my Facebook, and it's still people I knew in high school and college posting pictures of their travels, kids, etc., plus some lawn and garden groups my wife subscribed to. Don't you have to follow people on e.g. Twitter for them to appear in your feed?


There's no definitive answer but I'd argue not really.

When you first land on a platform, the platform decides what kind of content you're exposed to. For the majority of users, any curation they carry out will be based on what the platform has already recommended to them.

I think the crucial part is that these recommendation engines leave almost no room for rebuttal, or for content that leads you to examine your views. They're built to recommend only content that reinforces whatever beliefs you currently hold.
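
To make that dynamic concrete, here's a toy sketch of an engagement-driven feedback loop in Python (made up for illustration; the items, numbers, and update rule are all assumptions, not any platform's actual system):

    # Items and the user profile are vectors over topics. The system always
    # serves whatever best matches the profile, and every "watch" pulls the
    # profile further toward that item's topics.
    import numpy as np

    rng = np.random.default_rng(0)
    TOPICS = 5
    items = rng.dirichlet(np.full(TOPICS, 0.3), size=200)  # mostly topic-pure items
    profile = np.full(TOPICS, 1.0 / TOPICS)                # new user: no history

    watched = set()
    for _ in range(50):
        scores = items @ profile                     # rank by match to profile
        scores[list(watched)] = -np.inf              # don't re-serve old items
        pick = int(np.argmax(scores))                # serve the best match
        watched.add(pick)
        profile = 0.9 * profile + 0.1 * items[pick]  # engagement shifts profile

    print(profile.round(2))  # most of the weight ends up on one topic

The first nudge is essentially arbitrary, but once the loop starts, nothing in it ever pulls the profile back out.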

A personal example: I was fairly interested in the atheism movement of the early 2010s. That movement largely collapsed and the zeitgeist moved on to "anti-SJW" content. During that period my YouTube feed became almost exclusively "anti-SJW" videos, despite my never having subscribed to any channels like that. I don't recall ever seeing a video from the other perspective in that period.

Around 2018 I had largely moved on to more left-leaning content, but I'd heard people mentioning Ben Shapiro. I watched one or two videos to see what he was about, and my recommendations became nothing but Shapiro and other right-wing content for weeks.

I noticed the same thing with conspiracy theory videos. You watch one out of vague interest and suddenly they become the only thing recommended to you.

None of these recommendations lined up with the content I typically subscribed to or watched regularly.

I logged into Facebook for the first time in years recently and it was full of quasi-sexual videos. I hadn't interacted with those at all. They mostly seem to be gone now but there was no reason for them to have been present in the first place.


It's interesting to me that people seem generally comfortable with the mechanisms browsers use to determine if a site is unsafe in the sense that it has a track record of carrying malware ("Google has detected harmful content") but get really upset about the possibility that a similar system could be used to protect users from having their minds hacked by info-charlatans.

I guess people know what malware looks like, but the jury's still out on whether someone who's been radicalized into believing a demonstrable untruth about the world around them has even been attacked. For some reason we see as different grandma crying because a hacker stole her passwords and drained her bank account vs. grandma crying because she's in jail for having broken into a pizzeria after someone online convinced her vampires were using its basement to drain adrenochrome out of children for an immortality serum.

(My opinion has swung on this topic as I've gotten older and become responsible for elderly people's welfare in my life. The tools used to attack their ability to tell truth from falsehood online are refined, pervasive, effective, and insidious. I have to keep purging news aggregators from their smartphones because they click on an ad that brings them an ad that brings them an ad that brings them an ad, and the next thing I know their phone is locked up because three apps are bumping twenty notifications an hour about what the 'woke agenda' will do to the bathrooms in their homes).


> It's interesting to me that people seem generally comfortable with the mechanisms browsers use to determine if a site is unsafe in the sense that it has a track record of carrying malware ("Google has detected harmful content")

I am definitely not comfortable with that, considering that I know it contains false positives that Google doesn't care to do anything about. Most people don't even know Google Safe Browsing exists, or realize how much control it gives Google.
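
For anyone unfamiliar with the mechanism: roughly, the browser keeps a local list of hash prefixes of known-bad URLs and checks each page against it. A simplified Python sketch of the idea (not Google's actual protocol; the blocklist entry is made up):

    # Hash-prefix lookup in the spirit of Safe Browsing. A real client syncs
    # prefixes from the vendor and, on a prefix hit, asks the vendor for the
    # full hashes; the vendor stays the sole arbiter of "harmful", which is
    # where both the control and the unfixed false positives live.
    import hashlib

    # Hypothetical local blocklist: 4-byte SHA-256 prefixes of bad URLs.
    LOCAL_PREFIXES = {hashlib.sha256(b"malware.example/").digest()[:4]}

    def is_flagged(url: str) -> bool:
        prefix = hashlib.sha256(url.encode()).digest()[:4]
        # Real clients confirm full hashes with the server at this point.
        return prefix in LOCAL_PREFIXES

    print(is_flagged("malware.example/"))  # True
    print(is_flagged("example.com/"))      # False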

BTW, VirusTotal is also run by Google, something they are not very open about.

> I guess people know what malware looks like

I have yet to find anyone that does. Certainly not the anti-virus industry.


> It's interesting to me that people seem generally comfortable with the mechanisms browsers use to determine if a site is unsafe in the sense that it has a track record of carrying malware ("Google has detected harmful content") but get really upset about the possibility that a similar system could be used to protect users from having their minds hacked by info-charlatans.

It's so much easier to tell whether a website is distributing malware than it is to tell when an "info-charlatan" is "hijacking someone's mind" that those things are in entirely different difficulty classes.

It's completely impossible to even get a small group of people in the same political party to agree on a definition for "misinformation", let alone a large group, let alone a group spanning multiple political parties, let alone be able to consistently tag content in the same way, let alone do that at scale.

We already have people who disagree about the nature of reality, or claim that truth and morals are subjective.

You might as well compare adding 3-digit numbers together with being a judge in a court of law. One of those things is mostly mechanical, although nontrivial to do at scale; the other is extremely messy and involves a massive amount of human judgment. It's entirely reasonable to have significantly different views about them (and different standards for who you let perform them).
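
To put a number on "consistently tag content in the same way": inter-annotator agreement, e.g. Cohen's kappa, is the standard measure. A toy Python calculation (the labels are invented for illustration):

    # Cohen's kappa: agreement between two raters, corrected for chance.
    from collections import Counter

    a = ["misinfo", "ok", "ok", "misinfo", "ok", "ok", "misinfo", "ok"]
    b = ["ok", "ok", "misinfo", "misinfo", "ok", "misinfo", "ok", "ok"]

    p_o = sum(x == y for x, y in zip(a, b)) / len(a)    # observed agreement
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[k] * cb[k] for k in ca) / len(a) ** 2  # chance agreement
    kappa = (p_o - p_e) / (1 - p_e)
    print(f"{kappa:.2f}")  # about -0.07: no better than chance

These two raters agree half the time, which sounds decent until you correct for chance and find they're effectively labeling at random.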




