
Unfortunately, this sort of glibness misses the fact that Meta’s algorithms push topic engagement and, in doing so, amplify high-emotion content.

A passive user of the general internet is not as likely to encounter the same concentration of singular topics as they would on Facebook. Your comment would largely apply only to active seekers of said content.




I don't think it's that simple. "The algorithm" historically favoured engaging posts, which basically means "what people care about".

This same algorithm pushes all the good things that someone like Wirathu was doing (for the Buddhist population) as well as the horrific things (promoting genocide).

So let's talk about the obvious option -- disabling "the algorithm". With no algorithm the problem might be even worse! If someone like Wirathu runs 20 accounts that each post 10+ times per day, his genocidal speech might be even more over-represented to the general populace. A purely chronological timeline rewards spammers, i.e. anyone with the resources to simply scale up output of more of the same.
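To make that concrete, here's a toy simulation (every number invented) of a purely chronological feed where one actor running 20 accounts that post 10 times a day competes with a hundred ordinary users posting once each:

    import random

    # 100 ordinary users with ~1 post each, vs. one actor running
    # 20 accounts that post 10 times a day (timestamps are random).
    posts = [("ordinary_user_%d" % i, random.random()) for i in range(100)]
    posts += [("spam_network", random.random()) for _ in range(20 * 10)]

    # Reverse-chronological order: sort by timestamp only, no quality signal.
    feed = sorted(posts, key=lambda p: p[1], reverse=True)

    top = feed[:30]
    share = sum(1 for author, _ in top if author == "spam_network") / len(top)
    print("spam share of the first screen: %.0f%%" % (share * 100))  # ~67% on average

Two thirds of the first screen ends up being the spammer, purely because raw volume is the only thing chronological ordering can reward.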

Well ok, why not improve the algorithm then? It's easy to say "just promote the good stuff", but it's very difficult to do in practice at scale. Let's examine why.

Since it's all happening in Burmese, a low-resource language, any ML models for sentiment, radicalization, or hate speech will lag far behind English in effectiveness. A poorly trained auto-moderator is often worse than none at all, because its decisions are effectively random.

It is also a cat-and-mouse game -- simple models are stymied by techniques that are basic for humans: misspellings, swapping letters for symbols, inventing new slang, dogwhistles, appropriating benign words, and so on.
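A toy sketch of that arms race (the word list and symbol mappings are invented stand-ins): each normalization step the defender adds only cancels one round of obfuscation, and the next trick already works:

    import re

    LEET = str.maketrans({"0": "o", "1": "i", "3": "e", "4": "a", "@": "a", "$": "s"})
    BLOCKLIST = {"attack"}  # stand-in for a real hate-speech lexicon

    def naive_flag(text):
        return any(w in BLOCKLIST for w in text.lower().split())

    def normalized_flag(text):
        text = text.lower().translate(LEET)          # undo symbol swaps
        text = re.sub(r"(.)\1{2,}", r"\1", text)     # "attaaack" -> "attack"
        return any(w in BLOCKLIST for w in text.split())

    print(naive_flag("att4ck at dawn"))       # False -- trivially evaded
    print(normalized_flag("att4ck at dawn"))  # True  -- caught after normalization
    print(normalized_flag("a t t a c k"))     # False -- the next evasion already works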

Ok, so why not scale up moderation? Well, they would need to find enough Burmese speakers willing to work in person in their offices. (In person because of data protection requirements and resources like mental-health support.)

Burmese is rarely spoken outside Myanmar, so finding any candidates at all is difficult, let alone people who are willing to wade through the horrors of humanity day after day for a paycheck.


Why not stop providing services if you can't scale the safety mechanisms that go with them? If it's hard to scale moderation in Burma, don't strike deals with Burmese mobile operators to put your service on everyone's phone for free.


I'm not sure, but let me try to provide some reasons.

Firstly, Facebook (and the Internet in general) is used for hugely positive things in most people's lives. Stopping service would have a negative impact on most people. That might even include the Rohingya, who could use social media to organize for their own safety.

Secondly, it was not known beforehand that there was going to be a genocide, so the services were there at the same baseline as other countries and languages.

Once it's there and the people love it, I can imagine that taking it away would only push everyone to another platform. That might solve the problem for Facebook, but it wouldn't solve the problem for Myanmar. The article even mentions an alternate popular news site in Myanmar that also incites violence.

As for why provide services for free: are you suggesting that the people of Myanmar would be better off without free access to a subset of the Internet? I think their actions, and those of the developing world in general, suggest otherwise.

If I remember correctly, Internet.org doesn't just provide Facebook access; it provides a handful of sites that include Facebook but also Wikipedia and others.


>Secondly, it was not known beforehand that there was going to be a genocide, so the services were there at the same baseline as other countries and languages.

Except that Facebook employees were told directly that the country was on a path to genocide and Facebook was helping to fuel it. They had the warnings.

Between "do nothing" and "withdraw the service" there is a third option where Facebook does a good job moderating the platform so good effects are promoted and negative ones are limited. As the monopolistic operator subsidizing internet access for a majority of the population, they have a greater responsibility to avoid doing harm, and yet failed despite repeated warnings.


Who’s in charge of pens to make sure nothing harmful is written?


I absolutely loathe these free-speech strawman arguments.

Nobody is saying we should prevent people from expressing thought. However, Meta does have a duty as a platform when it comes to amplifying thought.

Pens do not amplify the reach of a post.

These are not one and the same, and I think it’s disingenuous to try to equate them.


Charlotte Corday? Idk? What are you even talking about?


If they can’t responsibly provide a facility then they shouldn’t provide it.

I don’t even mean just accidental issues. In this case they were warned over many years and were negligent.

They do not care to provide a responsible service, because a responsible service reduces engagement.

Their older algorithm was much safer at scale, even in the absence of moderation. It used to favor recent shares from the friends you interacted with most.

While that doesn’t prevent groupthink, it does reduce amplification of emotionally charged posts. Today, a post that gets a lot of engagement, even from across a wider social network, ranks higher in your feed. That means that if I have some crazy acquaintance I never really talk to, the second they start posting anything high-engagement, it comes straight to me. Previously the algorithm would understand that I don’t care about them.
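Here’s a toy contrast between the two regimes (affinity weights and engagement counts made up, and the scoring functions are hypothetical stand-ins, not Facebook’s actual ones):

    # My historical interaction with each author, on a 0-1 scale (invented).
    my_affinity = {"close_friend": 0.9, "crazy_acquaintance": 0.01}

    posts = [
        {"author": "close_friend",       "global_engagement": 12},
        {"author": "crazy_acquaintance", "global_engagement": 5000},  # viral rage-bait
    ]

    # Old regime: rank by how much *I* interact with the author.
    print(max(posts, key=lambda p: my_affinity[p["author"]])["author"])
    # -> close_friend

    # New regime: rank by how much *everyone* reacts to the post.
    print(max(posts, key=lambda p: p["global_engagement"])["author"])
    # -> crazy_acquaintance

Under the old scoring the acquaintance falls off my feed no matter how viral they go; under the new one, virality alone is enough to reach me.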

Meta chased the tail of engagement and doesn’t care what the second-order effects are.


I don't think it's correct to assert that a different algorithm would be less harmful, and it could even be the opposite.

In the article, it says that Wirathu used multiple personal accounts to "friend" at least thousands of people. I don't know to what extent his reach would be diminished by favoring "friends" -- and it might even increase his reach.


That’s why I specified their older algorithm, which prioritized the people you actively engaged with.

Even if Wirathu sent me a friend request that was accepted, unless I actively engaged with him, his content would fall off my feed.

So it wouldn’t eliminate his reach but it would have reduced amplification. Today, because he’s popular, Facebook would prioritize him on my feed.

In essence, Facebook went from saying I should care about my close friends at school to caring about the most popular cliques. (That’s an analogy; I’m not in school, of course.)

I do agree with you that it’s not as trivial as any of us are making it sound here, but it’s also clear, imho, that Facebook’s current algorithms create harmful incentives in the face of inadequate moderation.


What's the alternative, no curation? No thank you.


Why is your assumption that the alternative is no curation? Why can’t it be that their curation is bad or optimizes for the wrong metrics?

The problem with Meta’s algorithms is that they over-index on single topics with high emotional charge. That means you end up in unhealthy echo chambers. And unlike on other social media (Twitter, TikTok, etc.), this localizes to your immediate social circle.

The end result is that Meta has time and time again been shown to amplify the most charged positions.

Their curation algorithms are bad. They don’t account for second-order effects, only engagement.
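As a toy illustration of what “accounting for second-order effects” could even mean (all weights invented, nothing like Meta’s actual ranking): charge a cost for over-concentrating a feed on one hot topic, then greedily re-rank:

    from collections import Counter

    def rank(posts, diversity_penalty=0.0):
        feed, topic_counts = [], Counter()
        for _ in range(len(posts)):
            # Score = raw engagement minus a penalty that grows each time
            # the topic has already appeared in the feed.
            best = max((p for p in posts if p not in feed),
                       key=lambda p: p["engagement"]
                                     - diversity_penalty * topic_counts[p["topic"]])
            feed.append(best)
            topic_counts[best["topic"]] += 1
        return [p["topic"] for p in feed]

    posts = [{"topic": "outrage", "engagement": 100 - i} for i in range(5)]
    posts += [{"topic": "neutral", "engagement": 60},
              {"topic": "local_news", "engagement": 55}]

    print(rank(posts))                        # 'outrage' five times before anything else
    print(rank(posts, diversity_penalty=30))  # other topics break up the outrage run

Pure engagement ranking serves nothing but the hottest topic; even a crude concentration penalty mixes the feed.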



