It's always worth thinking a little more about these things than "Facebook is blocking EFF privacy tips."
Maybe:
* This account already scored poorly on spamminess and attempting to post a bare link (with no content) on their page pushed them over the edge?
* The EFF Privacy Tips link was somehow used in a parallel spam campaign, or some characteristic of the page itself causes it to be flagged as spammy?
* Something about this specific browser session was flagged, preventing posting? (this might go along with the No Violations observation about the page).
I find it much more interesting to try to reverse engineer these strange behaviors of some spam detection systems than to just chalk them up to a "Facebook hates the EFF" conspiracy theory - if this happened to me, I'd be doing some more tests to try to see what's up.
Yeah, this is far too little evidence to simply conclude that Facebook hates the EFF. The decision to block someone probably involves many variables: previous posts, IP addresses, browsers, ...?
Some part of me wonders how this is even news. I guess "some person was blocked temporarily and that's all we really know" is not going to make it to the top of HN.
FB will remain opaque as long as FB benefits from opacity. FB has won a strategic victory: now even HN uses that opacity to the benefit of any company, no matter how questionable. No one is saying that the armor of opacity must be broken.
All people here are saying is that it's worth taking a breath before jumping on the rage train. HN posts like this usually end up going full rage mode for a few hours before someone finally digs up the rest of the story, so I'm relieved to find that this one got nipped in the bud.
This would be my first thought... This doesn't seem to be about the link itself, but maybe something else - perhaps other logins with that account, since it seems to be a non-person account. Maybe they should have a Page for their business instead of a named account? Definitely needs more info.
Maybe one of those reasons is right, but personally, Meta lost benefit of the doubt long ago. Just like I wouldn't give the benefit of the doubt to someone walking towards me brandishing a knife, just because I don't know for certain they're going to attempt robbery/murder/etc.
If you had a leg to stand on you wouldn’t need to use such an absurd comparison. It isn’t that at all. It’s exactly what it is, and nothing other than that.
Yeah. Which seems pretty likely, as the sort of person that's looking to rage against this particular machine probably has a largely dormant Facebook account with fake details and a Protonmail email address, accessed via a VPN.
Having worked at Facebook, albeit a long time ago, I'd say it's far more likely that a confluence of events caused a false positive in an automated spam system than that Facebook gives a crap about privacy campaigns that hurt revenue.
But alas, we humans love our conspiracy theories because they tell a more interesting story.
But in the end the outcome is the same. Malicious or not, Facebook blocks what it should not block, and that in itself is the problem. It being a false positive is not an excuse. If they don't fix this, it just shows that Facebook doesn't care about false positives either.
If Facebook could have a false positive rate of 0% and a false negative rate of 0%, they would absolutely make that happen. Unfortunately, due to the way statistics work, Facebook can pick its false positive rate or its false negative rate, but it's impossible to get to 0% false positives without just giving up on moderation altogether.
We're not talking about capital punishment here, we're talking about social media, and Facebook appears to have made the very reasonable decision that it's worth accidentally rate limiting some innocent accounts in order to keep spam lower than would otherwise be possible.
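To make that tradeoff concrete, here's a minimal sketch with made-up score distributions (nothing like Facebook's actual system): sweeping a toy classifier's decision threshold shows that driving false positives toward zero drives false negatives up, and vice versa.

```python
import random

random.seed(0)

# Toy score distributions, purely illustrative: higher score = more
# spam-like. Real moderation models are vastly more complicated.
ham = [random.gauss(0.3, 0.15) for _ in range(100_000)]   # innocent posts
spam = [random.gauss(0.7, 0.15) for _ in range(100_000)]  # actual spam

for threshold in (0.3, 0.5, 0.7, 0.9):
    fp = sum(s >= threshold for s in ham) / len(ham)    # innocents flagged
    fn = sum(s < threshold for s in spam) / len(spam)   # spam let through
    print(f"threshold={threshold:.1f}  false_pos={fp:.4f}  false_neg={fn:.4f}")
```

At threshold 0.9 almost no innocent posts get flagged, but the classifier also misses most of the spam; no threshold zeroes out both error rates at once.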
> We're not talking about capital punishment here, we're talking about social media, and Facebook appears to have made the very reasonable decision that it's worth accidentally rate limiting some innocent accounts in order to keep spam lower than would otherwise be possible
That sounds reasonable.
Well, reasonable unless you happen to be (or care about) one of the innocents being accidentally punished, because some corporation's algorithm said so.
Scholars have been worrying about this kind of thing in the real world's justice system for a long time (think of William Blackstone's "it is better that ten guilty persons escape than one innocent suffer" quote, which is over 250 years old[0], or the 1895 U.S. Supreme Court's "it is better to let the crime of a guilty person go unpunished than to condemn the innocent"; and these weren't novel ideas, they go all the way back to the Romans).
Where are the checks and balances for the online world?
There's a world of difference between subjecting an innocent person to the penalties afforded for felonies by 18th-century English law and subjecting an innocent person to "limited access to [Facebook] for a few days." That difference completely changes the acceptable ratio of innocents-suffering to guilty-prevented-from-harm and the expected level of oversight for the process.
Blocking for a few days is not rate limiting. If I am talking to a customer on Facebook and I get blocked for a few days, this could cause me to lose a job. (And no, sometimes I can't choose how to communicate with customers; if they insist on Facebook, then that's where I need to be.) Same for my grandparents: they may be upset if I can't talk to them and I can't reach them in other ways. The problem is that most people are not aware of this risk, and it will catch them off guard, which has the potential to hurt more than it would otherwise. The risk of this makes Facebook an unviable option for me to communicate in the first place.
> If I am talking to a customer on Facebook and I get blocked for a few days, this could cause me to lose a job. (And no, sometimes I can't choose how to communicate with customers; if they insist on Facebook, then that's where I need to be.)
This scenario feels contrived. In this hypothetical, you're talking to a customer using your personal Facebook account and that's the only way you have to contact them? And your employer somehow would see you getting blocked from Facebook as the problem, not their nutty customer relations practices.
> Same for my grandparents: they may be upset if I can't talk to them and I can't reach them in other ways.
Again, this is super contrived. You act like you have no other contact method for your grandparents, and no way to get another contact method.
> The risk of this makes Facebook an unviable option for me to communicate in the first place.
This is a very decent conclusion to come to. I wish more people would do the same. But there's no sense blaming Facebook for using moderation practices that any other platform in the same position would also choose.
I don't know how contrived the examples are for Facebook, as I am not on it, but replace it with WeChat in China and you have reality. Many people really have no other way to keep in touch with some people.
Apart from that, I have relatives in another country, and neither of us has our phones set up to make international calls (because that costs extra money), so while we could get in touch, we wouldn't unless it was something urgent. Instead, either of us would just wonder why the other is staying silent for a few days. And some relatives just refuse to use any way to communicate besides the one of their choice. It's not Facebook, fortunately, but still, the example is not really that contrived.
I also have many friends whom I can only reach through one method. If I lose that method, they are gone, unless I am lucky and can reach them through intermediaries.
ADDED:
Temporary blocks may not be that serious, but permanent blocks exist too. We have seen many of those stories, even here on HN.
The problem is really that I fear many do not think this could happen to them, so when it happens, they are caught unprepared and unaware.
I certainly lost contact with some people because we didn't consider this a possibility.
In this case some actual conspiracies have been outed, like when Cambridge Analytica used Facebook data to influence elections. There's a track record here.
Not saying this is true but sadly sometimes it is.
If you could prove beyond doubt that Facebook cared an iota about this person posting an EFF privacy tips link, I’ll lick any NYC subway pole you ask me to.
Accidentally a perfect analogy, because elephants do think about ants, and take steps to avoid them. The only other small animal they avoid is bees (not mice).
And just like ants, the EFF are one group that could have lots of tiny little warriors scaling up inside Facebook's fleshy trunk.
> The MythBusters hid a mouse under a ball of elephant dung, planning to flip the dung over and reveal the mouse when the elephants approach it. When they flipped the dung and revealed the mouse, the approaching elephant was startled and quickly moved away from the mouse. The MythBusters then flipped dung without the mouse under it, but the elephants did not react at all. They then repeated their first experiment to confirm their results, and the elephant noticed the mouse and actively avoided it. Even though the elephants did not panic at the sight of the mouse, their acting cautiously around them was enough to have the myth be considered plausible, as it was not known whether the reaction was due to fear of or empathy for the mouse.
... Note how they were wild elephants, in an unnatural environment, facing weird dirt behavior.
Similarly, I think there's something poetic about how elephants aren't blocked as much by fences--even if those look more significant to us humans--compared to trenches.
With a fence, the mass of the elephant helps to knock it over. With a ditch or a hole, the elephant's mass works against it. It risks getting stuck or breaking important bones that are already under a lot of stresses.
I'm a paying supporter of the EFF, so they are important to me. But I disagree, it would be a story if this was a regular occurrence for sure, but moderation is a hugely complex, distributed, and opaque process. If somebody found code like `post.censor() if post.mentions("EFF")` then yes, it's an outrage. But it's not gonna be that simple. The real story is probably "when using statistics to make decisions, sometimes things that shouldn't get flagged get flagged, and vice versa"
Also important to note that it's not as simple as having a whitelist of domains that are exempt, because at Facebook scale that immediately becomes an avenue for accusations of bias (see all of the noise around the Twitter Files).
If it's a malfunctioning spam detection algorithm, and the malfunction has nothing to do with Facebook's actual policies about what constitutes spam, it's really not a story in itself, or at least no more of a story than any other downtime.
> Even if it's automated, it's still not great for Facebook that they are treating EFF as a spam link. That's a story in itself.
Exactly! This is also one of the oldest registered domain names around (Creation Date: 1990-10-10T04:00:00Z). I assume FB has a lot of people working on spam prevention, and they should have reviewed all old domains, relevant NGOs, political parties, country-level governments, etc. by now, yet chose not to whitelist EFF. Whether it was malice or incompetence is a matter of (endless?) discussion, however.
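For what it's worth, anyone can check that registration date themselves. A minimal sketch, assuming the third-party python-whois package (`pip install python-whois`):

```python
import whois  # third-party package: pip install python-whois

# Query the public WHOIS record for eff.org; depending on the
# registrar's response, creation_date may be a datetime or a list.
record = whois.whois("eff.org")
print(record.creation_date)
```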
It could just be me, but I feel the offence/outrage culture is very tiring on social media. It seems like certain parts of the internet have become a sponge of human frustration, often misdirected, often unnecessary. Is it healthy for us to keep consuming these micro-outrages all the time?
I would offer that there's no use for social media! Anecdotally, I stopped using it altogether many years ago - I feel I've lost nothing for it, and I am also much happier and focused.
I don't blame you for seeing this as outrage or offence, or as me casting the users of social media as victims. But I think you're labelling this as outrage because we're so used to seeing outrage online.
No, this is just an observation with no strong emotions or reactions attached - it's tiring and it's probably unhealthy. No one has wronged me or caused offence to me. If anything, it's self-inflicted. We all ultimately decide what we consume.
Anyways, your comment was interesting in the way you see this, and I think, illustrative of the point I was making. Outrage online has become so normal it is to be expected.
I think you misunderstand me. If I understood your original comment, you are saying you are tired of the social media outrage culture, with the complaint about the EFF blockage being an example of said outrage.
I was just saying that it's ironic that the outrage is being directed towards Facebook, which is usually the generator of social media outrage, and out of all the outrage you complain about, it's the one directed at the source.
Ah, thanks for clarifying. Yes, I agree. I think that social media is responsible for a lot of this outrage culture, although the culture has spread outside it. People were much less adversarial and easily offended in the 90s and 00s. There was a lot more empathy and agreement, overall.
It seems like there are many small but easy-to-understand factors in social media that led to the outrage culture. In one of Sam Harris' or Lex Fridman's podcast episodes with Jack Dorsey, the idea was brought up that we might have less fracturing in our society if social media had a "thanks" button instead of a "like" button. People "like" a lot of tribal and toxic things, but few are grateful for them. I think things like these show the negative impact social media has had on this part of the social fabric.
You can really see how people are much more likely to take sides and galvanize against each other in everyday life, even when it's against their interests. It's certainly not limited to social media, even if social media may have been the biggest driver of it.
Be strategically annoying about your values, whenever it comes up (but almost never when it doesn't: don't be a vegan stereotype). You can get, amortised, a few organisations to change their behaviour per year, if you find someone interested, make a good case, get your timing right, and there are people with the required know-how.
Can't one value both social media and privacy? I don't think we need to be very one-dimensional about this - "either privacy or Facebook".
You know, ultimately, you can choose to not share that much with Facebook. There are tools for privacy that don't let Facebook get too much from you. The easiest one is clearing cookies automatically in a browser to disrupt long-term tracking. Beyond that, there are VPNs and burner phone numbers. I feel like people are not as powerless against Facebook as you might mean to say, and it's not that hard to care about privacy and value Facebook.
I disagree with your sentiment. Being public/private doesn't make one more/less trustworthy. I would posit it is the business model of the company itself that should be given scrutiny.
Is this the normal understanding of private/public companies? I don't own the USPS nor do I own the national parks. However, taxes pay for them, so we are given access to them. NASA is another example. I can't just walk into the JPL and start using a computer. However, any data that is produced by NASA, I can use just like I can use the national parks.
I own part of Google. I can't take it home with me. I don't even know what that would mean.
> If I own it, I can get a permit to build on it.
It is indeed possible to get a permit to build on/use public grounds.
> If I own it, I can refuse to allow everyone else to use it.
You can vote for representation that could in theory restrict access to just you. The other owners probably wouldn't agree to it, though. There are a lot of owners who want open access.
I think you're confusing things here. You seem to be talking about having full ownership of something, as if partial ownership isn't a thing.
sidfthec is a little confused, but they've got the spirit. Here is the key: the public owns it. Not any individual member of the public.
The public doesn't have a home, but if it did, it already took the national parks there. The public can get a permit to build there. The public can refuse to allow anyone else to use it. Remember: members of the public are not the public. "Anyone else" here actually includes members of the public (ie: you and I).
This is getting painful to watch. He is right, you are wrong. He isn’t being pedantic, he is being precise.
“A public company is one that issues shares that are publicly traded, meaning the shares are available for anyone to buy and sell on the open market, usually very easily. Note that publicly traded companies are not publicly owned -- they are not owned or controlled by any government.”
Please explain the quote in my other comment from your source.
It really just seems like you're uncomfortable with the fact that you're wrong about this, having resorted to name calling. It's ok, this really is a common mistake to make. I do encourage you to read the Wikipedia links you sent though. I think they do a good job of explaining what I have said. Just remember that individuals are not the public. The public actually owns national parks. The whole public as an entity.
Actually I just found another Wikipedia page which may help elucidate this for you, if you're still having trouble with the links you posted.
>Members of the public own stock, so the public owns stock.
You seem to be equating members of the public with the entirety of the public. You need to get this misunderstanding straightened out to correctly understand the concepts being discussed here.
> In most cases, public companies are private enterprises in the private sector, and "public" emphasizes their reporting and trading on the public markets.
Ostensibly, no. But practically, there is a single majority shareholder who can choose to do pretty much whatever he wants with the company. So... kind of?
The point here is that facebook is particularly abhorrent. It's a risk to offload any data to anyone outside of your own self, but it's a serious mistake to interact with a zuck product in any way.
The editorialized title here seems deceptive at best. Near as I can tell from the tiny post, the user in question was restricted for something and then posted screenshots of trying to post links after being restricted. Unless we live in a world of time reversed causality, there’s no evidence for that sensational headline.
Assume for the sake of argument that Facebook users make one million posts a day. Assume its spam detector is 99.999% accurate at telling whether a given post is spam or not.
That's 10 false positives _every single day_. And in all likelihood their spam detector isn't that good, and there are many, many more posts than that.
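A quick back-of-envelope check of that arithmetic, using the hypothetical numbers above (not real Facebook figures):

```python
# Hypothetical figures from the comment above, not real Facebook data.
posts_per_day = 1_000_000
accuracy = 0.99999          # fraction of posts classified correctly

false_positives_per_day = round(posts_per_day * (1 - accuracy))
print(false_positives_per_day)  # -> 10 innocent posts flagged per day
```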
Every now and then one of those false positives will be an interesting web site or one that feels really obviously wrong. But that's just what statistics does. When you take enough samples from a distribution, even very low-probability events happen.
Sometimes something will feel really out of the ordinary and wrong, but it happened entirely by chance.
If there were evidence of systematic decisions like this, it would be more of a story. But what we have here is just a big nothing-burger.
As someone who has worked at big social media companies, I can say the conventional wisdom about content moderation being a carefully planned process is off-base.
The reality is that moderation relies heavily on imperfect machine learning models and overworked human reviewers making rushed judgments on hundreds of cases per day. There's no meticulous strategy document mapping out the pros and cons before banning accounts that upset the company.
Mistakes inevitably happen when relying on this combination of flawed automation and human reviewers who are stretched too thin. The moderation policies may seem arbitrary or politically motivated from the outside, but much of it comes down to hasty human error and buggy algorithms rather than some malicious scheme.
Does Facebook still consider distrowatch spam? It was posted here a couple years ago now, and I was skeptical then and I've always wondered whether it was true/fixed. https://news.ycombinator.com/item?id=29529312
By that I don't mean it is bad, just that it is business as usual for the EFF. The article is expected and not particularly provocative; there are dozens like this on the EFF's own Facebook page.
That's what makes me think the content of the article and the fact it comes from the EFF doesn't have much to do with the blocking.
Classic fecebook tactics: a purely evil company leeching off of its users' privacy. It feels like the cliche evil corp you read about or see in sci-fi novels/movies.
It's so funny that literally every single other post in this thread is a wordy, overly politely written "this was absolutely a false positive and we should really give the benefit of the doubt, waffle waffle waffle" comment that just reeks of AI, with that cloying, overly sanctimonious tone you can detect at the briefest of glances.
Actually it's not at all funny, it's horrifying
It's getting a bit depressing that the only reply so many comments deserve is "Bot."
They did this years ago (possibly 2015) on a UCLA article I tried to post about Facebook censorship. I posted a clip on YouTube (trying to find it, will edit if I do).