This article gives a really good summary of the problem. It makes the counterargument that fake news is very different:
> Fake news, on the other hand, is almost always the opposite. You want to read that stuff. For example, Casey Newton pointed to this study in his Interface newsletter that says some of the “fake news” is even more engaging than the real news.
This is it. The problem is that Bob going around posting "Alice is the antichrist" everywhere on Facebook to one hundred million Charlies may eventually result in one of them murdering her. It will certainly result in Alice being harassed and will limit her career. What protections does Alice have against this? Note that "just don't use Facebook" doesn't help Alice, because the problem is between Bob and the Charlies.
There used to be people who argued that spamfighting, at all, in any form, was censorship. They have largely given up on that one, since nobody wanted the spam, but moved their dogmatism elsewhere.
Edit: there is a second-order problem in "recommendations". If Facebook (or for that matter YouTube) were just a chronological list of posts, the only way to get more attention would be actual spamming - posting the same thing repeatedly to get into the user's very finite wall space. However, that's not how they work. Facebook (or a computer system wholly controlled by Facebook) selects certain posts you might want to see over others. If Facebook "recommends" the post saying "Alice is the antichrist", to what extent are they responsible for the abuse Alice receives? It's not obviously zero.
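The distinction can be made concrete with a toy sketch. Everything here is made up for illustration - the names, the posts, and the single `engagement` score standing in for whatever signals a real ranker uses:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str
    timestamp: int     # seconds since epoch (hypothetical values)
    engagement: float  # stand-in for clicks/comments/shares

def chronological_feed(posts):
    """A plain reverse-chronological wall: newest first, no curation."""
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

def recommended_feed(posts):
    """An engagement-ranked feed: the platform actively selects what rises."""
    return sorted(posts, key=lambda p: p.engagement, reverse=True)

posts = [
    Post("Bob",  "Alice is the antichrist", timestamp=100, engagement=9.7),
    Post("Dana", "Local bake sale photos",  timestamp=300, engagement=1.2),
    Post("Erin", "My cat, again",           timestamp=200, engagement=2.5),
]

# Chronologically, Bob's older post sinks down the wall...
assert chronological_feed(posts)[0].author == "Dana"
# ...but an engagement ranker surfaces it to the top of everyone's feed.
assert recommended_feed(posts)[0].author == "Bob"
```

The point of the sketch: in the first function the platform is a passive conduit, while in the second it is making an editorial choice, which is where the responsibility question comes from.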
> There are some easy wins here, at least in theory. I think a great deal of fake news is spam-like and can be eliminated by similar techniques.
I agree wholeheartedly with this. As someone who ran an email newsletter tool that battled spammers on a daily basis (successfully), I see many similar tactics working.
The issue for me, however, is the subjectivity and inconsistent enforcement. Spam has clear lines drawn, especially around opt-in and authentication. Fake news is often susceptible to interpretation, especially around politics.
> For example, a defining quality of spam is not just that it is unsolicited, but that it is annoying.
Sure, that's true, but from a spam-prevention perspective, CAN-SPAM is mostly about whether the email is unsolicited, not its emotional impact.
Personally, I think it's about definition. If we can define fake news the same way we can define "unsolicited email" (rather than calling it spam), I think we can begin to tackle it.
I don't find that spam is a solved problem. This morning, five minutes ago, I checked my overnight mail. Six messages. Three were outright spam, one was a survey (a person could consider it spam, since it was unsolicited commercial mail, but I'll call it legit just to be generous), and two were for real. My organization has professionals in charge of mail, who seem to me to be capable. Do others find that it is solved?
I get a lot of spam, mostly correctly categorised for me, but some leaks through.
I do find it a solved problem in the sense that it's easy to deal with, though - as long as we're considering phishing a separate issue. It's easy to identify spam (apart from anything else, I often get several identical messages at the same time, possibly just from different names!) and click a button to get rid of it.
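That "identical message from different names" signal is one of the classic bulk-spam signatures, and it's cheap to check. A minimal sketch, with made-up addresses and a deliberately crude normalisation step:

```python
import hashlib
import re

def fingerprint(body: str) -> str:
    """Hash a normalised message body so identical content matches
    even when whitespace or capitalisation differ slightly."""
    normalised = re.sub(r"\s+", " ", body).strip().lower()
    return hashlib.sha256(normalised.encode("utf-8")).hexdigest()

def flag_duplicates(messages):
    """Return indices of messages whose body was already seen from a
    *different* sender -- the same text under many names."""
    seen = {}      # fingerprint -> first sender observed
    flagged = []
    for i, (sender, body) in enumerate(messages):
        fp = fingerprint(body)
        if fp in seen and seen[fp] != sender:
            flagged.append(i)
        else:
            seen.setdefault(fp, sender)
    return flagged

inbox = [
    ("alice@example.com", "Lunch on Friday?"),
    ("deals@spam1.test",  "You WON a prize!  Click now"),
    ("deals@spam2.test",  "you won a PRIZE! click   now"),
]
assert flag_duplicates(inbox) == [2]
```

Real filters use far more robust near-duplicate hashing, but even this toy version shows why bulk spam is mechanically easy to spot in a way that a one-off fake-news post is not.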
It's much harder to identify 'fake news', and even if you do, once you've forgotten about it, it may linger in the back of your mind as a thing you heard - having forgotten that you proved it false.
Ain't gonna happen, as it goes against the very core business model of Facebook. Facebook literally makes money by making spammers pay them to shove spam into your feed.
(They've also managed to successfully blur the line between spam and ham, and made it trivial to accidentally subscribe oneself to spam.)
Exactly. Facebook has to ride the edge of showing people stuff that winds them up enough that they engage, but not so inflammatory that they quit the site or become abusive.
(People considering liability for speech acts in murder cases may like to read https://en.wikipedia.org/wiki/Derek_Bentley_case )