This is a very interesting example of a concern that Schneier has long had about "semantic attacks" on the Web. Since I started participating on Usenet in 1994, and even before that on commercial online services starting in 1992, I have noticed that some participants are quite willing to poison discussion of contentious policy issues with made-up "facts." That's why I think it is a good idea, here on HN and everywhere online where people discuss important issues, to check facts and ask for sources or further details for any "fact" that seems striking.
I find HN is pretty good about finding sources and backing up facts. Commenters here will call you out if you say something they can show is false, and I like that.
Important observation, from the post he's referring to:
"But second, be careful of what you read and believe on Twitter. I think some of the leeway granted to InTheStimulus is based on the soundbite nature of the site; people can get away with no citations, which is less likely than with a conventional blog." (emphasis mine)
Something that reinforces one's worldview tends to be less likely to get questioned than something that contradicts it, and a complete lack of evidence can get overlooked: after all, it's obviously true.
This afflicts most discussion, and especially most online discussion. Agreeing with another reply above, I greatly appreciate it when anyone on HN asks follow-up questions about another comment, to check facts and sources.
Yep, but since I am more interested in developing my own abilities than in trying to convince others (of anything), I make a conscious effort to be more skeptical of things I would like to believe than of things I disagree with. Unfortunately, no one has enough time to double-check everything, or even most things.
This has the feel of the Sokal Affair about it. The guy should have announced that it was a hoax over the wire, but the goal is basically the same. Put something together that has the same appearance as what you're trying to satirize, but include some critical flaw, and see if anyone notices.
I wonder if we'll end up seeing any of these in mainstream reporting; that would be quite the thing.
The internet has made agitprop more efficient, along with everything else. Social media is by definition a large group of civilians on a coffee break. Anyone with an agenda and a certain twist of mind can cause large disturbances... when I bring this up to people in the abstract, they don't really get it. Maybe this example will help.
I am keen to see how long Twitter search, as it stands today, remains relevant before it gets spammed enough that a tweet-rank algorithm needs to be used rather than a purely real-time one.
Yes, something less emotionally and politically charged could have been chosen. However, this doesn't necessarily invalidate the results, and I'd be interested in hearing of an experiment that would demonstrate the spread of false information equally or more conclusively, yet doesn't read like an armchair sociologist's blog post. At least there is an experiment and a methodology here.
And what of the morality of testing the capabilities and limitations of systems? While the conclusions you draw from an exercise like this may be "morally corrupt" or logically unfounded, why is the act itself morally corrupt? For misleading people?