This is why reputation and reliability matter. While the example in this post is the equivalent of "I got major news outlets like TMZ to think that Paris Hilton was getting married," many types of news aren't so inconsequential. In the first few days after the Connecticut shooting, there were so many inaccuracies that it made me reconsider other "facts" I heard which were never obviously disproven (e.g. the reported shooter's name was clearly wrong once the man named turned up in New York - most inaccuracies aren't caught like that). As with many blogs, the misinformation often originated in a single benign inaccuracy that was then propagated through other news agencies.
What I would love to see is a method for tracking or indexing the reputation of a site by some automatic means. As it is, I can kind of infer how reliable a site is based on the language I see, the types of headlines, and whether others have told me it's reputable. I would love to see that automated somehow. News aggregators try to solve this via upvotes, downvotes, or flags, but there are plenty of reasons to upvote that have nothing to do with reliability per se, and trying to use multiple upvote buttons seems silly, because at the end of the day rank is one-dimensional: you'll have to combine "funny", "reliable", and "interesting" into some single number anyway.
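To make that last point concrete: any aggregator with multiple vote dimensions still has to collapse them into one sort key. A minimal sketch of what that collapse looks like (the dimensions and weights here are made up for illustration, not from any real aggregator):

```python
# Hypothetical weights for collapsing multi-dimensional votes into one rank.
VOTE_WEIGHTS = {"reliable": 0.6, "interesting": 0.3, "funny": 0.1}

def rank_score(votes):
    """votes maps each dimension to a count, e.g. {"reliable": 40, "funny": 5}.
    Returns a single number: the weighted share of votes in each dimension."""
    total = sum(votes.values()) or 1  # avoid division by zero
    return sum(VOTE_WEIGHTS[d] * votes.get(d, 0) / total for d in VOTE_WEIGHTS)
```

Whatever weights you pick, they bake in an editorial judgment about how much "reliable" matters relative to "funny" - which is exactly why a single upvote button hides the reliability signal.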
It's a "hard problem," but since the value of information depends so heavily on its accuracy, solving it would appear to be very lucrative, and valuable to a society in which the quantity of available information grows by leaps and bounds each year. Google seems to have solved "relevance"; I would love to see someone solve reliability.
My initial impulse is to say, "any such automatic system will be gamed beyond usefulness". Having said that, if someone does solve the problem that would be great.
Is this a major problem with Google now? I know this is completely subjective, but my friends and I have talked about this a couple times, and we feel that Google used to be significantly more useful 5-7 years ago. Has SEO done that much damage, or has the way we use the search engine changed? Or perhaps something else?
The way we use the news has changed. Many years ago we would sit and read a newspaper. Nowadays it's all about short headlines or very shocking bits (partially because of computer-based news consumption behavior and partially because of the breadth of availability). Catering to this instant, very short style of news consumption involves a greater indulgence in sensationalism. Google contributed to this insofar as it allowed us to select which sources and which articles to read based on the title or blurb alone.
I wonder, at a deeper level, if the political divide in the US is fueled by the "bubble" that Google allows: many years ago, people read newspapers covering all points of view, but Google News lets you select which sources you draw news from (allowing you to select only Fox News and the Drudge Report, for example).
I'm not sure this is quite as true as we think it is. Long before the internet we had physical newsletters and pamphlets and fliers with their own slants, and we had newspapers with agendas of their own and no particular standard of ethics, &c, &c. I'm not sure "headline makes me click it when I see it on Google" is a worse pressure than "headline makes me buy a paper when child selling papers on the corner shouts it".
That said, there's unquestionably room for improvement.
In a way, there is already a system like this in your brain. Reputation is something you formulate yourself based on meta-information about the outlet you're reading. You do this already: ever read an article and look up at the URL to check and see if it's hosted at a reputable source, like nytimes.com or salon.com?
What would help is more metadata available to the reader. So, instead of measuring an unmeasurable "truthiness" (because really, what is truth, in the philosophical sense...), how about a Chrome plug-in that pulls down metadata from other sites that have written about this outlet?
So, you read NYTimes, and your little plug-in pops up some links to http://en.wikipedia.org/wiki/Jayson_Blair or some such similar thing. You can read up on that and see that there have been issues in the past, but that for the most part, NYTimes remains reputable.
I guess it would need some sort of Hacker News-like inaccuracy tracker: post bad stories, accumulate more up-votes on the tracker, and become more likely to be surfaced as relevant meta info for the TLD the stories come from.
NYTimes does not have a single unified reputation.
A front page story like "Pentagon Lifts Ban on Women in Combat" is probably highly accurate, breaking news less so. Op-Ed, might as well be a random blog.
True. Add to this the fact that outlets like the Wall Street Journal are now Fox News puppets, plus MSNBC's desire to be the liberal Fox News, and there's almost nothing left for reliable news.
Still, BBC, Al Jazeera, and NPR are where I go. That, and hyperlocal sites.