Before making a non-trivial purchase online, I typically do a search for "$vendorname sucks" and/or "$productname sucks", and this has saved me headaches on more than one occasion.
I hope that Google's solution won't interfere with finding bad reviews in this case.
Also, many vendors these days will sell through Amazon.com. It's a lot safer purchasing through Amazon.com since they are very intolerant of bad merchants.
From the original NY Times article:
In other words, Mr. Borker is perfectly capable of minding his manners. And he does so, right now, with every order that comes through a store he runs through Amazon.com’s affiliate program. (He declines to provide that store’s name.) He handles those transactions like a Boy Scout because Amazon doesn’t mess around, he says — the company just kicks you off its site if you infuriate customers.
How does that work though? Doesn't this method just return negative results by definition? There are a lot of results confirming that "apple sucks" or "toyota sucks".
Genuinely interested, because the only reliable way I've found is to spend hours and hours reading through niche forums and reviews, and for every great product there are plenty of bad comments out there.
Of course it returns negative results. The key is not to take them blindly.
If they mention specific facts (extra bonus for verifiable ones) and don't sound like they were written by poopheads or mass-produced by the competition, then it's a very strong red flag.
Things like unfavorable terms of service or checked-by-default, hard-to-notice checkboxes subscribing you to worthless additional services are easy to verify, even if you give zero trust to anonymous reviews.
You then have to read the review and see the content.
You contrast "one star, it doesn't do this obscure thing it was never meant to do, but otherwise it's great" versus "one star, it fell apart in three days, seriously" and get the idea.
It's a constant battle, altogether.
Hopefully, the Internet will continue to be deeper than the scamsters' ability to game. And hopefully Google will keep helping in this regard.
And can it recognize a double negative, as in, "all of these Apple sucks sites suck"?
If you can do decent sentiment analysis though, you could lower the page rank granted by a link by sentiment of text in its proximity, while not lowering the rank of the page that contains the negative sentiment.
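A rough sketch of that idea (all names, weights, and the damping formula here are invented for illustration, not anything Google has described): a link surrounded by negative text passes less authority to its target, while the page hosting the negative review is scored normally.

```python
# Hypothetical sketch: dampen a link's ranking contribution by the
# sentiment of the text around it, without penalizing the page that
# hosts the negative review itself.

def link_weight(base_weight, context_sentiment):
    """context_sentiment is in [-1.0, 1.0].

    A link in strongly negative context (e.g. a scathing review)
    passes little or no authority to its target; the hosting page's
    own rank is computed separately and is untouched by this.
    """
    if context_sentiment >= 0:
        return base_weight  # neutral/positive context: full credit
    # Linearly dampen toward zero as sentiment approaches -1.0
    return base_weight * max(0.0, 1.0 + context_sentiment)

print(link_weight(1.0, 0.8))    # glowing review: full weight, 1.0
print(link_weight(1.0, -0.9))   # scathing review: almost none, ~0.1
```

The point is that the penalty attaches to the edge in the link graph, not to either node, which is what separates this from naive "demote pages with negative words on them."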
Me too. They should be able to distinguish between a negative review and the negatively-reviewed website it links to easily enough. Distinguishing the negatively-reviewed website from other links on the reviewing site could be a bit trickier.
While I can understand not giving out details, to avoid gaming, this post as a result doesn't have much content! As far as I can tell, this is the entirety of the description of the change:
we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience
The explanations of what Google didn't do and why are informative, though.
Well, they can't really say so clearly exactly what they did. In an effort to stop people from gaming it, they keep their exact algorithm hidden. When people get around it, they tweak it further, in an endless cycle.
My guess is that their solution isn't actually based on customers' true opinions of the vendor, and thus the solution is cheatable. Hiding the solution is probably their best option at this point.
The people who are trying to game Google know exactly what Google's solution is, or will find out within weeks. The ones left in the dark are everyone else: the innocent, the people who aren't trying to cheat Google, and who, equally, cannot hold Google to account.
Thus, their secrecy serves no purpose but to further their own interests.
Total guess, but I believe Google licenses Yelp's data. They may be using the Yelp ranking in place of sentiment analysis to reduce the weight of those links.
I also do not understand the distinction between the unrevealed "algorithmic solution which detects merchants that provide a poor user experience" and the "sentiment analysis" approach that they say they rejected. I don't get how it is possible to algorithmically detect poor user experience without using sentiment analysis! Isn't negative sentiment more or less the same thing as a poor experience?
That's what I was thinking when I read it, although crawling reviews is another option; both are gameable, though. Even a very good algorithm will still have the problem of planted reviews. I don't think we're at the stage where an algorithm can decide whether a review is genuine.
Google's "opinion" for better or worse is fact. Let's hope they figured out a way to differentiate between genuine bad reviews and those with underlying motives.
Even more, once you move to anything slightly controversial, an opinion can be genuine, meaning sincere, but still not valid for the person searching ("they sold me the movie I asked for but it had too much violence!").
This post is a great rebuttal to those who think that Google is becoming a lumbering giant. I wonder how long it would have taken Microsoft to respond to a similar situation.
> I wonder how long it would have taken Microsoft to respond to a similar situation.
When ASP.NET's AES implementation was exposed to automated attack recently, the DevDiv guys got out in front of the issue with a workaround in about four days.
One of the search terms mentioned in the NY Times article was "Christian Audigier glasses". On Bing, the questionable www.decormyeyes.com site is still the 5th listing.
Not at all! For the reasons Google explained on the blog, it would be a dumb response to remove or penalize this particular company in the wake of the media froth. The admirable and responsible thing to do is figure out how to make the system work better in general, covering the hundreds or thousands of similar cases not in the New York Times.
This issue is complex and I don't have great overall answers. It isn't clear how search results should be biased by sentiment. (That's discussed on the blog and here.) One thing I am sure of: I respect discovering a result you consider bad and then examining the algorithms that produced it. I don't respect deciding that you got egg on your face over your algorithm's results in this one case and jiggering the results by hand.
Of course, we don't know that Bing didn't come up with algorithmic improvements, but the comment you're replying to isn't refuted in any way by the fact that the result has disappeared (or greatly diminished) in Bing. To the contrary, in addition to my bias to agree with the OP on Google / Bing, when one company says, "We've taken a look, made small improvements already that have some impact, and we're looking to make more" and the search result simply disappears in the other case, I'm inclined to take that as specific evidence of the former doing the better job.
You can't argue that Microsoft didn't respond quickly, because there has been a change in Bing's results. The original comment asked "how long until Microsoft addressed this problem?", and it appears that Microsoft has now addressed it, so the point that the comment made stands.
The fact that Microsoft may or may not have addressed the problem by "jiggering the result" has nothing to do with it. There is no evidence one way or the other.
You may be deeply suspicious of Microsoft's behavior, and that's perfectly fine. Let's just be clear about what we're saying. It would have been more direct to say "Microsoft may have responded quickly, but I bet they just cheated."
> and the search result simply disappears in the other case
Except that the search result didn't disappear on Bing; it had the same behavior as on Google.
You seem to have a bias to assume that Google did something algorithmic while Bing just hand-crafted the search results, based on absolutely no facts at all. :)
How cool would it be if the guy behind this site did a writeup now to show what happens to search traffic and income when you shit on customers, then get your very own article in the NYTimes, and then Google tweaks their search algorithm specifically to hurt you?
"We know that people will keep trying: attempts to game Google’s ranking, like the ones mentioned in the article, go on 24 hours a day, every single day. That’s why we cannot reveal the details of our solution—the underlying signals, data sources, and how we combined them to improve our rankings—beyond what we’ve already said."
Doesn't this sound like exactly the things security people used to say, before they realized they needed to make their mechanisms public in order to ensure their security?
This feels like mostly a PR piece to me. And reports seem conflicting.
Google doesn't seem to say what they did beyond that they "developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience."
Techcrunch claims Google "compiled a list of hundreds of merchants (including DecorMyEyes) that provided “bad user experience” and algorithmically forced them lower" (http://techcrunch.com/2010/12/01/googl/).
Hopefully this is PR spin. If it's not, it's pretty disturbing that allegations against one bad merchant can trigger an algo change. How many businesses will be collateral damage of this update? How many fake negative-review campaigns will this spawn? If this is a real algo change, it was done in an extraordinarily hasty, reactionary, and cavalier manner. (Please don't ding my site, G, nothing personal.)
But if we demoted web pages that have negative comments against them, you might not be able to find information about many elected officials, not to mention a lot of important but controversial concepts. So far we have not found an effective way to significantly improve search using sentiment analysis. Of course, we will continue trying.
Google admits, and I agree, that this is a bad idea. But then why are they continuing to attempt it? Wouldn't neutral search results be better than those that favored a subjectively positive or negative business?
There might be interesting ways to combine sentiment analysis with other factors.
For example, off the top of my head, combine it with the selectivity of the search word. If the sentiment is positive about a store, then it may be a good search result for a word like "sunglasses" which is associated with many sites. If the sentiment is mostly negative, then it may still rank very highly for keywords/phrases that are highly selective e.g. "decor eyes" while dropping precipitously for the less selective keywords. I.e.: positive sentiment opens up the umbrella, irrespective of how high the umbrella goes.
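The "umbrella" idea above can be sketched in a few lines (a toy model; the scoring formula, the penalty shape, and all numbers are invented for illustration): negative sentiment caps how well a site ranks on broad queries, while highly selective queries still surface it.

```python
# Hypothetical sketch: combine sentiment with query selectivity so that
# a badly reviewed store still appears for near-navigational queries
# ("decor eyes") but drops precipitously for broad ones ("sunglasses").

def adjusted_score(base_score, sentiment, selectivity):
    """base_score: relevance score from the rest of the ranker.
    sentiment: aggregate opinion of the site, in [-1.0, 1.0].
    selectivity: in (0.0, 1.0]; 1.0 means very few sites match the query.
    """
    if sentiment >= 0:
        return base_score  # positive sentiment: umbrella fully open
    # Negative sentiment hurts broad queries most; the penalty shrinks
    # as the query becomes more selective.
    penalty = (-sentiment) * (1.0 - selectivity)
    return base_score * (1.0 - penalty)

# Broad query: heavy demotion for a badly reviewed store.
print(adjusted_score(10.0, -0.8, 0.1))   # ~2.8
# Highly selective query: nearly the full score despite bad sentiment.
print(adjusted_score(10.0, -0.8, 0.95))  # ~9.6
```

The design choice this illustrates: someone searching for the store by name can still find it (and the bad press about it), while the store can't ride its scam-inflated rank into generic commercial queries.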
This is an excellent point and deserves more votes.
Google does have somewhat "sneakier" options:
- Demote negatively reviewed items only on searches which seem "inherently commercial", i.e., act differently depending on whether the search is for a store or an opinion.
- Have some sort of "realness" index: is this entity actually selling sunglasses.
These aren't necessarily easy so your question stands.
Maybe they are continuing to research it because enough people outside Google think it's a good idea that not researching it would make them appear unresponsive or irresponsible? Kind of like politicians doing something for the sake of being seen doing something.
I agree. Seems to me to be a knee-jerk reaction by Google - bending to negative mainstream press. I bet this decision came from marketing rather than engineering ;)
Research may not hurt. Trial and error does if the error affects the users.
If something bad is happening in the world, like thousands of people being murdered in some remote country, and every page that links to articles about it has a negative tone because they're condemning what's happening, should google hide those articles?
I would be fine with alphabetical results if they would give me a regular expression search instead of recommended results.
Now if they could only get rid of BigResource.com pages. They come up a lot in my searches, especially coding-specific ones, and that site has no content of its own. It just aggregates forum threads from other forums (not its own), and what's worse, it's usually done in such a broken way that you can't read the thread on the page you're given OR find the original forum thread it was scraped from.
Yeah, I know, there are a lot of questionable sites cropping up more and more as everyone is going all DemandMedia on monetizing SERPs without providing much real value. That's kind of my point, and why I don't bother reporting BigResource even if I thought it would accomplish something (which it wouldn't) -- because it is by far not alone.
Google used to be so good at identifying scraped or made-for-AdSense sites, and it seems more and more of them are not only slipping through the cracks but in fact dominating the search results. To me this is a much more important issue than people getting a ton of backlinks from negative stories/comments, since anyone with half a brain would be wary of giving money to such a merchant if they actually checked (by searching for the company itself directly and discovering all the bad press, instead of just searching for "$mycity $productname" and then handing over their credit card info). Yes, I know that's too much to expect of most people. But really, I think Google should first try to help people who use the engine effectively but still get poor-quality results, before worrying about people who can't or won't bother to use it effectively.
If Google's problem has been pushed from "people can be bad and game our system" to "people can be bad and break the law" then I'd say they don't have to chase the problem much further than that.
Also, you don't think ISPs can be subpoenaed[1] for the names and addresses of "anonymous" people?
Why not just put sentiment analysis directly in the search results?
Search for: Designer Discount Glasses
and maybe this page still is on the first page, but have a frowny face by the link, to indicate that sentiment analysis is largely negative.
Don't hide anything, but make how the site is relevant more apparent.
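A toy sketch of that annotation idea (the thresholds and scores are made up; this only shows the shape of it): map aggregate sentiment about a site to a small indicator displayed beside the result, leaving the ranking alone.

```python
# Hypothetical sketch: annotate each result with a face summarizing
# aggregate review sentiment, instead of demoting the result.

def sentiment_icon(score):
    """score: mean sentiment of review text about the site, in [-1, 1]."""
    if score > 0.3:
        return ":)"
    if score < -0.3:
        return ":("
    return ":|"  # mixed or little signal

results = [("decormyeyes.com", -0.7), ("zennioptical.com", 0.5)]
for site, score in results:
    print(f"{sentiment_icon(score)} {site}")
```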
> Google is as responsible for this as a phonebook company would be for giving you a bad phone number.
Generally, I agree, except that there is an important distinction to be made - the phonebook generally doesn't rank businesses into any order other than alphabetical. Yes, the phonebook has 'paid listings' in addition, which are bigger and more prominent, but google has these too, and like the phonebook these are labelled as such.
But the general google listings are sorted into a form of meaningful order, people expect (rightly or wrongly) that the higher up in the general listings a result is, the better it is likely to be because more people use it.
I'm not sure what the best solution to the issue is, perhaps just to accept that google isn't the best resource for finding reputable businesses, and try and educate people on the alternatives.
Would anyone else like sentiment controls in google's search tools sidebar? Three settings I'd use would be:
• Show more results about which people talk positively
• Show more results about which people talk negatively
• Show results which generate strong opinions, both positive and negative
Alternatively, a "show sentiment for this search" which added a small indicator next to the search results might do the trick. (Just to be clear, I'm not proposing adjusting the ranking of pages based on the sentiment on the page itself, interesting though that might be, but rather based on the sentiment in the text surrounding links to that page.)
From Google's blog:
As it turns out, Google has a world-class sentiment analysis system (Large-Scale Sentiment Analysis for News and Blogs). But if we demoted web pages that have negative comments against them, you might not be able to find information about many elected officials, not to mention a lot of important but controversial concepts. So far we have not found an effective way to significantly improve search using sentiment analysis. Of course, we will continue trying.
Rule number one in online retail is looking up the vendor before you buy anything, no matter how much money you are saving. I don't see why anyone should blame Google for this. Next thing you know, people are gonna want directions on Google Maps that others recommend, as opposed to other metrics. If I got carjacked on a certain route, should I blame Google because it gave me directions?
This was an artful response. Google took what was essentially a piece of negative press from the New York Times and used it as an argument against the call for transparency in the algorithm that so many major media sources are pushing for lately.
"we developed an algorithmic solution which detects the merchant from the Times article along with hundreds of other merchants that, in our opinion, provide an extremely poor user experience."
The implication seems to be that they've found a way of programmatically identifying these offenders.
Of course, this solution probably produces lots of false positives, so they will have to change it later when other complaints are raised.
I think the original idea of Google was great, but with each iteration their search results get worse. In the end, it seems that anything they do can be gamed in some way.