"The students found the writer voicing negative opinions much more intelligent and persuasive than the one voicing praise."
Well, this resonates. One of the things I do when scouring online reviews is to seek out the negative reviews. I dismiss reviews that are irrelevant to the product itself and do (what I hope is) a reasonable job of filtering out the less-informed negative reviews. When I find a good, informed negative review, I tend to search through that particular reviewer's history to find what they HAVE reviewed positively, because I do tend to think that person has more expertise/knowledge.
Until I find a good, informed negative review on something they've rated positively.
Resonated with me a lot. As a faculty member, I watched college students rally around a critic (no matter how illogical their words) and, almost out of habit, openly attack anyone creative with criticism in class.
Also, I have seen religious people constantly rally around negativity for decades, and it has always rubbed me the wrong way. This seems to be the same idea, that the negative stance is seen as the most intelligent. I have years of Theology and learned 3 dead languages to better understand Theology and the scriptures, and yet some yahoo with a high school education will have the upper hand whenever the talk turns to Kindness or Trust or Love. The message of fear is very powerful, and sadly the message of hope and love is seen as weak.
It's not just the students. Faculty members do it. Board members do it. It's a cheap way to look smart and can lead to a toxic environment if left unchecked. It's much harder to make positive work than to criticize.
"Think not that I am come to send peace on earth: I came not to send peace, but a sword.
For I am come to set a man at variance against his father, and the daughter against her mother, and the daughter in law against her mother in law.
And a man's foes shall be they of his own household."
I guess I go with Nietzsche's idea that Christianity is an inherently negative faith. There isn't anything inherently wrong with that, but you're basically trying to live divine law in a material world. That's not exactly going to bring out the kind and trusting in people, even if the divine law is love.
Frankly my opinion on the majority of religious people I’ve met is that they aren’t actually religious and just want a club that encourages them to be a bit nicer, bit more prudent, and where people are a bit more friendly. Sometimes I worry that they are messing with something immensely powerful as if it’s a toy but perhaps it’s not my place to consider that
I find negative reviews better, since they tend to have actual discussion of issues and flaws. If someone gives 1 star because of the post office, or because Wolfenstein dares to portray Nazis in a bad light, you roll your eyes and move on.
I think a positive review could have a similar impact but it would need to be really detailed like say "I did the math and this is the optimal price point for data storage vs cost".
The issue mentioned of expected rating inflation from unrealistic standards is probably why. We are used to cheap crap being called awesome and the greatest thing ever, and we filter out the noise.
Negative reviews are the anti-marketing; it is no wonder people pay attention.
My guess is that in pre-technological times, you were sticking your neck out when you criticised something.
Saying "XYZ" is great is just an expression of wanting to fit in with the group, and doesn't require explanation.
Saying "XYZ" is awful is probably hurtful to someone nearby, and you're going to have to think about what exactly is bad about it. And not only is it bad, it's so bad you are willing to risk the happy consensus in order to help the group.
Of course things that worked in small, close groups might be gamed in larger, more loose ones...
Took me over a minute to figure out that JavaScript was mandatory. This nullifies any amount of loveliness for me. JavaScript shouldn't be required to read text, dammit.
You could argue that HN is similar, where the highest comments are often the most contrarian. Which isn't to say they aren't valid, but I think it's important to be aware of this pattern.
Either we agree and elaborate, or disagree and eloborate (which is hard when you simply don't have additional knowledge). If we just agree, we upvote, because there is nothing to say, right? So that itself shifts the "pattern" some more.
I remember it also being like that in the small forums I remember fondly. We didn't have upvotes, but I still noticed that agreement is shorter than disagreement. But that was fine; since we kept talking through the months and years, we actually learned a lot. The flamewars were never so bad, because we were few and close enough not to "stop" them but simply throw all the fuel into the fire right away, and then clear the air after it had all burned. I remember a guy thanking us when he finally got around to not being a bit of a racist anymore. We ribbed him occasionally for several years, but never in a mean way, and he ended up writing us a thank-you-letter style post. I forgot all the details, but I'll never forget that it actually happened. Still proud of that guy, whatever the nick was ^^ That really can only happen in small, long-term communities, I think. Maybe it's simply not possible to "scale" that indefinitely, because it's about human capacity.
Whoops sorry, I only just saw I messed it up a bit (other than "eloborate" :D), but can't edit it anymore, I meant to write:
> either we agree and elaborate (which is hard when you simply don't have additional knowledge)
meaning in that case we can only upvote, i.e. just agree and don't elaborate. I more often find myself agreeing with comments than in a position to add anything to them.
I had a conversation in a forum recently about something like this. This particular forum has a kind of upvoting system but no downvoting system. Occasionally there are threads asking to put in a downvote system, or asking why there isn't one.
The general consensus tends to be that it's easy to just agree with something but if you disagree with a topic or post it encourages better conversation to have to explain your stance and why you disagree.
I've posted on those forums for a long time, and generally I notice this is what tends to happen, as opposed to a place like reddit, or even here sometimes, where a post just gets downvoted away with no counterpoints or discussion.
I look at reviews which are in between the lowest and highest ratings, in order to discard possible "ballot stuffing", and to find reviews which may be more nuanced.
I do this too. But from other comments in this thread, I wonder if 5-star and 1-star comments are significantly more influential than 2- to 4-star comments?
On a related note, taking the point about negativity's correlation with being perceived as more intelligent: I've long wondered whether Hegel's notion that thought itself is fundamentally negative is true; it seems to make some intuitive sense and also gels with findings like those the article opens with. If this is true to some extent, it's perhaps not that positive thoughts are less intelligent, but that meaningful positive thoughts are more difficult to obtain, as one necessarily has to pass through the negative first (thinking) to reach them. Outright positivity is not thought but simple satisfaction. If something doesn't disappoint us, we really don't have any reason to think about it, aside from the way in which we think about wants and urges, which isn't really thinking so much as it is desire. Really, only those poetic spirits seem to have mastered thinking purely positively: that is, thinking desire.
In other words, it is not negative thought that's praiseworthy, but critical thought, thought that questions, which must begin negatively. Often this questioning begins (and unfortunately ends, for most) by questioning a thing on the aspects that disappoint or dissatisfy; that is, after all, what urges the questioning on. To emerge from that position and to be able, in turn, to acknowledge the positive qualities of the thing is to dwell in that serene and moderate realm called reason: not too negative, not too positive, reasonable.
If someone is negative I usually hold them to a higher standard (and, for a given level of intelligence, think them dumber) in many cases. If you're wrong but positive, at least you're helping the mood and bringing up the people around you. If you're negative and incorrect, you're both wrong and damaging the social mood.
Reading this reminded me of an old favorite lecture on the origin of the word "loser". It turns out "loser" wasn't always a widely used noun; it came into popularity as shorthand for lenders evaluating loan prospects, specifically in order to turn someone's past experience into a current trait. He goes on to argue that this sleight of hand really contributes to anxiety and inequality, because it reinforces the idea that one bad experience can define your identity.
Negative reviews can generally be tracked easily by registered user. It seems like a good enough idea that the force of negative reviews should fall off exponentially with respect to frequency per user that I'm surprised it's either not done more, or not made more apparent.
If someone complains all the time about everything, the people around them quickly learn to ignore it. Our systems should too.
You are exactly the intended use case. If you write a bad review every time the system will soon ignore you. If you seldom write a review but bad is the only kind you ever write, each will carry some weight after a significant "cool-down period".
In other words, we successfully encode the difference between "wow, juhzy only complains when it's really bad" vs. "ignore juhzy, they complain about everything." This vs. the more naive and destructive "5 bad reviews and you're out" we seem to do too much of now.
I suppose it should work the other way too. Someone who always "5-stars" everything should be ignored as well.
The point is that the system right now seems to have a negative bias and is able to do so because of a glut of providers which it burns through like an expendable, renewable resource. (Which morally sucks because they are real people with real lives.)
Your point is very insightful. Implicit in my suggestion is the fact that the service is able to count how often you use it vs. your +ve and -ve reviews.
When I say "seldom" I of course mean "uses the service often, but seldom complains". The unit of time in this exponential weighting is not chronological, but uses of the service.
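The weighting described above could be sketched roughly like this. This is a minimal illustration, not any real system's implementation: the function name, the decay constant, and the per-user counters are all made up, and "time" is measured in uses of the service rather than on the clock.

```python
import math

def negative_review_weight(n_uses, n_negative_reviews, decay=5.0):
    """Weight applied to a user's next negative review.

    Decays exponentially with the user's complaints-per-use rate:
    a chronic complainer's reviews approach zero influence, while
    a rare complaint from a frequent user keeps nearly full weight.
    """
    if n_uses == 0:
        return 1.0  # no history yet: take the review at face value
    complaint_rate = n_negative_reviews / n_uses
    return math.exp(-decay * complaint_rate)
```

With these made-up numbers, a user who complains after every use gets a weight of roughly 0.007, while one who complains once in twenty uses keeps a weight of roughly 0.78.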
Some people can complain about anything. (When I'm in a grumpy mood, I certainly will. Puppies? Too wiggly. Sunshine? Too bright!) When I read a review, my goal is to find out what I would think of the place or thing. The reviewers I look for are balanced and thoughtful, able to point out the good points of something they hate and the bad points of the things they love.
I have a very smart friend who apparently has never liked a movie. If one comes up, he will have a complaint about it. If you see one at a theater with him, before the credits have finished rolling, you'll be getting a list of his issues. It's exhausting. And it means I'll never listen to him on the topic of whether a movie is good.
It's the classic stopped clock: it's right twice a day, but you can't know which times, so why look at it?
If the ratings scale is 1-10 and all of your ratings are 1-5, does that mean you only give reviews to things in your 1-5 range, or does that mean everyone else's 10 is your 5?
From a perspective that is external to your inner monologue, I have no way of knowing. Even if your criticisms are valid, I know too many people who never talk about the positive parts, so I can't tell whether you're just perpetually unimpressed or, more likely, the type that never talks about the nice parts. I'm guessing this because of your comment elsewhere in the tree that specifically asks why one should compliment something for doing what it's supposed to do. (The answer is that people usually look at reviews for confirmation that X is as advertised.)
Or there's actually nothing redeeming about the subject of your review.
So yes, taken in context I would end up throwing your opinion away. In practice, I don't check every reviewer's review history, but maybe that would be a useful signal to see.
No, but it shows that you only care to share in the bad, not the good. There is a tint to your worldview and it'd be useful for others to see that when weighing what you have to say.
For the record, I rarely leave reviews. When I do, though, it's only for the exceptionally bad or the exceptionally good.
I'd argue that the bias is a natural consequence of two interacting effects, neither of which necessarily represents a "tinted worldview": a skew in quality distribution and a minimum threshold for information value.
It's hard for a product or service to exceed its expected value by more than a fraction or small multiple, while it's easy to cause misery well in excess of a dozen or even a hundred times the expected value. I believe that's more a reflection of it being easier to destroy vs create rather than a reflection of psychology. Combine this distribution with a heuristic to only report deviations from expectation in excess of a minimum threshold and the "average review score" will reliably undershoot the actual quality.
That's only a problem if you want to interpret the average review score as an absolute measure of quality, though, and I don't think anyone really does. Most of us are more interested in communicating and informing our decision processes than in passing moral judgement, and if our goal is to optimize communication then we should expect negative reviews to dominate the discussion because they're inherently capable of more meaningful excursions from the mean.
I agree with your overall point: it would be fantastically useful to be able to contextualize reviews against reviewer psychology. That way I could ignore both shouting from negative-nellys and forced positivity from those who feel compelled to balance the universe :-)
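The asymmetry argument above can be shown with a toy calculation (all numbers invented): upside deviations from expectation are small, the rare downside failures are large, and only deviations past a "worth mentioning" threshold turn into reviews.

```python
# Hypothetical deviations from expected value for ten experiences
# (positive = better than expected): small upside, rare huge downside.
experiences = [0.2, 0.1, -0.1, 0.3, 0.05, -0.2, 0.4, 0.15, -3.0, -5.0]

THRESHOLD = 0.25  # only deviations this large feel worth reviewing

reported = [e for e in experiences if abs(e) > THRESHOLD]

true_mean = sum(experiences) / len(experiences)  # -0.71
review_mean = sum(reported) / len(reported)      # -1.825

# The review average undershoots the true average because the
# reporting filter keeps all of the large negative excursions.
```

The exact figures don't matter; any distribution with bounded upside, heavy downside tail, and a reporting threshold produces the same undershoot.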
>if our goal is to optimize communication then we should expect negative reviews to dominate the discussion because they're inherently capable of more meaningful excursions from the mean.
In reality, however, it seems that positive reviews tend to dominate. Using Google Maps reviews as my barometer, I hardly ever see any place rated less than 4.5 stars. So, I tend to think to myself "4.5-5 stars: might be good. 4 stars: probably okay. Less than 4: maybe steer clear."
Though, in practice I disregard reviews, take a plunge, and then decide on my own. Often I find myself in conflict with the average majority opinion.
Why would I write a good review? If I go to a place and pay I expect to be treated well; being treated the way I expect is not noteworthy (or "reviewworthy")
Your response sounds flippant, but I am with you, except I don’t see the value in writing reviews at all, good or bad. I have no stake in whether another person is persuaded to patronize or avoid a place. Moreover, I doubt any online review I write would have any impact on the business itself.
If I have a specific complaint for a place close to my heart, like a coffee shop or restaurant or local shop, I’ll talk to the manager, privately and calmly, and be on my way.
I'm not trying to change your process, but just letting you know that that process is not the same for everyone, hence it'd be useful for others to know your "tint."
It's interesting that your main criterion is how you feel you were treated, though. As discussed in the article:
"Restaurant reviews in which people sound traumatized by perceived injustice don’t tend to comment much on the food — it’s usually the perception of being treated rudely or uncaringly that seems to have pushed people into processing by writing out their feelings in a public forum."
I think it's normal. Poor food may mean many things, and most of them are not malicious. But being treated rudely or uncaringly? That deserves a bad review.
>Poor food may mean many things and most of them are not malicious.
I honestly feel the same about poor service. I'm usually accommodating and understanding, but I'm no monk. Of course there are times when I feel either the poor food or poor service merit some mention.
Most of the time, however, I think "they're human, going through human things. No big deal."
Most people look at reviews to see whether the thing actually is what it claims to be, so reviews saying that it is in fact what it claims to be are useful.
Doesn't mean you have to, just means that it'd be more helpful to others if you write "it actually is what it claims to be" reviews.
Because the world is not divided into good and bad, and I care about the 'why' of a rating much more than I care about whether it's positive or negative.
> Negative reviews can generally be tracked easily by registered user. It seems like a good enough idea that the force of negative reviews should fall off exponentially with respect to frequency per user that I'm surprised it's either not done more, or not made more apparent.
> If someone complains all the time about everything, the people around them quickly learn to ignore it. Our systems should too.
That assumes that "too much" negative criticism is false and merely the result of a disagreeable personality. I think that assumption is false.
I think that's false; people who frequently post negative reviews may just be the kind of people who have legitimately higher standards, vs. people who are happy with even objectively crappy things.
I don't think the frequency of negative reviews, by itself, gives you any real information about the quality of the reviewer.
What surprises me is that sensible review systems seem to be used least by the sites that ought to know best.
Amazon, as we all know, basically expresses quality as a raw average of stars plus a histogram, with no correction except for removing reviews. Meanwhile, BeerAdvocate is essentially just a labor-of-love beer tracking site, but it offers multiple secondary stats about ratings. Each beer gets a percentage deviation stat (pDev) to show how varied its ratings are, each specific review lists its deviation (rDev) from the product's average review, and each reviewer's profile offers their frequency of reviewing above, below, and inside the average window (|rDev| - pDev > 0, then rDev - pDev).
I don't think anything is actively done with those stats, but even that's enough to spawn forum threads where reviewers discuss which beers are most controversial, which beers they gave outlier reviews to, and whether they're typically harsh or generous reviewers. The site also recommends rating sub-categories to get people thinking about different aspects of a product, while Amazon is laden with reviews that are either myopic (e.g. that XKCD about a tornado tracker) or completely off-topic (e.g. about the seller instead of the product).
A site that wanted to go a little further could use the same stats I mentioned for spam detection (there are several papers on doing that effectively) and for score correction. (Basically, take a user's average deviation over all reviews and use that to scale or adjust their impact on averages.)
Given how easy all that is, why do Yelp and Amazon consistently have some of the least useful reviews and averages of any site I know? I suspect this article nails it - casual readers appreciate the simplicity more than depth.
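The score correction mentioned above could look roughly like this naive three-pass sketch. The data, names, and numbers are invented; a real system would also need to handle users with sparse rating histories.

```python
from collections import defaultdict

# Hypothetical (user, product, stars) data; "harsh" rates
# everything roughly a star lower than everyone else.
ratings = [
    ("alice", "lamp", 4), ("alice", "mug", 5),
    ("harsh", "lamp", 2), ("harsh", "mug", 3),
    ("bob",   "lamp", 4), ("bob",   "mug", 4),
]

def mean(xs):
    return sum(xs) / len(xs)

# Pass 1: raw per-product averages.
by_product = defaultdict(list)
for user, product, stars in ratings:
    by_product[product].append(stars)
raw_avg = {p: mean(s) for p, s in by_product.items()}

# Pass 2: each user's mean deviation from those averages (their bias).
by_user = defaultdict(list)
for user, product, stars in ratings:
    by_user[user].append(stars - raw_avg[product])
bias = {u: mean(d) for u, d in by_user.items()}

# Pass 3: subtract each rater's bias before re-averaging, so a
# habitually harsh 2 counts like everyone else's 3-and-change.
adjusted = defaultdict(list)
for user, product, stars in ratings:
    adjusted[product].append(stars - bias[user])
adj_avg = {p: mean(s) for p, s in adjusted.items()}
```

With this toy data, "harsh" comes out with a bias near -1.2, and their corrected 2-star lamp rating lands right next to alice's corrected 4.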
Now I'm imagining a system that never displays a user's absolute rating. Instead, ratings are only displayed as a number of standard deviations above or below that user's mean rating.
The main drawback I see (assuming that you are seeking an honest picture of that provider's quality, and not a tool for punishment) is that you don't meaningfully capture people who only write reviews to flag serious problems.
> you don't meaningfully capture people who only write reviews to flag serious problems
Yes - reading the comments here I'm realizing a major problem with any reviewer-centered system is that people decide whether to review on hugely varied conditions.
An always-five-stars reviewer might just be easily impressed (or a fraud), but they equally might subscribe to "if you don't have anything nice to say...". And quite a lot of people write exclusively bad reviews, but it's not obvious how to discern grumpy reviewers from people who only speak up about major issues.
A partial fix might be available by analyzing how far a given review is from that user's average difference from the product average, which could discern a five-star bot from a person who only reviews great products. But even that doesn't solve the XKCD problem, where a product has median-case appeal but a high rate of critical failures. In true "what can't meta-analysis fix?" style, this could be improved by looking at a user's average distance outside 1 SD of the mean review, or perhaps by special-casing products with multimodal reviews.
Of course, it's deeply unclear how to convert this to an output. Scaling reviews based on reviews sounds like a nightmare, reviews shouldn't be differential equations and no one wants to see 4+ layers of statistics to buy a new lamp. Perhaps all of the indirect work could be done off raw ratings behind the scenes to produce a general "adjusted average" for display?
(More realistically, the serious-problem case only seems solvable by reading text reviews, and just devaluing outright fraud and always-angry cranks would be a massive improvement over existing systems.)
That's true. And vice-versa. Reminds me of the infographic where they compare American and Eastern-European responses to quality (https://i.imgur.com/h4ybnAN.jpg).
I think the other interesting factor is that the more you tailor your life's experiences, either in actuality or in perception, the less happy you seem to be. I believe this to be similar to the paradox of choice, where extra choices beyond a small number not only stress out the chooser, but result in less long-term satisfaction with the final decision.
Back when I was a kid, the choices available within the budget of a lower-middle-class person (holiday, house location, schooling, medical care, dentistry, food, restaurants, everything) were far more limited.
You were delighted to even be in a restaurant, on best behavior and it would have to be terrible to warrant complaint.
People can now eat and drink pretty much whatever they want, from around the world, at any time of the year. With service businesses available to cater to every whim, people of a certain level of affluence can have their lives very tailored to their tastes and expectations of service.
To then not get that, not be treated as the in crowd or perceive being valued, can be very jarring.
Easy to dismiss as first world problems but I think it’s a real point.
Expect to see this, and/or its close cousin the McNamara Fallacy, among the top blogpunditry memes of the next few years. If only it were possible to buy stock in these phrases...
I don't know, but I prefer binary. I agree with the author about numeric ratings. If service meets my expectations, then I want to put a 3 or 4, which to me means average or above average. A 5 means not only were all expectations met but they surprised me by going way beyond my expectations. In fact this is how Yelp describes it:
1 - Eek! Methinks not.
2 - Meh. I've experienced better.
3 - A-OK.
4 - Yay! I'm a fan.
5 - Woohoo! As good as it gets!
But the subjects of review seem to interpret the ratings like this:
1 - A black hole.
2 - Don't bother.
3 - If you're desperate.
4 - Could be better.
5 - Met all expectations.
To compound the problem, many people actually rate like this, causing what the article calls "reputation inflation." The result is that I end up having to use this higher scale when looking for service, because enough people have been rating that way.
This is why I would prefer a binary up or down vote, with the option to comment. My 1-2 would translate to No, and 3-5 to Yes.
I've also heard of
A. Something was wrong
B. Good job—met expectations
C. Something was unusually good
Sort of like the ride-share "give a compliment" feature, since they know everyone gives 5 stars.
Another option (that I've never seen) is to tell the rater their response will be weighted based on their previous scores. So if someone who always gives 5 stars gives a 4 star, it's interpreted as a negative response. But if someone who gives majority 3 star reviews gives a 4 it's interpreted as positive.
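That option is essentially a z-score against the rater's own history. A minimal sketch, assuming the system keeps each rater's past scores (the function name and sample histories are made up):

```python
import statistics

def relative_rating(history, new_rating):
    """Interpret a rating in standard deviations above or below
    this rater's own historical mean."""
    mu = statistics.mean(history)
    sigma = statistics.pstdev(history)
    if sigma == 0:
        sigma = 1.0  # rater only ever gives one score: use the raw offset
    return (new_rating - mu) / sigma

always_five = [5, 5, 5, 4, 5]   # a 4 from this rater reads as negative
mostly_three = [3, 3, 2, 3, 4]  # the same 4 here reads as positive
```

Here `relative_rating(always_five, 4)` comes out around -2, while `relative_rating(mostly_three, 4)` comes out around +1.6, matching the intuition that the same star value carries opposite meanings from the two raters.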
There's a name for this. I forget what it is, but I remember having the principle explained to me and thinking to myself: "Holy fucking shit, this is some seriously out-of-touch economic high-finance quantitative bullshit." Like, the kind that always expects all graphs to go "up and to the right."
Basically, you sit down at a restaurant, order a meal, and on the way out, the host or hostess asks you to mark off checkboxes rating the experience on a scale of 0 to 11. Eleven being the Spinal Tap interpretation, and zero being the hospital visit.
We now face three realities at this point, completely disconnected and untethered from one another, doomed to resolve as if it were some Cellular Automaton rule set from hell.
On the customer's side of things, maybe they let their small child mark the checkboxes to satisfy the child's curiosity, maybe it's the waitress's mom dropping by for a visit to take pictures at her first summer job. Maybe it's an actual normal customer, a random person with a pulse, fresh off the street. Anything goes. Straight A's or all balls.
The wait staff have no control and don't actually know how they're being judged, by what criteria or rules. They just know what it means to wait tables. They've been in a restaurant before, and they know what middle of the road is. Unless specifically advised of the level of service, they observe their surroundings and follow their instinct, drawing on prior experience and whatever they learn as they go.
The analyst that receives the feedback however, operates according to rules both alien and strange to normal people. Only 9, 10 and 11 are positive ratings. All others are an insult to the business, and all parties associated with an 8 rating or less must be eliminated from the system. They don't want to see paying customers that aren't ecstatic, nor do they want employees delivering service that catches a middling rating. To them, a 9 represents a danger zone, threatening profits with a backslide.
The person who explained this principle to me was my boss' boss, and it was in this moment that I knew the ship I was on was aimed at an iceberg, and that I needed to escape. He was referring to our OKR process, while simultaneously explaining this principle in relation to our review process. It was a very "Steve Ballmer/Stack Ranking/Cut the Weakest Link" sort of discussion, and I stuck my thumb out and found another job, for better pay, and less demand within weeks.
I don't know where the specific article wandered off to, but about six months ago I read a thinkpiece talking about those "happy face/frowny face" buttons that have popped up. The article talked a lot about how the binary up/down, with no ability to comment on specific people/processes/whatever, actually improved both overall ratings and the businesses' ability to address negative ratings considerably. If I find it again I'll update this comment.
>The article talked a lot about how the binary up/down, with no ability to comment on specific people/processes/whatever, actually improved both overall ratings and the businesses' ability to address negative ratings considerably
I guess that makes sense. Individual end-users only ever see the tip of the iceberg when interacting with a large organization, so they are likely to blame the parts of it that they interact with while ignoring the deeper root causes. If you let them blame the customer service rep's attitude or the size their vegetables are chopped to, then you wind up over-optimizing around making the reps smile or defining exactly how to chop the veggies. But if all you know is that people aren't happy with the experience, you send a process-minded person who is familiar with the whole stack of how things get done to figure out why the service reps are in a foul mood, or how irregularly chopped veggies are winding up on people's plates.
An aside: This site has lovely design.