I seem to be the first to raise the Deming point of view again, but I'm glad to, so here goes:
One of the big problems with trying to rate workers is that most workers have approximately the same rating. In other words, the differences in ratings are not statistically significant; they are just as well explained by pure chance.
Building a system where you incentivise and promote on a random draw of cards is bad in and of itself, but doing it and then labeling it as being based on merit is actively hostile to anything you're trying to accomplish. You are building a culture of people stabbing each other in the back in desperation to win your lottery, and not a healthy environment of cooperation and focus on productivity.
Yes, sometimes you have a worker who performs off the charts in either direction. That's a great opportunity! If they perform much better than someone else, put them in a position where they can coach and train others in how to do things that well. If they are much worse, try training them or otherwise find them a different job they can do better.
Other than that special case, the only way to improve the performance of the system is to give the workers better tools, better information, better understanding, more cooperation, better ability to do their job. The manager is the only one standing in the way of the workers doing a better job.
But then how do you assign bonuses and raises when everyone performs approximately the same? Split them across everyone. It's that simple. Think in terms of incentivising the team, not the individual.
And things like promotions that can't be shared among many people? Ask the workers themselves who they think is a good fit for the position. Chances are they know that a hell of a lot better than a manager will. And it is abundantly clear how to get a promotion: make a good impression on your team. Cooperate, teach, help out, be nice, do good work. And these are -- by a weird coincidence -- just the things you want to accomplish.
This entire premise, that people are usually more or less equally capable at a given task, doesn’t correspond to my experience at all, across several industries.
It seems ideological, or philosophical or based on limited industries.
It can also be dehumanizing to imply people can’t get any result from trying hard, or that everyone has equal abilities.
I’m just not exactly sure why you are saying this, what it is based on, and what the goal is here.
... [P]eople are usually more or less capable at a given task...
True. But few if any “jobs” held by knowledge workers are just a single, repetitive task, over and over, like Deming was observing when he studied productivity on assembly lines. So while you may find a team has variation among members when looking at a single skill, as a manager it’s really hard to pin down the success of the team to one thing done by one person. In fact, from a management point of view, “success” is often “we got the job done” and not “everything was done perfectly”, so a team member who does one or two things really well is treated more or less like an outlier, and their work averages out.[1] Unless...
Unless the manager has unwittingly pitted the members of the team against each other, in order to find the “top performer”! This will reduce productivity of the team overall as they waste time defending against each other. A net loss overall.
Don’t get me wrong, there are times when competition is healthy, but quite often it’s applied as a kind of micro-optimization that doesn’t really benefit the organization overall. (Many orgs have enough money to allow this nonsense, unfortunately.)
[1] Try and write a job description for someone on your team, and you’ll quickly realize how complicated it is to even know what skills are key, which are nice to have, etc.
There's an important distinction to make which I'm not sure I've successfully communicated: when I'm saying there generally is no "statistically significant" difference, I'm not saying everyone performs at the same level.
There can be huge variations in performance between people. But this variation tends to be internally consistent. You don't have one or two workers that perform multiple standard deviations away from the mass.
Think of it as drawing people from a distribution with large variance. When assigning bonuses to individuals, you're looking for statistical anomalies, i.e. you want to find that one person that appears to be drawn from a completely different distribution. That is what is very rare.
Large variation in performance but without any statistical anomalies still means everyone is performing "within the system's natural boundaries." There's no reason to suspect someone comes at it from a completely different angle.
That said, of course you want to minimise the variation of your system. If nothing else because it makes it easier for you to find the true outliers.
But you don't reduce variance by giving raises to the top performers -- that will, at the very best, make them perform better, thus increasing variance further.
Better try to reduce variance by bringing the other people up, which requires much more involved management than just giving more money to the ones with the high numbers.
> Better try to reduce variance by bringing the other people up, which requires much more involved management than just giving more money to the ones with the high numbers.
Yes, bringing up people who are not doing as well has, in my (anecdotal, of course) experience, resulted in huge overall gains for all-round team performance (people share info with each other, help each other, are more motivated, etc.).
I would add that, unfortunately, the trend seems to be in the opposite direction...
> And it is abundantly clear how to get a promotion: make a good impression on your team. Cooperate, teach, help out, be nice, do good work.
… it strikes me that the problem is that the way to get a promotion would be to seem to cooperate, teach, help out, be nice & do good work, not necessarily actually do those things — one might end up with promotees who simply know how to game the system.
It might still be preferable to the current state of affairs, which as you note hardly works.
If the assessment is done by the colleagues who benefit from those behaviors, at that point "gaming the system" means actually doing those things. You can only fake it when the evaluation is done by a third person / manager.
If you split the bonuses evenly across the team, then outperformers will go to another company that offers them a bigger personal incentive.
One good thing to come out of certain rating systems, like Uber and Airbnb, is that the users get rated too. If someone is a terrible customer, then they can probably make things a lot worse for the driver or host than vice versa, and without these scores, drivers might not be able to warn other drivers about them. I would hope that with other modern customer service jobs, that reverse-ratings are always employed. It is a measurable way to improve the jobs of customer service workers.
Actually, what I've found on Airbnb is that everyone just rates everyone high to either avoid rating retaliation or to avoid seeming like a dick / hard customer. I suspect it's the latter -- there really is no upside to being an honest critic.
You'll rent a place that ends up being on the noisy highway and the worst rating you'll find is "great for early risers!"
The rating system does not work. Airbnb doesn't care about rating inflation because it just makes everything look like 5-star experience.
Uber is frustrating for other reasons. I've never made an Uber driver wait for me, I'll greet the driver, and I can't imagine how someone can be a better customer. I have a 4.1 rating and can't get VIP because of my rating. My girlfriend doesn't even believe in getting up to leave the house until the app tells her the Uber is there. She constantly calls uber for large loud drunk groups, redirects her rides, and yammers on her phone the whole ride. She has a 4.8+.
Aside, my girlfriend also rates all Uber drivers 3/5 because that's an average rating. They get a 4/5 if they go above and beyond. I actually thought 5-star systems had merits until I met her. Now I'm firmly against them.
Part of the problem of rating inflation comes with the removal of problematic listings.
My girlfriend & I discovered large patches of mushrooms growing out of the master bedroom carpet in an otherwise very unhealthy AirBnB. I wanted to write feedback on the house, but I couldn't warn others since AirBnB's policy in such cases is to remove the listing. I spotted the listing not long after with a clean slate. I'm sure they cleaned the place, but I'm also sure they didn't tear out the floors and fix the root problem. Too bad no one will know since the ratings won't be a source of info in such cases.
The way to solve this would be to have an overlay everywhere, such as the one that browser extensions afford, where you can see a layer of reviews that are truly independent.
Maybe AR will open a new door there... I know it would be an extremely hard problem to solve - it might involve cutting-edge techniques in AR, AI, computer vision and so on - but I think there might be a billion dollar market there.
The pain certainly exists as reviews could be useful, but they're trash almost everywhere (Uber, AirBnB, Google, Amazon, Deliveroo, etc.) due to platform-owner incentives.
Interesting. I wouldn't try to do this with "true" AR (magic leap / hololens) - not enough benefit - but just a location aware layer would do it.
Seems like you'd want to pair it with a browser extension (want to avoid making that booking in the first place), and the back-end engineering is very similar to what (IMO/AFAIK) you'd need to support similar reality overlays in "true" AR.
I mention AR because browser extensions would work well on the web, but you'd only be able to do the same on mobile with a system that cooperates, and there's a slim chance that Apple and Google - esp. the latter - would allow something of the sort to materialize. Plus of course with AR you could take it to the streets!
As for the business model of this independent platform, one could still make a ton of money out of it as long as its revenue-generating incentives were decoupled from the review scores, and I really think that's not such a hard problem to solve.
Ohhh. Phone as browser in physical space, AR as extendible medium.... I like.
For the AR-ish part... Oh, widgets! Right? Like clock, or weather, or the news thing. Constantly (modulo performance) update what you're showing on the widget based on location. Or dynamic notifications; "this is how many reviewed locations we know of at this location".
>on Airbnb is that everyone just rates everyone high to either avoid rating retaliation
I thought this was a non-issue because Airbnb withholds reviews until both parties have written a review?
>or to avoid seeming like a dick / hard customer. I suspect it's the latter -- there really is no upside to being an honest critic.
I personally don't understand this. If a place is legitimately bad, it's not being a "dick / hard customer", it's telling the truth. The upside is that you added slightly more information to the market.
True, it's also based on ethnicity. When I lived for a while in a country with widespread anti-Russian sentiment, I saw my Uber rating going down; after moving to one more friendly to Russians, it went up... I would assume that minority people have it hard.
I gave up giving anything other than 5 because if I do that, I get hounded for an explanation. I know they think this "proves" their dedication to excellence!!1!, but it mostly proves that the ratings are essentially bullshit.
I’m in the same boat. I’m not worried about ratings creep for providers; normalizing based on a user’s other ratings is a solved problem. But being forced to write 300 characters about why you gave a 4 instead of a 5 means I only rate less than 5 to legitimately warn the public.
> I've never made an Uber driver wait for me, I'll greet the driver, and I can't imagine how someone can be a better customer.
This is really the bare minimum. I know a ton of drivers who care less about wait time than some other annoying tics their customers might have. I've lived in a region where the median rating seems to be around a 4.4. I've never met someone with a 4.1 uber rating.
I'm going to postulate that you're being VERY rude in a way you don't know. Maybe have someone ride with you and critique your behavior.
Does it need a lot of explaining? Presumably the workers are in it because they need the money. If a customer hits your rating they're threatening your ability to make the money you need, so there's no reason not to ding their rating in retaliation if they ding yours. And the other workers don't have a reason to care, you're making their clients more nervous to hand out imperfect scores too.
It's a regional thing. I have thousands of rides and a 4.86 so I know that's easy. But I know that drivers in India and Turkey just always fire off these 3 star ratings and stuff so when you travel you gotta be careful.
I don't feel like reverse-rating systems are actually a good idea without some checks and balances. You have no way to know how a driver decides whether or not to give a passenger a bad rating. We know from history (both pre-and-post-Uber) that sometimes drivers have an agenda in how they treat their passengers, either due to race, gender, or other factors, just like how sometimes a horrible passenger will mistreat a driver.
With driver ratings, I can see my driver's rating and they can see it too. The rating system in Lyft (not sure about Uber) also has buttons you can use to explain why you gave a bad rating or indicate what you liked about a driver, which presumably provides anonymous feedback to the driver on how they could improve their scores.
In comparison, a passenger with bad driver ratings gets no feedback or incentive to improve, and it's hard to know whether they could improve at all. If they have a bad rating from drivers they may never know and will just have a bad experience with the app, potentially forever. If you pick up a fare who already has a rating of 1.5/5, are you more likely to negatively interpret their behavior and give them a low score? Psychology is a factor here.
An anecdote: On my third ever ride with Uber, the driver told me I had a very, very low passenger score. That means one of my first drivers gave me a low rating, probably a 1 or 2. Why? I have no idea. He didn't know either. Maybe the driver was having a bad day or thought I was rude, or the driver had trouble finding my pick-up location. In the end the only outcome was that it took me longer to get a driver to pick me up afterward because drivers saw my low score. (I use Lyft now instead and have no problems)
If I get bad service or ripped off I want to be able to tell anyone and everyone "STAY AWAY!" but instead I know that if I do that no one will rent to me. Why would they? They can just make sure to choose only people that never post a bad review.
It could be a useful feature as long as the feedback is anonymous. Unfortunately, companies running ranking systems have their own best interests at the forefront, which causes issues.
Exactly, it's an incentive game. Airbnb has an incentive to maintain high rankings across the board, because customers compare ratings across platforms as if they're equivalent. If they see a majority of 3/5 on Airbnb, they're just going to go on Booking.com and book a 4.5/5 hotel instead.
The only way to avoid this I can imagine is relying on cryptographic zero-knowledge proofs to defer ratings to an external network (ideally decentralized, but a neutral third-party ala TripAdvisor could work too). The platform gives the customer and provider a token when the transaction takes place, and they can use it to anonymously sign a review on the external ratings platform.
Not really... most of these services require you to register with a phone number, creating a pretty high barrier to creating additional accounts for the common folk.
If you wanted to rent an Airbnb and you couldn’t because your phone number was associated with a negatively reviewed account, anyone who had the ability to get a phone number has the ability to get a second. It may be a minor inconvenience but not a particularly costly one.
If I was a renter and wanted to disassociate from bad reviews, I imagine the easiest way to do it would be to transfer ownership of the property to a holding company or similar. This is far harder to do and there’s far more cost.
I get that it's immeasurably frustrating for a person to be distilled into performance metrics, but how else can we measure one's impact? Using a trust-based method where a supervisor leans on their personal biases to determine an employee's value is fraught with issues as well. A large enough corporation is a machine and we can't base decisions off of how we "feel" the machine's parts are working. Hard numbers will win over time.
I work in the public sector where the pendulum is always swinging between trust and control. It's partly because we're not entirely driven by MBAs and our success criterion isn't earning shareholder value, but it's mainly because we've been around for so long -- we can't go bankrupt when we fail. There are advantages and disadvantages to both approaches.
In a system of trust the key benefits are better products, happier employees who have less sick days and stick around longer, more creativity and new solutions that are appreciated by the community. The downside is that some employees, managers or contractors are going to take advantage of the trust in such a terrible way that you'll have no option but to swing towards control once the media finds out. A good example from my country is a decade back, when we had a lot of trust in our public sector and an elderly care facility was saving money by putting dirty diapers back on the elderly if the urine in the diapers didn't weigh enough. Crazy, right? But that's the stuff that happens with trust, you'll also see managers contracting out procurement contracts to companies they own themselves and other stuff like that.
On the opposite side you have the MBA-based approach where everything is streamlined, measured and conformed. This is where you can build two hospitals for the price of 1.2 because the plans for them are exactly the same. It's where you LEAN every single business process up, and harvest the benefits. It's efficient, clean, and it slowly drains your business of value. Because the MBA approach very rarely manages to create something new, it's just really great at maximizing your current system. Eventually a neighboring city is going to do something wild, like figure out how to completely wipe out dyslexia by trying some crazy new method, because they trusted some hippie teacher with a plan. This is where you're forced to swing back toward trust and value tradeskills (not sure if this is the right word in English) because your politicians can't survive another period of having the highest dyslexia numbers in the country and nothing in the MBA playbook can help them.
This is a little simplified of course, and I'm certain things are different in the private sector, but I dare say, that if your organisation sticks around for long enough, it's bound to see the pendulum swing between those two outlines.
I think the bigger issue with the MBA approach is it's always possible to trade a little bit of long term hard to measure value for short term profits.
You can view the Soviet central planning system as the MBA approach scaled to federation of countries size. We know it has failed, people lost faith in it.
Central planning was a system with twisted incentives for some and no incentives at all for others. Even if the plan was right coming from the top, it went through layers of disinterested management. The feedback loop was broken as well since it could actually be dangerous to report the results of plan implementation in your area.
Yes, you've described why large corporations are hated. It's quite obvious that a handful of numbers are incapable of measuring a person's true impact, yet by god, they insist on trying.
All businesses want good employees, but what they want most are employees that can be replaced if necessary. They hire people to accomplish tasks and add value. I have felt underappreciated in corporate roles before, but then I realized that I was attaching too much of my identity to my job and in something where I had no real equity. Since then, I am much better at separating work from personal life, and I am happier putting my personal interests above the company's.
Yeah, I was a lot less upset about some of the crappy experiences I'd had working for evil giant companies after I read "The E-Myth Revisited".
My elite 10 person team was doing a great job on Device A, while Device B had a team of 200 semi-skilled foreign contractors just barely keeping things on track. Then, somehow, the company decided to reorg my super-skilled team and promoted the leader of the contractor team.
From 3-4 levels up in the company, the Device A team and the Device B team were IDENTICAL. They were both delivering things on time and under budget; even though team A was costing the company 1/5th as much, they both looked fine. These products were generating hundreds of millions of dollars, or maybe even billions, in revenue per year, so the difference between $5 million/year and $25 million/year was irrelevant.
If I'd realized this sooner, I wouldn't have put so much effort into doing a great job at the expense of my own interests, and it would've been just fine.
In the end, you have to pay someone a wage which is....a single number. Say what you want about boiling down the complexities of job performance into such a limited scale, but this is what is required.
I'm a bit on the spectrum and I absolutely love having my performance distilled down to a set of metrics. It clearly communicates to me which types of actions are valued in a way that I will often miss from normal social interactions due to my blindness to social cues.
The way I perceive work is very helpful in this regard. To me, the purpose of work is to maximize my income/lifestyle while not dipping below a lower bound threshold of misery. I don't seek personal or emotional fulfillment from work at all. I did that when I was younger and, for me, it was a bad way to approach life. This way I'm much happier and my focus/attention is more on the kinds of things I hope will wind up in my obit: family, relationships, charity, etc.
I wonder if sales people ever write articles about how they're at the "mercy" of their sales metrics. The idea of it seems hilarious.
It's all well and good until you get judged by a bunch of completely borked-up, unfair metrics. The computer sets you to running halfway across a gigantic warehouse again and again, then measures your "picks per hour" vs somebody who consistently ends up in a tiny corner of the warehouse. Your manager agrees that it's not a problem and tells you not to worry, then a few months down the line some algorithm decides to give your lucky coworker a big raise and you don't get a boost at all.
Thankfully I haven't had to deal with any metrics like this for more than a couple of months at a time. Since engineers' specialty is to optimize ratings, it's an obvious-enough waste of time to apply them that they're usually short-lived.
You should have more than one supervisor (and solicit information from peers). But basically...yes...using judgement, learning how to assess skills...that is all part of management. You cannot turn a bad manager into a good one with information.
I have seen no evidence that "hard numbers" work (they aren't "hard"...there is usually almost no justification for a particular metric). And it feeds on innumerable human biases (inability to confront uncertainty, using "hard" information to justify an emotional conclusion, creating meaningless targets, etc.).
In my experience, the practical issue is two-fold: one, people are just going to do what they do anyway, but they ignore warning signs and are more inflexible; and two, you optimise for data that is obvious, collectable, and (usually) completely irrelevant.
This is a trend, it will pass as all these trends do. It started off as quite a good idea (businesses were often failing to set targets or quantify success/failure) and has just got totally out of control.
I think the core problem is having one universal metric. Take Uber or Lyft as an example. Some people like silence, some like conversation, some like music. If the ML models were better or if it were based on a web of trust we wouldn't need to have a score that treats a 4.5 as universally better than a 4.6 or similar.
> I get that it's immeasurably frustrating for a person to be distilled into performance metrics, but how else can we measure one's impact?
I think it may also be useful to talk about the reliability and validity of all the things that are measured.
Also, how useful is this stuff among the different fields? The article talks about gig workers and manufacturers. Would be interesting if these metrics were applied rigorously to other fields like politicians, talk show pundits, and forecasters.
No, no, these rigorous ratings are for lower-class endeavours.
Higher classes get a title, which is a one-off rubber stamp of quality, and then they will never again be judged on performance alone. Dr., Senior, Dipl.-Ing.
Except by their peers, who will lower a veil of omerta over "internal" affairs like a lack of quality in work.
Of course, between these professions there are meta-power struggles, which play out, in a court-stylized fashion, as an eternal ritualized war between castes like "management" and "technical execution".
Metrics, as demanded by one caste, are worthless, as they rely either on a working interface provided by the opposition or on in-depth knowledge, making an allegiance to two factions necessary.
Thus all that remains of them is a ritualized insistence, to compensate for the creeping feeling of a lack of control. For those who need them cannot get them and be certain they are accurate.
Making metrics, actually, a flag of truce.
Yet people who passed laboratory science classes in college, people who should really know better, choose to govern by the metrics they "feel" are causally related to outcomes.
Maybe you really do have a formula that outperforms a panel of subject matter experts. It does happen sometimes. Not for long, because the experts start using it. But the bar for making this kind of claim is, at minimum, peer-reviewed experimental results that reproduce.
Human experience is a rich dataset. Intuition is a sophisticated machine. It blows my mind that people will so easily dismiss all of that in favor of a few bits of information and some 3rd grade arithmetic they just made up.
It's not scientifically rigorous just because there's a table and a graph. Come on!
A machine in which the parts are made up of humans with 'feelings'. Distilling human beings down to cogs in a machine is bad enough when it comes from executives. This belief that corporations are some kind of non-human machine like entity is part of why the world's so fucked today.
Maybe the world would be a better place if corporations weren't run as inhuman machines with no care or regard to their employees or the world in which they operate.
Before corporations were people, in the US at least? http://reclaimdemocracy.org/corporate-accountability-history... Not at all going to claim it as ideal, but it's a starting point for considering other possible alternatives to today's giant, faceless corporations where nobody within the corporation is easily held accountable, etc.
Professors have been teaching and getting student course evaluations since forever. At institutions and roles where the admin knows little about the subject, they rely more on the course evals. At institutions with more engaged faculty and with more serious teaching, course evals are less prominent and defer to other forms of assessment (faculty observations, teaching statements, etc.).
Historically professors have tenure (admittedly less so now, I believe), so how they score on evaluations has no real bearing on them keeping their jobs.
I had a professor who was a terrible lecturer, and he knew his student evaluations were terrible every time. As he handed out the evaluations he would tell us that he did not read them, and did not care what we wrote.
I graduated twenty years ago, but just checked the department's website. He's still employed!
Fair enough, but even the tenured professors were untenured at some point before so they did matter and were not problematic enough to deny tenure. And tenure doesn't really apply the same for adjuncts or some lecturers, who are perhaps even more dependent on student evals. And of course it depends on the seriousness the university takes teaching, as there are R1s that are more research oriented and care less about teaching, but they account for 1% of all universities.
Even as a grad student, my teaching evaluations couldn't negatively affect me. The only people it could have real, negative consequences for were instructors, who work on contracts. As an example, my department head (perhaps unintentionally) sent everyone an email saying any scores of less than 3.73 on a 1-7 scale were grounds for termination of that instructor's contract. I did not envy them at all, basically being highly educated customer service workers.
Every large-scale student evaluation I have ever seen has been massively uninformative because students on different courses usually have drastically different expectations.
At my university, the law school was top-notch (at least, it is the best in the country) and it got the lowest score by far every year. Why? High expectations, huge number of students competing hard. The best school? Theology. Why? Low expectations, not many students in a far less competitive environment (And I studied in both and in probably four or five faculties across the school...Law was the best, Theology the worst...not even close, the former had people from Law firms, you got full feedback on everything...the latter, you usually got no feedback on any work).
This is really the point...now, you can do some kind of grouped mean model or you can compare over time...but what is the actual point? The reason they do this (or did this at my uni) was to compare departments/courses...but everyone knew it was bullshit...people lost jobs (I actually met the Dean who masterminded this, he came from the private sector, taught a minor course in the business school, and was utterly clueless).
The usual argument that has been given in favour of evaluations is to show a correlation with final exam scores. The two problems being (1) small samples and (2) soft-ball exams leading to higher evaluation scores.
Humans are not able to accurately evaluate their own learning. When we experience fluency or are entertained, we substitute those attributes for the attribute of ability to recognise or recall information.
Conventional wisdom is that student course evaluations are primarily driven by the grades that students expect to receive. There is strong pressure to give out easy grades. I overheard two professors who were teaching different sections of the same course, get into a shouting match over grades. One of them was up for tenure, and wanted to give easy grades. He was quite explicit about this.
The alternative is management who knows the domain well enough to understand those they supervise. That is expensive and hard in itself to ensure reliably.
It is a matter of the push and pull of scale advantages at various sizes, which is essentially universal but varies by domain.
Hard numbers have been winning, and that's been the problem. Any time you have a set of numbers to describe a thing, you're creating an imperfect model of that thing. When you only consider that imperfect model, the thing moves in the direction of fulfilling that model. You can say you care about employee well-being, but unless you have a metric, or several metrics, for it, it won't ever be done.
I'd rather be at the mercy of _relatively_ objective metrics than at the mercy of whoever my boss happens to be. (I happen to like my current management chain, but people come and go...)
Meh.. you're still at the mercy of whoever your boss happens to be. They just need to put a little effort into framing the metrics the right way or to make sure you're getting work that doesn't touch on the right metrics. The idea that performance metrics are fair is just corporate propaganda designed to make you have the incentive to improve them, thus being easier to control and replace.
>or to make sure you're getting work that doesn't touch on the right metrics.
What? The metrics should be distilled from the overall company/org/team goals and mission and if you're working on things that don't align with those of course you're not going to have a good performance review.
Think more subtly - if you need to meet a metric/goal of 50% of your time on phone calls and 50% on tickets, a manager who wants to tank your metrics will game the system to feed you less than needed to make your metric on one or the other. Just because a metric exists does not mean it's applied fairly or evenly to all employees.
More generally, I'd prefer being judged by a large number of people rather than one or two managers who may or may not have any idea what's going on.
I left my previous team partly because we went through three managers in less than a year, and I rarely saw them. I assume the performance feedback for the team was effectively nonsense.
They need to do controlled trials whereby they try their best to truly and objectively measure a sample and then compare those to real ratings.
I suggest that for most things, customer ratings are completely irrelevant.
They might want to ask specific, relevant questions, such as:
"Was the car/flat generally clean and tidy"
"Were there any problems entering the facility"
and on the customer side:
"Did the customer generate problems with the authorities"
"Did the customer disrupt your ability to drive"
"Did the customer make repeated requests for out-of-bounds services even when they were informed such requests could not be fulfilled"
"Did the customer party include more individuals than indicated on the reservation"
etc. etc.
Then they can glean specific bits of information and provide guidance.
Otherwise, general reviews are pointless because they're muddy, set to different standards, emotional, unspecific etc.
> We should start a rating boycott where we rate highest for all ratings.
This company I've worked at for about a year now is the first place I've been exposed to worker ratings in my career, and this has been my strategy. I refuse to play the game, and give everyone a perfect score no matter what. It absolutely does not benefit us workers in any way to give someone less than perfect marks across the board. It is literally nothing but a tool used by management to provide justification when it comes time to downsize. Furthermore I've noticed that it tends to breed a culture of distrust and backstabbing among colleagues, who may use the opportunity as retribution for personal slights. It's an awful policy.