It's interesting. I have seen rating systems work – in the military. They're harsh to some, but in some situations, you have to filter out B and C players to keep the A players alive. I do think transparent systems w/360º peer reviews are better than many others. I wonder what else is out there that might work.
We’re talking about OERs and NCOERs, right? Those pencil-whipped, boilerplate evals where if you have a 300 PT score and don’t get a DUI or beat your spouse, you get a 1/1? The process that takes 24 months to kick someone out?
I don’t think you know what you’re talking about in this regard and/or didn’t serve, candidly.
I was a Green Beret for 15 years and have worked for DOD/IC for a subsequent five. The rating system I described was used throughout SFQC and on some ODAs and at multiple schools like SOTIC and SFAUC.
NCOERs/OERs aren't great, I agree. They're almost as bad and useless as command climate surveys.
I think he meant "bad" as in incompetent. In the context of the top parent, you'll have many more B and C players than A players, so trusting the majority works against the A players.
You mention rating, so maybe you're talking about enlisted only; granted, I don't know how those evals work.
But the up-or-out system used for officers in the US military is notorious for being primarily political above roughly O-3, and historically almost comically incapable of rejecting unqualified but politically sophisticated and well-connected officers.
SNCOs have up-or-out too, for sure. I agree that the promotion system across the board is FUBAR. I'm talking about other systems, typically at the unit level, that can be used to supplement or even side-load the current 'official' rating system, which is a joke.
Oh I see, that makes more sense. I don't have any particular experience or insight; I'm just from an area where military service is routine, so I've heard a lot secondhand. Everyone I know in it or retired from it seems to think the promotion system is completely fucked, though. They either fume because it worked against them or laugh because they could work it, but no one seems to think it's effective.
Prediction markets: have a system where people can make predictions in a semi-serious fashion in any field, record them with reasoning and probability numbers, and check them against outcomes later on. At least this will show how deeply they know a field they are interested in. Tagline: Prediction is Intelligence.
How do you compare the results? Most of the time you can’t know the actual probability of events being predicted even after they happen, and so it’s apples and oranges.
If you predict that it will rain on Friday, and I predict an alien invasion before Christmas, and both actually happen… Does that mean I was smarter because my event was vastly more unlikely? How would you score these two successful predictions and what possible value could those scores have? Where was intelligence applied here?
It would at least show both of you are good in your respective fields. The predictions could also be mild in nature; since everything is based on hunches, we want the guy who gets the best hunches.
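There is actually a standard answer to the "how do you score these" question: a proper scoring rule like the Brier score. You record the stated probability up front, then score it against the 0/1 outcome, so a confident-and-right forecaster beats a coin-flipper even when both "got it right". A minimal sketch (the function and sample data are mine, just for illustration):

```python
def brier(predictions):
    """Mean Brier score for a list of (stated_probability, outcome) pairs.

    outcome is 1 if the event happened, 0 if it didn't.
    Lower is better; always hedging at 0.5 scores exactly 0.25.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# Confident and mostly correct forecaster:
confident = [(0.9, 1), (0.8, 1), (0.1, 0)]
# Coin-flipper who never commits:
hedger = [(0.5, 1), (0.5, 1), (0.5, 0)]

print(brier(confident))  # ~0.02
print(brier(hedger))     # 0.25
```

The point is that the score rewards calibration, not luck: the alien-invasion guy only wins if he actually assigned a high probability to it beforehand and it happened.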
I've seen multiple. The one I like is 'pinks and blues', where you're given two of each, and you write out why you think two guys should get a blue (good) and two should get a pink (bad). If you start accruing a bunch of pinks, there's a problem.
It's very simple and probably wouldn't pass muster with an HR department, but it was effective at getting rid of turds.
Can you split the votes as you want among the candidates or does each have to get at least one review? How often do you rate? Do you only rate peers, or new hires? What about superiors?
It's a form of limited voting, with both approval and disapproval votes. In a unit, maybe there are 20-40 "candidates" and you are given four votes, two up and two down. This will probably identify a couple people receiving more than one vote. Limited voting has had interesting results in things like city council elections.
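The tally itself is trivial; a rough sketch of the scheme as described, with blues and pinks tracked separately since a pile of pinks matters on its own (the ballot format and names are my assumptions):

```python
from collections import Counter

def tally(ballots):
    """Count blue (up) and pink (down) votes per person.

    ballots: list of dicts, each with a 'blue' list and a 'pink' list
    of exactly two names, per the two-up/two-down rule.
    """
    blues, pinks = Counter(), Counter()
    for ballot in ballots:
        blues.update(ballot["blue"])
        pinks.update(ballot["pink"])
    return blues, pinks

ballots = [
    {"blue": ["Alice", "Bob"], "pink": ["Carl", "Dan"]},
    {"blue": ["Alice", "Eve"], "pink": ["Carl", "Frank"]},
]
blues, pinks = tally(ballots)
print(blues.most_common())  # Alice stands out with 2 blues
print(pinks.most_common())  # Carl stands out with 2 pinks
```

With 20-40 candidates and only four votes per rater, most people get zero votes either way, which is exactly why the couple of names that do repeat are the signal.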