Yes, that was my point. The article failed to make it clear. It is worded such that it could go either way.
I agree they may be outliers. However, I can't dismiss out of hand the possibility that they are actually better. Outliers are common... but so are systematic/correlated weaknesses within large organizations.
> but so are systematic/correlated weaknesses within large organizations
Indeed, that's such a big problem that there's a sort-of famous book written by a CIA analyst, "Psychology of Intelligence Analysis", that gives a pretty clear breakdown of how and why analysts fail to reason correctly about situations.
It also includes examples of how Robert Gates (yes, the later SECDEF) managed to improve analyst reporting by actually reading the reports submitted and figuratively shredding the bad ones.
Before that, there had been effectively no downside to submitting reports that were, intellectually speaking, crap, and eventually the output had devolved into professional-sounding educated guesses.
Large agencies don't get the self-critique that small groups naturally apply to their own performance, so they have to try very hard to set and enforce standards of quality. When they don't do this, it should not be surprising that quality drops.
Total guess here, but you would imagine that, since it is a project based on statistical prediction, they would keep measuring their own ability to predict.
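For what it's worth, "measuring their ability to predict" can be as simple as scoring probabilistic forecasts against what actually happened, e.g. with a Brier score. A minimal Python sketch (my own illustration, with made-up numbers, not anything from the article):

    # Brier score: mean squared error between predicted probabilities and
    # 0/1 outcomes. Lower is better; 0.0 is perfect, 0.25 is what you get
    # by always guessing 50%.
    def brier_score(forecasts, outcomes):
        assert len(forecasts) == len(outcomes)
        return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

    # Hypothetical example: three probabilistic predictions and the outcomes.
    predictions = [0.9, 0.2, 0.7]   # probability assigned to "event occurs"
    actuals     = [1,   0,   0]     # 1 = occurred, 0 = did not

    print(f"Brier score: {brier_score(predictions, actuals):.3f}")  # 0.180

Tracking that number over time (and per analyst or per team) is one straightforward way an organization could tell whether its forecasting is actually getting better or worse.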