
If you have 3000 people predicting the outcomes of events, each with, say, a probability of 0.3 of being right, then after 6 trials you'd expect about 2 people to have gotten all of them correct, and many others to have very good records out of pure chance alone.
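A quick back-of-the-envelope check of that arithmetic (my own sketch, not from the article):

```python
# With 3000 forecasters each having an independent 0.3 chance of calling
# any single event correctly, how many go 6-for-6 by luck alone?
n_people = 3000
p_correct = 0.3
n_trials = 6

p_perfect = p_correct ** n_trials        # chance one person gets all 6 right
expected_perfect = n_people * p_perfect  # expected number of perfect records

print(round(p_perfect, 6))       # 0.000729
print(round(expected_perfect, 1))  # 2.2
```

So roughly 2 flawless records are expected from chance alone, matching the figure above.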

Unless they publish the detailed numbers, we can't be sure what's going on here.

If someone with access only to ordinary public sources is able to perform so well against people with access to lots of classified sources and information, what are we to conclude?

Either they are just lucky, or the people at the CIA aren't as competent as we think they are.

If we are unwilling to accept either of these theories, then we need to know what makes these people good at it. I don't see how plain, simple analysis can beat people at the CIA.




But the point here seems to be to use the group average, not to find outliers with good prediction ability per se.


That's exactly it, which is why I was so confused about the "team" of "superpredictors". While you might learn something by comparing their methods with everybody else's, you'd still have to apply the same amount of scrutiny to everyone in the crowd to learn anything useful.

With enough data, you can apply an automatic adjustment to even the most horrible predictors, to make their contribution to the final result more useful. If, for instance, someone always predicts exactly 10% more favorable results for Israel, independently of the facts, you can just reverse that for that one person on every question mentioning Israel, to produce a bias-adjusted result. That is tremendously difficult, though, and not nearly as useful as just increasing the size of the crowd.
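A minimal sketch of that adjustment, assuming the bias really is a known constant offset (the function name and numbers here are illustrative, not from the article):

```python
# Hypothetical per-person debiasing: if one forecaster's probabilities on
# a topic run a constant 10 points too favorable, subtract that offset
# before folding them into the crowd average.

def debias(prediction: float, known_bias: float) -> float:
    """Remove a known constant bias, clamping the result to [0, 1]."""
    return min(1.0, max(0.0, prediction - known_bias))

raw = [0.75, 0.55, 0.90]  # one person's raw forecasts on the topic
adjusted = [round(debias(p, 0.10), 2) for p in raw]
print(adjusted)  # [0.65, 0.45, 0.8]
```

The hard part, of course, is estimating that offset reliably per person and per topic, which is exactly why the comment calls it tremendously difficult.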

Which is the whole point. With a big enough crowd, you don't need to do that at all, because the stupid little biases unrelated to the facts tend to be noise, not signal.
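A toy illustration of that averaging-out effect (my own simulation, with assumed zero-mean idiosyncratic biases, not data from the article):

```python
# Give each forecaster a random zero-mean bias around the true probability
# and watch the crowd average tighten as the crowd grows.
import random

random.seed(42)
true_p = 0.6

def crowd_average(n):
    # Each person reports the truth plus personal noise in [-0.2, 0.2],
    # clamped to stay a valid probability.
    return sum(min(1.0, max(0.0, true_p + random.uniform(-0.2, 0.2)))
               for _ in range(n)) / n

print(abs(crowd_average(10) - true_p))     # small-crowd error
print(abs(crowd_average(10000) - true_p))  # typically far closer to 0
```

The caveat is that this only works when the biases are unrelated to the facts, as the comment says; a shared, systematic bias would survive any amount of averaging.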



