I took part in the last round of the Good Judgment Project and finished in the top 20. I also work in the field (although on more specific incident forecasting than the GJP covered).
I think the points made in the book (and on the GJP blog[1]) are useful. The idea of trying to put probabilities around your assumptions is pretty useful, for example.
The other lesson I learnt was that predicting that things will stay the same is generally a safe bet. Often the challenge is working out which side of a prediction requires fewer things to change.
If you are interested in the details, [2] is a pretty good overview.
They make it sound simple: apply a rational, scientific mindset to life. Talk about easier said than done. Most of us sink into easy-to-reach reactions that are only good for the simplest situations (and sometimes not even then), and that often come loaded with emotion or resentment. Still, the rational mindset is a nice ideal to shoot for.
I have an honest question. If I believe I would make an incredible forecaster, where can I start testing myself? Is there something that requires significantly less information before you start forecasting than, say, stocks?
Except that's not entirely true, and a few minutes' thought should convince you that some people are better at playing the piano than others, even after years of practice.
Not all practice is equal. Someone who puts in x hours with the help of world class teachers and approaches the task with a focused plan will beat someone who practices just as many hours on their own without a plan or focus.
I've read a number of studies that contradict your statement. They say that the ONLY thing that matters is the amount of practice and natural skills play no part in level of mastery.
I assume that that's intended as a reference to a simple method of disproving the contention that only practice time matters.
(It certainly supports the idea that practice time on the specific task matters -- which is noncontroversial -- but it doesn't support the idea that practice time alone determines performance.)
I highly doubt anybody can make such a study, because nobody can run such an experiment. What did they do - measure talent in infants, forbid them to practice, and compare them to practiced individuals? Heh :) Don't be ridiculous. The studies you supposedly read are junk.
> Don't be ridiculous. The studies you supposedly read are junk.
Pretty harsh words for somebody who obviously has no background in psychological research :)
Of course it would be impossible to construct such an experiment. OTOH it's not that hard to design robust field studies for topics like this one. And that has been done, many times.
In general, you don't have to measure talent in infants, you just have to be able to correctly predict future performance without talent as a variable.
Try to get this book from a library near you for an overview of the current state of expertise research:
You most certainly cannot dissever talent from the rest of the variables.
Field studies? Pp-lee-ase. Take that voodoo science somewhere else. Field studies' purpose is not to give answers but to define more concrete questions - to serve as a starting point for a controlled experiment, and that might be impossible, like in this case. Just because some professor says something, that doesn't mean he is correct. There is more falsification in psychology than in nutrition and medicine, and that certainly says something. A lot, if not the overwhelming majority, of psychology researchers do not understand p-values and simple theory-hypothesis construction, let alone the structural equation modeling menu dialogues in SPSS they click on. All they have is theories. No, not like gravity. More like Jesus and Muhammad - unsubstantiated theories.
I read a metric ton of psychographic research because of my job, which is in quantitative marketing research.
Here's an experiment: get a million people to guess the results of a thousand coin-tosses. By the end, a miraculous few will have a 100% record and will be hailed as geniuses. Books will be written about this incredible group. These books will find the traits that, by chance, the group members happen to share with each other. ("My god, they all have brown hair!" "Yes, but they come from eclectic backgrounds - housewives, factory workers, math professors." "How fascinating!") But what is the probability of any one member of that group correctly guessing the result of the next coin toss?
If you get one person to guess the result of 1000 coin tosses, that's a 1/(2^1000) probability of one person being right. Get a million people to guess and it's a (10^6)/(2^1000) probability of one person being right.
Let's put it in perspective: the chance of one person out of a million getting a 100% streak is 9.332 × 10^-296. That's significantly less than 0.000000000000000000000000001%.
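If you want to sanity-check that figure, a Python REPL does it in one line (Python chosen just for illustration; its integers are arbitrary-precision, so the big power is exact before the final division):

    >>> 10**6 / 2**1000
    9.332636185032189e-296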
If there are people who have a 100% record of being correct on every guess, the probability of them being geniuses is way higher than the probability of them being lucky.
That being said, I get your point. We don't have enough metrics from the research to determine whether or not these super forecasters are just lucky or geniuses.
It turns out not to matter (since the numbers are so small) but your math implies that 10^305 people guessing would give a probability of about 9,300 that at least one of them is right on all 1000 coin flips. Since probabilities can't be larger than 1, that's a bit of a problem.
For anyone interested, the full set of steps (that produces a numerically identical result):
Prob[1 or more in 1,000,000 right]
= 1 - Prob[all 1,000,000 wrong]
= 1 - Prob[person 1 is wrong AND person 2 wrong AND ... person 1,000,000 wrong]
= 1 - Prob[person 1 is wrong]^1,000,000
= 1 - (1 - 0.5^1000)^1,000,000
= 1 - exp(1,000,000 * log(1 - 0.5^1000))
= 1 - exp(1,000,000 * log1p(-0.5^1000))
≈ 1 - exp(1,000,000 * -9.33 × 10^-302)
= 1 - exp(-9.33 × 10^-296)
= -expm1(-9.33 × 10^-296)
= 9.33 × 10^-296
log1p(x) = log(1 + x) but is more accurate when x is near zero.
expm1(x) = exp(x) - 1 but again is more accurate when x is near zero.
Both are necessary here to get a result other than "0".
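For the curious, here's a minimal Python sketch of the same computation (the language is my choice, just for illustration). The naive expression prints exactly 0.0 because 1 - 0.5^1000 rounds to 1.0 in double precision, while the log1p/expm1 route keeps the tiny quantity intact:

    import math

    p = 0.5 ** 1000   # one guesser getting all 1000 tosses right, ~9.33e-302
    n = 1_000_000     # number of guessers

    # Naive form: 1 - p rounds to exactly 1.0, so the whole thing collapses.
    print(1 - (1 - p) ** n)                 # 0.0

    # Stable form: nothing is ever added to 1 directly.
    print(-math.expm1(n * math.log1p(-p)))  # ~9.332636185032189e-296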
It's perfectly reasonable to approximate the exact union a + b(1-a) by a + b when combining rare events, without announcing it to everybody. Likewise with n repetitions of that.
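For a concrete sense of how small the error is: with a = b = 10^-6, the exact a + b(1-a) is 2 × 10^-6 - 10^-12 versus the shortcut's 2 × 10^-6, a relative error of about 5 × 10^-7 - negligible at the scales above.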
You should read Tetlock's other work about foxes and hedgehogs in Expert Political Judgment. It presents a theory of why some people are indeed better forecasters than others. It's not luck or genius; it's more about people who are able to weigh several different data points, consider historical precedent, and make a relatively unbiased judgment about what's going to happen. These are the same types of people he sought out for the Good Judgment Project, and it's these same types of people who were considered Superforecasters, making hundreds of forecasts over the course of a year about geopolitical issues.
Presumably you think that the people running the study didn't think of this?
Here's a quote from a New York Times article about the project -
> In the second year of the tournament, Tetlock and collaborators skimmed off the top 2 percent of forecasters across experimental conditions, identifying 60 top performers and randomly assigning them into five teams of 12 each. These “super forecasters” also delivered a far-above-average performance in Year 2. Apparently, forecasting skill cannot only be taught, it can be replicated.
So the answer to the question "What is the probability any one member of the group correctly guesses the result of the next coin toss?" appears to be "reasonably high".
So you're suggesting instead that 'superforecasters' exist, but that they haven't been discovered by finance? It'd be a pretty lucrative market for people able to predict the future. I think it's far more likely that it's less than excellent science.
[1] http://goodjudgment.com/gjp/
[2] http://www.nesta.org.uk/sites/default/files/1502_working_pap...