Deliberation increases the wisdom of crowds (arxiv.org)
71 points by mpweiher on Sept 10, 2017 | 15 comments



An interesting observation on the wisdom of crowds for me is pari-mutuel horse track betting. The odds on each horse, which determine the payout for a bet, are set by how many people make that bet. Very roughly, in pari-mutuel betting (the most common kind) there is a single pool of money bet on a race. If 200 people have placed the same bet on a horse that wins, the pool is divided amongst the 200 (less the roughly 10% held back as profit for the track).

My initial thoughts were that there must be some inefficiency in the odds as people either avoid or engage in betting at certain posted payout ratios. To my surprise, it appears that the actual odds reflect the actual probabilities that the horses will win. Somehow, the crowd at the track is able to bet on race after race and correctly, as a group, determine the probability of winning for all of the horses running. At least they predict it well enough that given the vig (slang for the percent held back by the track) there is no simple mathematical strategy that works to achieve winnings above chance.
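
To make the arithmetic concrete, here is a toy sketch of how a pari-mutuel pool works (the pool sizes and the 10% takeout are made-up numbers, purely to illustrate the mechanism described above):

    # Toy pari-mutuel arithmetic; all numbers are illustrative.
    # The track skims a fixed percentage (the "vig") off the pool, the rest
    # is split among winning tickets, and a horse's implied win probability
    # is simply its share of the total money bet.

    pool = {"A": 5000.0, "B": 3000.0, "C": 2000.0}   # dollars bet on each horse
    vig = 0.10                                        # track's takeout

    total = sum(pool.values())
    net_pool = total * (1 - vig)

    for horse, bet in pool.items():
        payout_per_dollar = net_pool / bet   # what a winning $1 ticket returns
        implied_prob = bet / total           # the crowd's implied win probability
        print(f"{horse}: pays {payout_per_dollar:.2f} per $1 bet, "
              f"implied win probability {implied_prob:.0%}")

If the crowd's implied probabilities match the true win rates, every bet has roughly the same expected return: about minus the vig, which is why no simple strategy beats the pool.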

I'm not a gambler, but I have read a few academic papers on race-track betting[1]. It surprised me that the odds (which determine the payout on a bet: horses with high odds pay out more but are believed to have more remote chances of winning) so accurately predict the chances of a horse winning that there is no simple mathematical strategy worth pursuing.

[1] This was back around 1975 so there is likely to be much more research available now. I remember one paper published in an operations research journal around that year that covered harness racing, but I can't seem to find it now.


The improved accuracy is all due to the increased guess variance between groups, right? If so, it isn't an unintuitive result. I feel as if the authors cheated a little bit in the abstract when they wrote that their findings go against the "prevailing view". Deliberation is expected to reduce the guess variance within groups, but not between them.
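
A crude way to see what I mean (my own toy sketch, nothing to do with the authors' actual experiment): if "deliberation" merely collapses each group onto its own mean, the within-group spread vanishes but the aggregate over groups doesn't move.

    # Toy model only: each group's "consensus" is just its mean, so
    # deliberation removes within-group variance while leaving the
    # variance between group estimates (and hence the aggregate) intact.
    import random

    random.seed(1)
    truth = 324.0                    # e.g. height of the Eiffel Tower in metres
    n_groups, group_size = 40, 5

    individuals, consensus = [], []
    for _ in range(n_groups):
        guesses = [random.gauss(truth, 80) for _ in range(group_size)]
        individuals.extend(guesses)
        consensus.append(sum(guesses) / group_size)  # the group's "deliberated" value

    print(sum(individuals) / len(individuals))   # aggregate of independent guesses
    print(sum(consensus) / len(consensus))       # aggregate of group consensus values
    # With equal group sizes the two aggregates coincide exactly: collapsing
    # the within-group spread costs the final aggregation nothing.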

Great paper, anyway. But it made me wonder: how large can the groups get before they start reducing the accuracy of the aggregate guess?


I considered that too. Maybe it's just some strange wording. Either way, I think the reasoning in consideration of bias sounds solid.


Uh, certainly lots of people actually know the answers to questions such as "the height of the Eiffel Tower"? It's not surprising, then, that crowd accuracy increases when people can talk?

My impression is that an "independent crowd" outperforms a "dependent" one on tasks where it's highly unlikely that any one participant knows the answer for sure (estimating the number of objects, etc.).

I didn't easily find any references to such papers, but this link is one confirmation http://www.bbc.com/future/story/20140708-when-crowd-wisdom-g... (search for "Flawed thinking")

EDIT: Wikipedia seems to be a decent entry point https://en.wikipedia.org/wiki/Wisdom_of_the_crowd#Problems


Sure, but what if several people in the crowd think they know the height of the Eiffel Tower? Does the crowd always go with the most confident individual? The most credentialed?

Presumably the crowd is able to take into account hundreds of these factors and arrive close to the truth.


What circumstances bring out the 'madness of crowds' as opposed to the 'wisdom of crowds'?


Maybe there are different subsets of "leader types" that step forward in different circumstances. When there is a sense that some action should be taken to correct some wrong, but there is nothing particularly productive the group can achieve at that time, a different sort becomes the most visibly confident and people follow.


Probably like a shockwave: when an actionable headline spreads faster than the whole body of information behind it can propagate.

If you're not in serious business, research, or operations yet, you probably haven't experienced the most common state of things: most of the people you're trying to convince want all the information before making a decision, or even passing the headline on to anyone else.


I wonder, how are these results skewed for more complex problems where certain questions and answers are taboo? How easy is it for the wisdom of the crowd to fail once global bias and majority censorship are considered?

Are echo chambers ultimately better or worse for society?


The findings make sense to me... but only as long as there's no money involved, e.g., awards for the best individual prediction. Money might change the incentives for participants to deliberate in good faith.

Also, I suspect the findings would look very different if there were a market on which participants could make money by betting on and against individual predictions. I imagine one would end up with a kind of Keynesian Beauty Contest: https://www.ft.com/content/6149527a-25b8-11e5-bd83-71cb60e8f...


As a matter of fact, betting markets are a fantastic way of aggregating opinion in the wisdom of the crowds literature. If you have some option that will pay out $1 if something is true, and you think there are 70% odds of it happening, you'll buy it if the price is less than 70 cents.
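
The arithmetic behind that decision rule, with made-up numbers (a sketch, not any particular market's payout structure):

    # Buy a $1-payout contract whenever its price is below your subjective
    # probability: the expected profit is then positive. Numbers are illustrative.

    def expected_profit(price, prob, payout=1.0):
        # Win (payout - price) with probability prob, lose price otherwise.
        return prob * (payout - price) - (1 - prob) * price

    p = 0.70                                  # you think the event is 70% likely
    for price in (0.60, 0.70, 0.80):
        print(f"price ${price:.2f}: expected profit ${expected_profit(price, p):+.2f}")
    # Positive below $0.70, zero at $0.70, negative above it, so trading
    # pushes the price toward the crowd's aggregate probability estimate.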


It's fun. A few tips from my predictit.org (healthy:) addiction:

Never buy at the asking price; put in a lower buy order and let the market come to you. Unless it's a question that will be resolved soon, it's likely your buy order will get filled.

Pay attention to how many orders are on each side; often people will "prop up" a market with a series of small orders, which at a glance make it look like a particular position is worth more than it really is.

Put in small orders, over and over; don't make big buys unless you can afford to lose. Each day you have more information, not less.

Never underestimate how far the other side will drive a question down or up. Your perception of what the other side will do is probably wrong, regardless of whether you are actually correct when the question is called. (For example, you could buy "Yes" for T2016 at $0.14 on the dollar when the Billy Bush tape came out.)

Stick to what you know.

Make sure you understand linked markets before buying a bunch of stuff at 0.99.

Keep a strong cash balance. If you put in too many buy orders, you may wake up to find that some were filled and your long-standing buy orders were canceled because you couldn't cover them; now you go to the back of the line when you place them again.

Always have some sell orders on every position. The swings might hit them and then you can re-buy at a lower price.

Consider that there may be players that are not out to make money.

Never use the comment sections for anything other than comic relief.

Keep track of your time investment; even if you are making money, it might not be worth it. In the case of predictit, which is one of the few options for US players, you can't risk more than $850 on any single question (well... really $1650 on linked markets), so you are not going to get rich.


Thank you for showing me (and us) this; I hadn't heard of it before. And thank you twice over for sharing your personal wisdom and experiences.


Can this be explained by confidence levels? I'd guess that "small group consensus" values would be skewed towards the values from people who say "I know this" and away from those who say "I have no idea, here's a wild guess".
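
For example, a crude confidence-weighted average (a toy illustration, not the paper's mechanism) already skews the result toward the "I know this" answers:

    # Toy sketch: weight each guess by the guesser's self-reported confidence.
    guesses = [
        (330.0, 0.9),   # (guess, confidence) -- "I know this"
        (300.0, 0.8),
        (700.0, 0.1),   # "no idea, wild guess"
        (650.0, 0.1),
    ]

    plain_mean = sum(g for g, _ in guesses) / len(guesses)
    weighted = sum(g * c for g, c in guesses) / sum(c for _, c in guesses)
    print(plain_mean)   # 495.0 -- dragged around by the wild guesses
    print(weighted)     # ~353.7 -- skewed toward the confident answers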


When applied to bits of trivia with concrete answers, I can see this as true because the most confident voices that persuade the others are probably more or less in agreement and actually correct.





