I think a more interesting question is how to explain the extreme disconnect between data volume and confidence in the two examples being compared. The election has a huge number of polls, analysts, and subject matter experts focused on it, and yet our ability to predict its outcome is comparable to an A/B test with only 30 observations in total?
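
To make the n = 30 comparison concrete, here's a rough back-of-envelope sketch (Python; the normal approximation and the 50% base rate are my own illustrative assumptions, not numbers from the article):

    import math

    def ci_half_width(n, p_hat=0.5, z=1.96):
        # Half-width of an approximate 95% confidence interval for a
        # proportion, using the normal approximation. p_hat = 0.5 is
        # the worst case for the variance.
        return z * math.sqrt(p_hat * (1 - p_hat) / n)

    print(round(ci_half_width(30), 3))      # ~0.179 -> 50% +/- ~18 pts
    print(round(ci_half_width(30_000), 3))  # ~0.006 -> 50% +/- ~0.6 pts

That's the puzzle: samples at polling scale should pin the rate down to well under a point, yet the practical forecast uncertainty feels closer to the n = 30 case.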

The only sense I can make of this is that:

- Predictions about stochastic dynamic systems are hard, especially when exogenous variables can intervene out of nowhere.

- Competitive situations are hard to predict, especially when they change in response to the measurements produced about them (e.g. campaigns adjust strategy based on polling). That feedback effectively makes the measurements less predictive.

To me, this suggests that all the polling and analysis are a waste of resources and attention: the expected value of the information is extremely low. We should kick off campaigns on Halloween, talk about them for a couple of days, and vote immediately. All the pollsters and analysts could be more usefully deployed studying other systems.




This seems to assume that the value of all this coverage and analysis lies in the correctness of the prediction.

It seems more likely that the media companies profit enormously from all of this coverage in spite of (and perhaps even because of) the low certainty provided by this polling and analysis.



