
I always found statistics deeply disturbing when I was studying it at school.

You can establish a null hypothesis, test at some confidence level (say, 99%), and find that you cannot reject it.

But, being somewhat ingenious, you may decide to lower the confidence level (to, say, 98%) and find that your experimental evidence is now significant enough for you to reject your null hypothesis and accept your alternative hypothesis.

Lies, damned lies, and statistics, indeed.
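
A minimal sketch of that maneuver, assuming a two-sided z-test and a hypothetical observed statistic of z = 2.4, chosen so the p-value lands between 0.01 and 0.02 (i.e. between the two thresholds):

    # Hypothetical two-sided z-test: the observed statistic is chosen so the
    # p-value falls between 0.01 and 0.02, flipping the decision between the
    # 99% and 98% confidence levels.
    from scipy import stats

    z = 2.4                              # assumed observed test statistic
    p = 2 * stats.norm.sf(z)             # two-sided p-value, roughly 0.016

    for alpha in (0.01, 0.02):           # 99% and 98% confidence levels
        decision = "reject H0" if p < alpha else "fail to reject H0"
        print(f"alpha = {alpha}: p = {p:.4f} -> {decision}")

The same data, the same p-value; only the pre-chosen threshold differs.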




This is why different communities fix a significance threshold in advance (typically p < 0.05). You have to live with some amount of uncertainty when the data-generating mechanism is random... that is, unfortunately, the nature of the beast.
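
A simulation sketch of that built-in uncertainty (assumed setup: samples of size 30 from a standard normal, so the null really is true): a test at the 0.05 level will still reject in roughly 5% of experiments, by design.

    # Under a true null (mean really is 0), a one-sample t-test at
    # alpha = 0.05 rejects in about 5% of repeated experiments.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    alpha, n_experiments, n = 0.05, 10_000, 30
    false_rejections = 0

    for _ in range(n_experiments):
        sample = rng.normal(loc=0.0, scale=1.0, size=n)   # H0 is true here
        if stats.ttest_1samp(sample, 0.0).pvalue < alpha:
            false_rejections += 1

    print(false_rejections / n_experiments)   # expected to be close to 0.05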


I understand this. But let's say I ran some experiments and collected some data in an attempt to disprove theory X. What does it mean for me to say that at the 99% confidence level I cannot reject X, but at the 98% confidence level I can? I just find it a bit spooky, is all.
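
One way to see what is going on (a sketch, reusing the hypothetical numbers from above: an estimate sitting 2.4 standard errors from the hypothesized value): the 99% confidence interval still covers that value, while the narrower 98% interval does not. Both statements are really one statement about where the p-value (about 0.016) falls relative to the chosen threshold.

    # Confidence-interval view of the same situation (hypothetical numbers):
    # the 99% interval (half-width 2.576 * se) covers the hypothesized value,
    # while the 98% interval (half-width 2.326 * se) does not.
    from scipy import stats

    estimate, se, hypothesized = 2.4, 1.0, 0.0   # assumed values
    for level in (0.99, 0.98):
        z_crit = stats.norm.ppf(1 - (1 - level) / 2)
        lo, hi = estimate - z_crit * se, estimate + z_crit * se
        covered = lo <= hypothesized <= hi
        print(f"{level:.0%} CI: [{lo:.3f}, {hi:.3f}], covers 0: {covered}")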



