> For me the takeaway is that non-trivial independent replication of results still stands as the gold standard for experimentation.
Agreed. Too bad funding agencies rarely, if ever, give you the money to do it :(.
The takeaway here is that statistics is arguably one of the most nuanced quantitative fields out there. It's really easy to shoot yourself in the foot, particularly with p-values.
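To make the foot-gun concrete, here's a minimal sketch (my own hypothetical simulation, not anything from the paper below): run enough tests on pure noise and p-values will flag "significant" results at roughly the rate of your alpha, even though the null is true in every single experiment.

```python
# Sketch: multiple testing on pure noise still yields "significant" results.
# Hypothetical example; numbers (1000 experiments, 100 flips) are arbitrary.
import math
import random

random.seed(0)

n_experiments = 1000
n_flips = 100
z_crit = 1.96  # two-sided critical value at alpha = 0.05

false_positives = 0
for _ in range(n_experiments):
    # Fair coin, so the null hypothesis (p = 0.5) is true by construction.
    heads = sum(random.random() < 0.5 for _ in range(n_flips))
    # z-statistic for H0: p = 0.5, using the normal approximation.
    z = (heads - n_flips * 0.5) / math.sqrt(n_flips * 0.25)
    if abs(z) > z_crit:
        false_positives += 1

rate = false_positives / n_experiments
print(f"false positive rate: {rate:.3f}")  # hovers around 0.05, i.e. alpha
```

Nothing is "wrong" with any individual test here; the trap is treating any one of those ~5% of sub-0.05 p-values as a discovery without correcting for how many tests were run.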
I think every statistical test has its place, but my personal favorite lambasting of p-value testing is Steiger and Fouladi's 1997 paper on non-centrality interval estimation [1].
As an aside, Steiger was my graduate statistics professor several years ago, and probably the primary reason I know this paper even exists. If you enjoy the harsh treatment of significance testing in the paper, just imagine hearing it straight from the horse's mouth during lecture :).
[1]: http://www.statpower.net/Steiger%20Biblio/Steiger&Fouladi97....