> Authors of psychology papers are incentivized to produce papers with p values below some threshold, usually 0.05, but sometimes 0.1 or 0.01. Masicampo et al. plotted p values from papers published in three psychology journals and found a curiously high number of papers with p values just below 0.05.
This is on topic; there is a discontinuity there which is an example of the same type of thing the rest of the post talks about.
But it's not the biggest problem illustrated by that graph. The dot at "p is just barely less than 0.05" is an outlier. But it's an outlier from what is otherwise a regular pattern that clearly shows that smaller p-values are more likely to occur than larger ones are. That's insane. The way for that pattern to arise without indicating a problem would be "psychologists only investigate questions with very clear, obvious answers". I find that implausible.
The graph shows that those low p-values are more likely to be in papers, not that they’re more likely to occur. Is that suspicious? I don’t know enough about it to judge.
> The graph shows that those low p-values are more likely to be in papers
This is an important distinction, in my experience [0].
Many papers will report a p-value only if it is below a significance threshold; otherwise they will report "n.s." (not significant) or give a range (e.g. p > .1). This means that in addition to the pressure to shelve insignificant results, publication bias also manifests as a tendency to emphasize and carefully report significant findings, while only mentioning in passing those that don't meet the threshold.
[0] I happen to be working on a meta-analysis of psychology and public health papers at the moment. One paper that we're reviewing constructs 32 separate statistical models, reports that many of the results are not significant, and then discusses the significant results at length.
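As a rough illustration of that reporting convention, here is a minimal sketch; the `report_p` helper and the example p-values are hypothetical, not taken from any of the papers under discussion.

```python
# Hypothetical sketch of the reporting convention described above: exact
# p-values are printed only when they clear the significance threshold;
# everything else collapses into "n.s." or a coarse range.

def report_p(p, alpha=0.05):
    """Mimic how many papers report a p-value (illustrative only)."""
    if p < alpha:
        return f"p = {p:.3f}"   # significant: reported precisely
    if p >= 0.1:
        return "p > .1"         # coarse range
    return "n.s."               # not significant, no exact value given

exact_pvalues = [0.003, 0.032, 0.048, 0.051, 0.074, 0.210, 0.430]
for p in exact_pvalues:
    print(f"{p:.3f} -> {report_p(p)}")

# Only the sub-threshold values survive with full precision, so a later
# tally of reported p-values is already skewed toward them.
```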
> Many papers will report a p-value only if it is below a significance threshold; otherwise they will report "n.s." (not significant) or give a range (e.g. p > .1).
But the oddity here is a pronounced trend in the reported p-values that meet the significance threshold. The behavior you mention cannot create that trend.
> The graph shows that those low p-values are more likely to be in papers, not that they’re more likely to occur.
It looks to me like the y-axis is measured in number of papers. The lower a p-value is, the more papers there are that happened to find a result beating the p-value.
So the chart implies that, before any selection, low p-values are more likely to occur than high p-values are. That is most certainly not true in general. We might guess that psychologists are fudging their p-values somehow, or that journals are much, much, much, much, much, much, much more likely to publish "chewing a stalk of grass makes you walk slower, p < 0.013" than they are to publish "chewing a stalk of grass makes you walk slower, p < 0.04".
I've emphasized the level of bias the journals would need to be showing -- over fine distinctions in a value that is most often treated as a binary yes or no -- because it is much easier to get p < 0.04 than it is to get p < 0.013.
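For a sense of scale, a quick Monte Carlo sketch; the effect size (d = 0.4) and group size (n = 30) are my own illustrative choices, not anything from the chart or the studies behind it.

```python
# Rough check that p < 0.04 is a much easier bar to clear than p < 0.013,
# both when there is no effect and when there is a modest real one.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

def simulate_pvalues(effect, n=30, trials=10_000):
    """Two-sample t-test p-values for a given true mean difference."""
    return np.array([
        ttest_ind(rng.normal(0.0, 1.0, n), rng.normal(effect, 1.0, n)).pvalue
        for _ in range(trials)
    ])

for effect in (0.0, 0.4):   # null hypothesis vs. a modest true effect
    p = simulate_pvalues(effect)
    print(f"d = {effect}: P(p < 0.04) = {np.mean(p < 0.04):.3f}, "
          f"P(p < 0.013) = {np.mean(p < 0.013):.3f}")
```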
Conditional on being published, this is true. Hence studies of the file-drawer effect and what not.
More generally, scientists are incentivised to produce novel findings (i.e. unexpectedly low p-values) or risk losing their jobs.
Given that, the plot doesn't surprise me at all. (Also, people will normally not report a bunch of non-significant results, which is a similar but separate problem.)
I think what they meant was that we would expect the distribution of p-values to be uniform, if we had access to every p-value ever calculated (or a random sample thereof).
Publishing introduces a systematic bias, because it's difficult to get published when p > 0.05 (or whatever the disciplinary standard is).
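A minimal sketch of that idea, under the simplifying assumption that every tested hypothesis is null: the complete set of p-values is uniform, and a hard publish-only-below-0.05 filter leaves a flat distribution under the threshold rather than a decreasing one. All the numbers here are simulated; nothing is taken from the actual chart.

```python
# If every computed p-value came from a true null, the complete collection
# would be uniform on [0, 1]; a crude "publish only if p < 0.05" filter
# then leaves a flat distribution below 0.05, not a downward-sloping one.
import numpy as np

rng = np.random.default_rng(1)
all_p = rng.uniform(0.0, 1.0, 100_000)   # every p-value ever calculated
published = all_p[all_p < 0.05]          # crude publication filter

bins = np.arange(0.0, 0.051, 0.01)
counts, _ = np.histogram(published, bins=bins)
for lo, hi, c in zip(bins[:-1], bins[1:], counts):
    print(f"{lo:.2f}-{hi:.2f}: {c}")     # roughly equal counts per bin
```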
> Publishing introduces a systematic bias, because it's difficult to get published when p > 0.05 (or whatever the disciplinary standard is).
That explains why the p-values above 0.05 are rare compared to values below 0.05. But it fails to explain why p-values above 0.02 are rare compared to values below 0.02.
I agree with your point from your previous post, that lower p-values are harder to get than higher ones, at least if one is looking at all possible causal relationships, but there are at least two possible causes for the inversion seen in publishing. The first is a general preference for lower p-values on the part of publishers and their reviewers (by 'general' I mean not just at the 0.05 value); the second is that researchers do not randomly pick what to study - they use their expertise and existing knowledge to guide their investigations.
Is that enough to tip the curve the other way across the range of p-values? Well, something is, and I am open to alternative suggestions.
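One way to see how the second cause could tip the curve on its own is a small simulation in which most tested hypotheses are actually true; the true-hypothesis fraction, the effect size, and the one-sided z-test framing are all illustrative assumptions, not estimates of real practice.

```python
# If researchers' expertise means most tested hypotheses are true, the
# resulting p-values pile up near zero, so the counts fall as p rises
# across the 0-0.05 range even before any publication filter is applied.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
n_studies, true_fraction, delta = 50_000, 0.7, 2.5   # illustrative choices

is_true = rng.random(n_studies) < true_fraction
z = rng.normal(np.where(is_true, delta, 0.0), 1.0)   # one z-score per study
pvals = norm.sf(z)                                   # one-sided p-values

bins = np.arange(0.0, 0.101, 0.01)
counts, _ = np.histogram(pvals, bins=bins)
for lo, hi, c in zip(bins[:-1], bins[1:], counts):
    print(f"{lo:.2f}-{hi:.2f}: {c}")                 # counts fall as p rises
```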
One other point: while the datum immediately below 0.05 would normally be considered an outlier, the fact that it is next to a discontinuity (actual or perceived) renders that call less clear. Personally, I suspect it is not an accidental outlier, but given that it does not produce much distortion in the overall trend, I am less inclined to see the 0.05 threshold (actual or perceived) as a problem than I did before I saw this chart.
> Personally, I suspect it is not an accidental outlier, but given that it does not produce much distortion in the overall trend, I am less inclined to see the 0.05 threshold (actual or perceived) as a problem than I did before I saw this chart.
Don't be fooled by the line someone drew on the chart. There's no particular reason to view this as a smooth nonlinear relationship except that somebody clearly wanted you to do that when they prepared the chart.
I could describe the same data, with different graphical aids, as:
- uniform distribution ("75 papers") between an eyeballed p < .02 and p < .05
- large spike ("95 papers") at exactly p = .0499
- sharp decline between p < .05 and p < .06
- uniform distribution ("19 papers") from p < .06 to p < .10
- bizarre, elevated sawtooth distribution between p < .01 and p < .02
And if I describe it that way, the spike at .05 is having exactly the effect you'd expect, drawing papers away from their rightful place somewhere above .05. If the p-value chart were a histogram like all the others instead of a scatterplot with a misleading visual aid, it would look pretty similar to the other charts.
Well, you could take this mode of analysis to its conclusion and, for each dataset, describe each datum by its difference from its predecessor and successor, but would that help? I took it as significant that you wrote "...but it's an outlier from what is otherwise a regular pattern that clearly shows that smaller p-values are more likely to occur than larger ones are" (my emphasis), and that is what I am responding to.
I think we are both, in our own ways, making the point that there is more going on here than the spike just below 0.05 - namely, the regular pattern that you identified in your original post. If we differ, it seems to be because I think it is explicable.
WRT p-values of 0.05: I almost said, but did not, that if you curve-fitted the values above and below 0.05 independently, there would be a gap between the two fits at the threshold, and maybe even if you left out the value immediately below 0.05. No doubt there would also be a gap at other values, but I am guessing it would peak at 0.05. If I have time in the near future, I may try it. If you do, and find that I am wrong, I will be happy to recant.
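For what it's worth, here is a sketch of what that check could look like; the bin counts are synthetic, generated only so the snippet runs end to end, and with the real chart you would substitute the digitized papers-per-bin numbers.

```python
# Fit a trend to the bin counts below and above 0.05 separately (dropping
# the bin just under 0.05) and compare what each fit predicts at the
# threshold. The data here are made up purely for demonstration.
import numpy as np

rng = np.random.default_rng(4)

# Synthetic "papers per 0.005-wide bin": a smooth decay plus an artificial
# bump in the last bin under 0.05, standing in for digitized chart data.
centers = np.arange(0.0025, 0.10, 0.005)
counts = 90 * np.exp(-25 * centers) + rng.poisson(5, centers.size)
counts[np.argmin(np.abs(centers - 0.0475))] += 30    # the 0.05 spike

keep_below = centers < 0.045    # bins below 0.05, with the spike bin dropped
above = centers > 0.05

# Simple exponential trend on each side (linear fit in log-counts).
fit_below = np.polyfit(centers[keep_below], np.log(counts[keep_below]), 1)
fit_above = np.polyfit(centers[above], np.log(counts[above]), 1)

pred_below = np.exp(np.polyval(fit_below, 0.05))
pred_above = np.exp(np.polyval(fit_above, 0.05))
print(f"trend from below predicts {pred_below:.1f} papers at p = 0.05")
print(f"trend from above predicts {pred_above:.1f} papers at p = 0.05")
# A large gap between the two predictions would be evidence that the
# threshold itself, not just the smooth trend, is shaping where papers land.
```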
> The way for that pattern to arise without indicating a problem would be "psychologists only investigate questions with very clear, obvious answers". I find that implausible.
Don't throw the Seldon out with the bathwater. I think there is a very real chance that the hypotheses psychologists investigate are extremely probable in the societies where they investigate them.
In your model, a psychologist does these things in this sequence:
1. Choose a question to investigate.
2. Get some results.
3. Compute p < 0.03.
4. Toss the paper in the trash, because p < 0.03 isn't good enough.
But that's not how they operate. The reason there's a spike at 0.05 is that that's what everyone cares about. If you get p < 0.03, you're doing better than that!
So the bias in favor of even lower p-values is coming from somewhere else. It definitely is not coming from the decision point of "OK, I've done the research, but do I publish it?".