> statistical significance measures like the p-value are more epistemologically sound than any arbitrary rule-of-thumb threshold on the size of the effect.
Keep in mind that the GP isn't saying the effect doesn't exist if it's in the single digits, but that it is inconclusive and/or insignificant: insignificant in the human sense, not the statistical sense.
A 1% increase in this behavior? Irrelevant to almost everyone.
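To make the distinction concrete, here's a quick simulation. All the numbers are made up for illustration (a 30% baseline rate for the behavior, a 1% relative lift, two million users per arm): with enough data, an effect nobody would care about still clears p < 0.05 by a mile.

```python
# Sketch: statistically significant but humanly insignificant.
# Baseline rate, lift, and sample size are assumed, not from the thread.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 2_000_000                      # users per arm (assumed)
p_control, p_treat = 0.300, 0.303  # a 1% relative increase in the behavior

control = rng.binomial(1, p_control, n)
treat = rng.binomial(1, p_treat, n)

# Two-proportion z-test, computed directly.
p_pool = (control.sum() + treat.sum()) / (2 * n)
se = np.sqrt(p_pool * (1 - p_pool) * 2 / n)
z = (treat.mean() - control.mean()) / se
p_value = 2 * stats.norm.sf(abs(z))

print(f"difference: {treat.mean() - control.mean():.4f}, p = {p_value:.2e}")
# The p-value comes out far below 0.05, yet the lift is 0.3 percentage points.
```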
This, of course, is not even getting into the reproducibility crisis, much of which involved work that leaned heavily on p-values. While I'm personally happy to run significance tests, skepticism of small effects is well founded. Were someone else to try to reproduce the effect and fail, the standard defense is that the results are sensitive to the methodology used. It's much easier to invoke that defense if your effect is 1% vs. 20%.
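You can see the fragility directly in a replication simulation. Again the numbers are assumed (30% baseline rate, 1,000 subjects per arm): at the same sample size, an honest attempt to replicate a 1% effect reaches p < 0.05 barely more often than chance, while a 20% effect replicates most of the time.

```python
# Sketch: how often does a same-size replication of each effect hit p < 0.05?
# Sample size, replication count, and rates are all assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, reps = 1_000, 2_000  # per-arm sample size and replication attempts

def replication_rate(p_control, p_treat):
    hits = 0
    for _ in range(reps):
        a = rng.binomial(1, p_control, n)
        b = rng.binomial(1, p_treat, n)
        # Welch t-test on the 0/1 outcomes as a simple stand-in test.
        if stats.ttest_ind(a, b, equal_var=False).pvalue < 0.05:
            hits += 1
    return hits / reps

print("1% effect :", replication_rate(0.300, 0.303))  # rarely replicates
print("20% effect:", replication_rate(0.300, 0.360))  # usually replicates
```

So a failed replication of the 1% effect tells you almost nothing, which is exactly why "the results are sensitive to methodology" is such an easy defense there.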