This is a no-op paper. They found nothing useful. Maybe an inkling of something extremely weak. Maybe. Certainly nothing that should be popularized, taken up by the media, or used to make any decisions about anything.
The kicker is:
> Effect sizes were small (largest peak d, 0.15).
To put that into language more people here would understand, it's like publishing a paper where your classifier gets 52% accuracy on a binary task (roughly the zeroth-order equivalent of a Cohen's d of 0.15), but you run it over so many ADHD patients that the confidence interval around that 52% becomes very tight. Most people would still ignore it, because it's likely still noise from other small confounds.
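For anyone who wants to check the arithmetic, here's a minimal sketch of that translation. The Φ(d/2) form and the setup (two equal-size groups with unit-variance normal scores, means d apart, classified with a single optimal threshold) are my assumptions for illustration, not anything from the paper; the exact percentage shifts a point or so depending on which convention you use, but it lands in the low 50s either way:

```python
# Minimal sketch: rough Cohen's d -> binary accuracy translation.
# Assumes two equal-size, unit-variance normal groups whose means
# differ by d; the best single-threshold classifier then has
# accuracy Phi(d / 2). Heavy assumptions, zeroth-order only.
from statistics import NormalDist

def accuracy_from_cohens_d(d: float) -> float:
    """Best-case accuracy of a single-threshold classifier on one normal feature."""
    return NormalDist().cdf(d / 2)

print(f"{accuracy_from_cohens_d(0.15):.3f}")  # ~0.530, barely above coin-flipping
print(f"{accuracy_from_cohens_d(0.11):.3f}")  # ~0.522
```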
That is a paper you should not put your name on. I certainly would not. Their effects are also more like 0.11, not 0.15; the largest peak misrepresents the rest of the results.
What they did is aggregate so much data that even a small, inconsequential difference between groups, one that may reflect nothing at all and would go away with slightly different controls for confounds, becomes statistically significant.
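If the mechanism isn't obvious, here's a hypothetical simulation (invented sample sizes, nothing to do with their actual dataset) of how a tiny true difference becomes arbitrarily "significant" once N is large enough:

```python
# Hypothetical illustration: with enough subjects, a true effect of
# d = 0.15 yields a vanishingly small p-value even though the effect
# itself stays negligible. All numbers here are made up.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 5000  # per group, purely illustrative
control = rng.normal(0.00, 1.0, n)
patients = rng.normal(0.15, 1.0, n)  # true Cohen's d = 0.15

t_stat, p_value = stats.ttest_ind(patients, control)
pooled_sd = np.sqrt((patients.var(ddof=1) + control.var(ddof=1)) / 2)
d = (patients.mean() - control.mean()) / pooled_sd
print(f"p = {p_value:.1e}, Cohen's d = {d:.2f}")
# p lands around 1e-13 while d stays ~0.15: "significant", and still nothing.
```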
A lot of the comments here drawing conclusions like "ADHD is about an overly integrated brain" or whatever are really frightening. This means nothing. Science communication at its worst.
>That is a paper you should not put your name on. I certainly would not. Their effects are also more like 0.11, not 0.15; the largest peak misrepresents the rest of the results.
>What they did is aggregate so much data that even a small, inconsequential difference between groups, one that may reflect nothing at all and would go away with slightly different controls for confounds, becomes statistically significant.
>A lot of the comments here drawing conclusions like "ADHD is about an overly integrated brain" or whatever are really frightening. This means nothing. Science communication at its worst.
Here are the paper's authors' affiliations:
Office of the Clinical Director, NIMH, Bethesda, Md. (Norman, Shaw); Section on Neurobehavioral and Clinical Research, Social and Behavioral Research Branch, National Human Genome Research Institute, NIH, Bethesda, Md. (Sudre, Price, Shaw).
Given those affiliations, it's hard for me to believe that these authors should not have put their names on this paper.
Do you have a background and credentials as valid as theirs?
> Do you have a background and credentials as valid as theirs?
I do. But we live in a world where people write you letters for promotion, review your grants, etc. Using your own name to say these things in public is bad for your career. So I'd rather not.
Plenty has been said in scientific publications about how junk like this leads to most published findings being false, about how we shouldn't do statistics this way, etc. Scientists know, but it hasn't been communicated to the public yet.
That being said, the beauty of science is that you don't need to see my CV.
A basic understanding of what Cohen's d is, or a simple translation (with some heavy assumptions) into accuracy like the one I provided, is enough to see that the paper is total trash. They found nothing, but they found it at huge scale, so the p-value was very small.
By no-op I mean that it contributes nothing and means nothing. It's a waste of time for everyone. In a world where bean-counting publications wasn't how science was measured, I'm sure the authors would never have considered this worth writing up, never mind submitting.
The term comes from assembly, which often has a no-op instruction (see https://en.wikipedia.org/wiki/NOP_(code)): something that does nothing (although that nothing may be critical, depending on the instruction set).
"No operation"; in machine language it's an instruction that doesn't do anything, in broader tech lingo it's something that's moot or which requires no work. Commenter is basically saying that nothing actionable can be concluded from the paper's findings.
No-op is computer jargon for "no operation". It comes from CPUs having instructions (or commands) that do effectively nothing. Programming languages sometimes use the term for placeholder functions (or procedures) that do nothing; these placeholders can be replaced later by something that performs a real action. TL;DR: a "no-op" is something that does nothing, or, as used by the parent, a paper that doesn't contribute anything.
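For anyone who wants it concrete, here's a toy Python sketch of that placeholder pattern (all names invented for the example):

```python
# Toy sketch of a no-op used as a replaceable placeholder.
def noop(*args, **kwargs):
    """Accept anything, do nothing, return nothing."""

# A hook that defaults to doing nothing until something real is plugged in.
on_event = noop
on_event("payload")  # runs without effect

def log_event(payload):
    print(f"event: {payload}")

on_event = log_event  # replace the placeholder with real behavior
on_event("payload")   # now prints "event: payload"
```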