I wouldn't be certain. A lot of researchers really are clueless about anything to do with statistics.
It takes much more work/time to do real research, so the funding system has actually been selecting for people who can remain ignorant (so it is not fraud) and just produce p-hacked "results". In many areas, this has been going on for multiple generations now and you are trained to do it as a grad student.
It's worse than just not knowing statistics. These errors were in a table in one of the published papers. Every single row is calculated wrong.
> (43.67 - 44.55)/sqrt(18.5^2/43 + 14.3^2/52)
should be 0.2551872, rounds to 0.26, he put 0.25
> (68.65 - 66.51)/sqrt(3.67^2/43 + 9.44^2/52)
should be 1.503114, rounds to 1.5, he put 1.38
> (184.83 - 178.38)/sqrt(63.7^2/43 + 45.71^2/52)
should be 0.556064, rounds to 0.56, he put 0.52
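The three rows can be re-checked in a few lines of Python. This just re-evaluates the quoted expressions (two-sample t with an unpooled denominator); nothing is assumed beyond the numbers already given above:

```python
import math

def t_stat(m1, s1, n1, m2, s2, n2):
    """Two-sample t-statistic with an unpooled (Welch-style) denominator."""
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

rows = [
    # (mean1, sd1, n1, mean2, sd2, n2)
    (43.67, 18.5, 43, 44.55, 14.3, 52),     # recomputes to -0.26 (0.26 in magnitude); paper: 0.25
    (68.65, 3.67, 43, 66.51, 9.44, 52),     # recomputes to 1.50; paper: 1.38
    (184.83, 63.7, 43, 178.38, 45.71, 52),  # recomputes to 0.56; paper: 0.52
]
for row in rows:
    print(round(t_stat(*row), 2))
```

Not one of the three rounds to the value printed in the table.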
And that's just basic math, nothing fancy. So, he knows it's bad. I assume it's difficult to get a doctorate at Stanford and then become a professor at Cornell, so there's got to be more to this than him just being stupid.
There are variants on the t-statistic calculation. I assume these were all checked? Either way, the variant used should be mentioned in the methods of the paper, along with the software used... I wonder if they used Excel; in that case, who knows what may have happened. I've seen Excel change numbers/names to dates, and all sorts of wackiness.
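For anyone checking the variants: the two common ones are the unpooled (Welch-style) denominator used in the recalculations above and the pooled-variance (Student) version. A quick sketch, plugging in the second row's numbers; interestingly, the pooled version lands near 1.40 for that row, closer to, but still not matching, the 1.38 in the table:

```python
import math

def t_unpooled(m1, s1, n1, m2, s2, n2):
    # Welch-style: separate variances, no pooling
    return (m1 - m2) / math.sqrt(s1**2 / n1 + s2**2 / n2)

def t_pooled(m1, s1, n1, m2, s2, n2):
    # Student: pooled variance, assumes equal population variances
    sp2 = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    return (m1 - m2) / math.sqrt(sp2 * (1 / n1 + 1 / n2))

# Second row from the table above:
print(round(t_unpooled(68.65, 3.67, 43, 66.51, 9.44, 52), 2))  # 1.5
print(round(t_pooled(68.65, 3.67, 43, 66.51, 9.44, 52), 2))    # 1.4
```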
> a doctorate at Stanford and then become a professor at Cornell
As an aside, academia is no meritocracy. A lot of what's required to get posts at schools like these amounts to self-promotion and luck (graduating in the right year & from the right lab).
Of course hard work and technical competence are also required, so your point stands.
Yeah...wasn't saying that path implies genius. It should, though, denote the ability to do basic math, or the common sense to have someone proofread your papers. This is just so odd to me, has me wondering if he's trolling.
> I wouldn't be certain. A lot of researchers really are clueless about anything to do with statistics.
This is definitely true. I once saw a book with a title similar to "Statistics for Dummies" in a professor's office. He had plenty of access to statisticians at the university too. Unfortunately, if a given field involves many people ignorant of statistics, these problems may not be called out during the peer review process.
> Unfortunately, if a given field involves many people ignorant of statistics, these problems may not be called out during the peer review process.
As I said, poor understanding of statistics has been a beneficial trait in many areas of research for decades now. Universities and labs are rewarded with funding because they are able to (honestly, but incorrectly) pump out more "results" than if the researchers were properly trained.
So the issue goes well beyond the problems "not being called out". The problems are institutionalized. Editors and reviewers will try to force you to commit the errors in order to publish (and hence have a career). That is what they have been trained to consider science.
> He had plenty of access to staticians at the university too.
Assuming the book was for his own learning, as opposed to just curiosity or for his students, there is still nothing wrong with learning from a book, assuming he really doesn't know the statistics.
And it definitely beats people who don't know either, but prefer to stay ignorant just so they don't look like someone who reads a beginner's book.