Ask participants about 24 things and you are bound to find correlations that are pure statistical noise at that sample size. They might as well have asked about their favorite colors and music while they were at it. This is an obviously bad study funded by industries that are not impartial, and it really has no business being posted here.
I've heard similar statements in the past, but I think I lack the statistics knowledge to make sense of it. Would you mind expanding on that a little?
E.g., suppose that 3 of the questions were meant to test some theory that the researchers have, and the other 21 questions were thrown in for the reasons stated above.
Now suppose that the answers to the 3 seriously-chosen questions weren't predictive of long life, but 5 of the other 21 questions were predictive of long life.
It sounds like you're saying that somehow that 5-question correlation is bogus, but the 3-intentional-question correlation would be legitimate. But the only difference I see between the two groups is subjective: whether or not the researchers anticipated the questions being meaningful. I don't see how/why that would be relevant to making inferences about the wider population.
The danger is similar to any backtested algorithm: you can always go back and find theories that fit your data. That doesn't mean they will be predictive going forward. It also doesn't mean they _won't_ be, and I would say they can form a good hypothesis for a new experiment. It's not that the finding is meaningless, it just lacks the power of scientific evidence. It's comparable to a non-repeatable experiment, where the hypothesis counts as a first "repetition".
At least that's my interpretation as a non-statistician, but I think there's a difference between statistical significance and scientific proof. The findings may be statistically significant without actually proving anything.
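As a toy illustration of that backtesting point (pure noise data made up for the example, nothing from the study): generate a bunch of random trading rules, pick whichever looks best on past data, then check it on data it was never fitted to. The in-sample "winner" only wins because it was selected after the fact.

```python
# Toy illustration: random rules fitted to pure noise look predictive in-sample.
import numpy as np

rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=2000)        # pure noise "daily returns"
in_sample, out_sample = returns[:1000], returns[1000:]

rules = rng.choice([-1, 1], size=(200, 2000))     # 200 random long/short rules
in_perf = (rules[:, :1000] * in_sample).sum(axis=1)
best = int(np.argmax(in_perf))                    # the "theory" fitted to the past

out_perf = (rules[best, 1000:] * out_sample).sum()
print(f"best rule in-sample:     {in_perf[best]:+.3f}")
print(f"same rule out-of-sample: {out_perf:+.3f}")
# The in-sample winner looks great only because it was selected from noise;
# out of sample its edge is gone.
```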
The problem is that the entire premise of the study is flawed, whether three questions were asked or a hundred. Asking a hundred questions simply means it’s more likely your study will “succeed” in the sense of finding something to report that aligns with your agenda. Statistically, take any sample: if you measure enough factors, you are bound to find some spurious correlation in it.
Instead, a better study would consider actual science (microbiology, biochemistry, etc.) and form measurable hypotheses instead of this spray-and-pray pseudoscience.
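To make the "measure enough factors" point concrete, here is a minimal sketch with simulated data (random answers, random outcome, nothing from the actual study). Even when nothing is related to anything, some of the 24 questions come out "statistically significant" at p < 0.05:

```python
# Simulated survey: 24 random yes/no questions vs. a random outcome,
# counting how many "significant" associations appear anyway.
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)
n_participants, n_questions = 1800, 24

answers = rng.integers(0, 2, size=(n_participants, n_questions))  # random yes/no answers
outcome = rng.integers(0, 2, size=n_participants)                 # random "good cognition" flag

significant = 0
for q in range(n_questions):
    table = [[np.sum((answers[:, q] == a) & (outcome == o)) for o in (0, 1)]
             for a in (0, 1)]
    _, p, _, _ = chi2_contingency(table)
    significant += p < 0.05

print(f"'significant' questions out of {n_questions}: {significant}")
# On average about 24 * 0.05 ≈ 1.2 spurious hits per run; the chance of at
# least one is roughly 1 - 0.95**24 ≈ 0.71 if the tests were independent.
```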
Yeah, I don't see how this is anything except a correlation, and the article is so sparse on analysis and data that it seems more misleading than informative. I've read before that exercising your mental capacity can also have a protective effect against loss of cognition in later life. I wonder if this study has any evidence that these food choices (lamb, wine, and cheese) are anything more than a correlate of what more intellectual people might tend to eat. The possibility wasn't even addressed, and no mechanism was suggested to explain why these foods would prevent cognitive decline.
And then this line: "That said, I believe the right food choices can prevent the disease and cognitive decline altogether."
Really? That's a bold claim and I'd love to know what it is based on.
When doing simple questionnaire-based studies like this, it should be standard practice to ask a few purposefully ridiculous control questions. Ask them which brand of wine they like. What's their favorite color, sports team, and grocery store chain?
Genetic Factors of Alzheimer’s Disease Modulate How Diet is Associated with Long-Term Cognitive Trajectories: A UK Biobank Study. DOI: 10.3233/JAD-201058
> Conclusion: Modifying meal plans may help minimize cognitive decline. We observed that added salt may put at-risk individuals at greater risk, but did not observe similar interactions among FH- and AD- individuals. Observations further suggest in risk status-dependent manners that adding cheese and red wine to the diet daily, and lamb on a weekly basis, may also improve long-term cognitive outcomes.
This was a self-reported study, and a purely correlational one.
The study found a correlation, but with a very small effect size.
The other way to read the study is that being well off enough to afford a wine-and-cheese lifestyle vs. a beer-and-bread one is what's neuroprotective. Turns out being rich nearly always helps you live longer!
> “While we took into account whether this was just due to what well-off people eat and drink, randomized clinical trials are needed to determine if making easy changes in our diet could help our brains in significant ways.”
So they are aware of the possibility of this effect, yet didn’t bother with it. Sounds like replication crisis to me.
That's not what the replication crisis is. The replication crisis is more like 20 groups running experiments that each have a 1/20 chance of a positive result if the effect isn't real, the one group that gets the positive result publishing it, the 19 that don't keeping quiet about it, and nobody else bothering to replicate the one successful experiment to see whether it was real or a fluke.
What you’re quoting is more of “follow up studies will look at this”.
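For what it's worth, the arithmetic behind that 20-groups scenario (the hypothetical numbers from the comment above, not from any real survey of labs) is easy to check:

```python
# Chance that at least one of 20 independent groups gets a spurious positive
# when there is no real effect and each test has a 5% false-positive rate.
alpha = 0.05
n_groups = 20
p_at_least_one_positive = 1 - (1 - alpha) ** n_groups
print(f"P(at least one publishable 'positive'): {p_at_least_one_positive:.2f}")
# ≈ 0.64, so a lone published positive with no replications carries little weight.
```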
> So they are aware of the possibility of this effect, yet didn’t bother with it. Sounds like replication crisis to me.
No, that is doing a small study correctly.
The whole point of small-scale studies is to figure out whether paying for a bigger one is a waste of time and money. Randomized clinical trials are expensive; there is no point in running them if small studies don't show an effect. However, if you do a small study, it is correct to point out its limitations.