
They had a neurotypical control group, and their effect size is massive and clearly separates the autistic group from the control group. They reject the null hypothesis that there is no difference between the groups, and argue from the literature for an alternative hypothesis. What's the problem?


There's a step left out, which is how they know that what happens in the BCO models also happens in real brains in developing fetuses. That might be context known from previous research, but it isn't explained in the linked article, as far as I can see.


I don't think you need that specific interpretation to decide that this work reveals something important. We have a phenotypic difference that leads to a big in vitro difference, and it's not a huge jump to infer that that would also lead to an in vivo difference, even if the in vivo presentation is unknown, or the exact details of the presentation aren't the same.

Here's the conclusions section from the research paper this article is summarizing:

By embryogenesis, the biological bases of two subtypes of ASD social and brain development - profound autism and mild autism — are already present and measurable and involve dysregulated cell proliferation and accelerated neurogenesis and growth. The larger the embryonic BCO size in ASD, the more severe the toddler’s social symptoms and the more reduced the social attention, language ability, and IQ, and the more atypical the growth of social and language brain regions.

This is not making any huge logical jumps that I noticed. All it's saying is that they found a strong correlation. And the researchers seem to be well aware that there are still dots to connect. The limitations section explicitly points out more or less the very thing that the researchers are being accused of not thinking about in this HN thread:

The genetic causes and cellular consequences of decreased Ndel1 activity and expression correlated with ASD BCOs enlargement remain to be specified. A limitation of most previous ASD patient-derived iPSC-based models is lack of within-subject statistical linkage of ASD molecular and cellular findings with variation in ASD social phenotypes. Without this, future ASD iPSC reports will continue to have limited impact on our understanding of the genetic, molecular and cellular mechanisms that cause the development and variation in the central feature of ASD: social affect and communication.

Which brings us to an important thing about interpreting popular science literature: it's unwise to assume that what's in the popularization of the research accurately reflects everything the scientists who published the work think or know. Attempting to eliminate these kinds of details is one of the primary goals of science journalism. For better or for worse.

https://molecularautism.biomedcentral.com/articles/10.1186/s...


I don't think the other poster was objecting to the excitement about this being a potentially important discovery, or criticizing the research or the researchers. They are objecting to the way the linked article presents it, specifically its claim that "Courchesne and Muotri have established that brain overgrowth begins in the womb."

As readers of this article we can fill in the blanks and imagine that there might be a well-understood and well-founded way to extrapolate observations of these BCO samples to fetal brain development, but we can equally well imagine that this extrapolation might be tricky or unreliable. So as lay readers we're left to guess which mistake the author made: did they overstate the conclusion based on a bad assumption, or, after already explaining so much about the research and connecting so many dots for the reader, did they forget to explain why we can confidently draw conclusions about real fetal brains from these in vitro models? Obviously the second is more forgivable, but it's annoying either way.


They don't. Doing studies on living human fetuses is an admin nightmare. So they use analogues. Black hole physicists also resort to analogues, as their subjects are similarly difficult to manipulate.


I would hope that the epistemological status of medical discoveries exceeds somewhat that of knowledge of black holes.


Not a good analogy since only one of those is actually present on Earth.


But both are untouchable.


You can image and get in vivo estimates of brain volume. Here is a paper that does so: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8583264/


I did not see information on the ancestry of the 9-10 ASD toddlers or on the even fewer neurotypical controls. May have missed this.

As for large putative effect sizes, this is often due to subtle batch-processing differences between cases and controls. Were they all processed by the same tech in an interleaved way? All stored in the same way over this very long duration study? The authors do discuss batch controls, but with single-digit sample sizes I regard the statistics as fundamentally unreliable.
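
To illustrate what a batch confound can do (made-up numbers, not this paper's data): if every case is processed in one batch and every control in another, a modest systematic offset between batches shows up as a "group" difference even when the true biological difference is zero. A minimal sketch in Python:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)

    def spurious_hit_rate(batch_offset=1.0, n=10, trials=10_000, alpha=0.05):
        hits = 0
        for _ in range(trials):
            controls = rng.normal(0.0, 1.0, n)               # all handled in batch 1
            cases = rng.normal(0.0, 1.0, n) + batch_offset   # all handled in batch 2
            _, p = stats.ttest_ind(cases, controls)
            hits += p < alpha
        return hits / trials

    # No true biological effect, yet the batch offset alone yields a
    # "significant" group difference far more often than the nominal 5%.
    print(spurious_hit_rate())

Interleaving subjects across batches (or modeling batch explicitly) is what breaks this confound, which is why the interleaving question matters.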

I am also not convinced by the claim of any over-proliferation of neurons in autism during development. It is certainly a highly controversial result. See the notes above by “subiculum…” and the Li et al. paper he cites.


The sample size is minuscule (14)


That’s true of most neuroimaging studies. Have you ever tried to get a bunch of people into an MRI for a study? Not easy, not cheap.

Like they said, the effect size is large. With a large enough difference, you can distinguish the effect from statistical randomness, even with a small sample size.
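
To put a rough number on that (my own toy simulation with assumed effect sizes, not the paper's data): a plain two-sample t-test with only 7 subjects per group catches a very large standardized effect most of the time, while a modest one mostly slips through.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def detection_rate(effect_size, n_per_group=7, trials=10_000, alpha=0.05):
        hits = 0
        for _ in range(trials):
            control = rng.normal(0.0, 1.0, n_per_group)
            cases = rng.normal(effect_size, 1.0, n_per_group)  # shifted by d std devs
            _, p = stats.ttest_ind(cases, control)
            hits += p < alpha
        return hits / trials

    print("power at d = 2.0:", detection_rate(2.0))  # roughly 0.9
    print("power at d = 0.3:", detection_rate(0.3))  # barely above the 5% false-positive rate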

As with any study, this result must be replicated. But just waving around the sample size as if every study can be like a live caller poll with n = 2,000 is not helpful.


Also, this idea that bigger is better with sample sizes can lead to problems on the other side, when we see people assuming an effect must be real because the sample size is so large. The problem is, sample size only helps you reduce sampling error, which is one of many possible sources of error. Most of the others are much more difficult to manage or even quantify. At some point it becomes false precision, because it turns out that the error you can't measure is vastly greater than the sampling error. Which in turn gets us into trouble with interpreting p-values. It gets us into a situation where the distinction between "probability of getting a result at least this extreme, assuming the null hypothesis" and "probability the alternative hypothesis is false" stops being pedantic hair-splitting and starts being a gaping chasm. I don't like getting into that situation, because, regardless of what we were all taught in undergrad, scientific practice still tends to lean toward the latter interpretation. (Except experimental physicists. You people are my heroes.)
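
A toy illustration of that point (invented numbers, nothing to do with this study): a small systematic measurement bias in one arm yields an arbitrarily impressive p-value once n is large, because the p-value only accounts for sampling error.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    n = 200_000
    bias = 0.02  # small non-sampling error, e.g. a miscalibrated instrument for one arm

    group_a = rng.normal(0.0, 1.0, n)
    group_b = rng.normal(0.0, 1.0, n) + bias  # same true mean, biased measurement

    _, p = stats.ttest_ind(group_a, group_b)
    print(f"p = {p:.2e}")  # tiny p-value even though the true group difference is zero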

For my part, the statistician in me rather likes methodologically clean controlled experiments with small sample sizes. You've got to be careful about how you define "methodologically clean", of course; statistical power matters. But they've probably led us down a lot fewer blind alleys (and, in the case of medical research, led to fewer unnecessary deaths) than all the slapdash large-sample cohort studies that were so popular in the '80s and '90s and that we trusted because of their size.


Diet studies can also fall into a similar trap.

Huge sample size, but all food intake is self-reported; or a tiny sample size where test subjects were locked into a chamber that measures all energy output from their body while being fed a carefully controlled diet.

The latter is super expensive, but you can be pretty confident of the results. On the flip side, it also misses any conditions that only present in a small % of the population.

You can see this with larger dietary studies where, out of 2 cohorts of 100 each doing different diets, 15 or 20% in each group do really well on some "extreme" diet (e.g. keto) but the group on average shows no unexpected results.

If your sample size is 5, it is quite possible none of your test subjects are going to be strong responders to, for example, keto.

So then the study headline comes out: "Keto doesn't work! Well-controlled expensive trial!"

Meanwhile the large cohort study releases results saying "on average Keto doesn't work".

But in reality, it works really well for some % of the population!

Some non-stimulant ADHD drugs have a similar problem. If a drug only works for 20% of the population, you need to be aware of that when doing the study design.
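
Back-of-the-envelope on that 20% figure (an assumed rate, purely illustrative): the chance that a small trial contains no strong responders at all falls off like (1 - rate)^n.

    responder_rate = 0.20  # assumed fraction of strong responders
    for n in (5, 10, 20, 100):
        p_none = (1 - responder_rate) ** n
        print(f"n = {n:3d}: chance of zero strong responders = {p_none:.1%}")
    # n = 5 gives about 33%, so roughly one small trial in three would
    # contain no strong responders at all.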


You seem to be implying that subgroup analysis never happens?

I guess I don't follow weight loss research closely, but I would be genuinely amazed that they don't do it, too, given how ubiquitous it is everywhere else in medical science. And the literature on ketogenic diets goes back over a century now, so it's hard to imagine nobody has done one. Could it be instead that people did do the subgroup analysis, but didn't find a success predictor that was useful for the purposes of establishing medical standards of care or public health policy? Or some other wrinkle? Or maybe people are still actively working on it but have yet to figure out anything quite so conclusive as we might wish? But that this nuance didn't make it into any of the science reporting or popular weight loss literature, because of course it didn't, details like that never do?

Disclaimer, I'm absolutely not here to trash keto diets in general. I have loved ones who've had great success with such a diet. My concern is more about the tendency for health science discussions to devolve into a partisan flag-waving contest where the first useless thing to get chucked out the window is a sober and nuanced reading of the entirety of the available body of evidence.


> Could it be instead that people did do the subgroup analysis, but didn't find a success predictor that was useful for the purposes of establishing medical standards of care or public health policy?

If we are all being generous with assumptions, this could very well be the reason.

I haven't seen much research on trying to predict which dietary interventions will be most effective on an individualized-treatment basis, but I also haven't kept up with the literature for five or six years.

Then again, the same promises were made for ADHD medicine, where there are now some early genetic studies showing how we could perhaps guide treatments, but the current standard of care remains throwing different pills at the patient and seeing which works best with the fewest side effects.

Of course dietary stuff is complicated due to epigenetics, environmental factors, and gut microbiomes.

That said, progress is being made, and the knowledge we have now is worlds different from the knowledge we had 20 years ago, but sadly it seems outcomes for weight loss are not improving.


That's a great point. If your experimental methodology is flawed, it doesn't matter how big your sample size is. A study like this lets you gather some compelling evidence that you may have a real effect. Then you can refine the technique. Autism is a very active area of research, so I suspect we'll see other groups attempt to replicate this study and adapt its techniques while the original authors refine the technique and get funding to perform larger studies.


Here is a deep neuroimaging study of 52 fetal humans and their brain maturation states.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8583264/

I recognize the Catch-22 that the diagnosis is not possible until several years after birth. But a prospective study of this sort is “in scope” at UCSD. They already have big MRI studies of kids with hundreds or even thousands of scans.



