> While we’re here, it’s worth talking about the MAPS MDMA-therapy trial itself. In May 2021, to much media fanfare, they published their Phase III clinical trial in Nature Medicine. The trial reported enormous, nearly unbelievable effect sizes: 0.91 of a standard deviation difference in score on a PTSD questionnaire between an MDMA and a placebo group (both groups got therapy). That’s the kind of effect medical researchers fantasise about.
> Are these effect sizes plausible? Nature Medicine also published two critical commentaries on the paper. The first one points out the issue we discussed above: that the blinding was almost certainly compromised in the study, since the control group got a completely inert placebo and the treatment group surely could tell whether they’d been given ecstasy. Expectations could have played a major role, and might explain at least some of the huge difference between groups (the commentary also makes the interesting point that researchers used to be required to collect data on which condition patients thought they were in, but since 2010 this is no longer the case. It might be time to bring back this rule). The second commentary asked for studies with more relevant control groups and longer follow-up periods to check the safety of the treatment. The MAPS researchers also responded, somewhat limply.
> The journal also published an entirely uncritical commentary co-authored by Imperial College London’s David Nutt. Remember him? He was Fired4Truth as a drugs adviser by the UK Labour government in 2009 for criticising government policy on drug classification; he’s now a major psychedelics researcher who co-authored the psilocybin trial we encountered above. The commentary calls MDMA “remarkable” three separate times in about 1,000 words (it’s true the effect sizes in the study are remarkable, but what if they’re due to expectancy bias?). It also says it’s likely that MDMA “will be an approved medication within a few years”.
> So there’s a final potential harm that stems from the breathless hyping and cheerleading of psychedelic treatments way beyond the evidence: it could mean that therapies—which come with their own rare but real dangers—are approved and rolled out to large numbers of patients when we need a lot more evidence that they really are beneficial in the long term.
It feels like they actually mostly overlook this study in favor of their thesis that psychedelic research is bogus. The Nature critiques are less than convincing; either way, they are doing another phase 3 trial, so we will know.
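To put the headline number in context: "0.91 of a standard deviation" is a standardized mean difference (Cohen's d), i.e. the between-group difference divided by the pooled spread of scores. Here's a minimal sketch of that calculation; the means, SDs and group sizes are placeholder numbers I picked so the output lands near 0.91, not figures from the paper.

```python
def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Standardized mean difference between two groups (Cohen's d, pooled SD)."""
    pooled_sd = (((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2)) ** 0.5
    return (mean_t - mean_c) / pooled_sd

# All numbers below are made up for illustration -- they are NOT the MAPS
# trial's actual summary statistics. mean_* is the average drop in PTSD
# questionnaire score, sd_* its spread, n_* the group size.
d = cohens_d(mean_t=24.0, sd_t=12.0, n_t=45, mean_c=13.0, sd_c=12.0, n_c=45)
print(round(d, 2))  # ~0.92 -- an effect roughly the size the paper reported
```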
My guess is that if this scientist had done psychedelics, he wouldn't have written this article.
> My guess is that if this scientist had done psychedelics, he wouldn't have written this article.
That was my thought too, though that also runs into the author's argument that psychedelic research is heavily biased by people who have tried it. On the other hand, I think people who have not tried psychedelics have never experienced how much of what they think of as the objective, rational mind is actually fairly fluid, and how much the referential frame of experience informs and influences what we'd consider objectivity.
I don’t know whether someone who has never taken psychedelics can adequately understand what it is like, unless they have experienced something similar without taking psychedelic substances (that is, spontaneous visionary experiences). There’s an intellectualization of what the person thinks the experience is about, standing in as a substitute for direct experience.
A physicist might study physical phenomena, but what they actually work with are models of those phenomena, which they then call “reality”. That isn’t reality; those are models of physical phenomena with a high degree of predictive ability.
Rather, if you want an example of someone who attempts to experience reality directly, we’re talking about Zen practitioners or one of those non-dual teachers. If you listen to them talk, they spend a lot of time saying, very plainly, that you don’t really experience reality directly. But people hearing it pretty much nod their heads and think that they do.
As far as oncology research goes, you are right: it is biased towards study participants with cancer. Not only are the _researchers_ themselves not experiencing cancer (unless they happen to get it), but what is _not_ being studied is what health and wholeness look like, and how to contextualize cancer that way. Instead, we’re getting to know a lot about the mechanisms of cancer and its various causal chains, and this limits our view to that of the body as a machine, with the assumption that by somehow understanding all the fiddly bits we would understand the whole. We won’t.
The author, Stuart, dismisses the entire study on two grounds: expectancy bias, and participants being able to tell whether they are in the treatment group or the control group.
What if this were applied to everything? "95% of participants preferred the meal of pizza to the one that was hard tack and water. However, participants might have just had a pre-existing bias that pizza was tasty, and they were able to identify whether they were eating pizza. No conclusions can be drawn as to whether pizza is actually tasty or whether this is simply a placebo effect caused by the participants' expectations."
Out of curiosity, is there anything specific about them that you think makes them weaker critiques?
> either way, they are doing another phase 3 trial, so we will know.
As long as the (potential?) weaknesses are not repeated. For example, if loss of blinding is indeed a significant issue and the second trial suffers from the same flaw, then I think it's not unreasonable to call into question the utility of that second trial.
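For what it's worth, here is a toy simulation of exactly that worry: if the blind is broken and expectations alone nudge scores in the arm that knows it got the drug, you can get a sizeable standardized difference with zero true drug effect. The size of the expectancy shift and the score spread are assumptions I picked for illustration; nothing here is an estimate from the actual trial.

```python
import random
import statistics

random.seed(0)

def simulated_effect_size(n_per_arm=45, true_drug_effect=0.0,
                          expectancy_shift=8.0, score_sd=12.0):
    """Toy model: both arms improve the same amount on average, but the
    unblinded treatment arm gets an extra boost purely from expectancy."""
    control = [random.gauss(13.0, score_sd) for _ in range(n_per_arm)]
    treatment = [random.gauss(13.0 + true_drug_effect + expectancy_shift, score_sd)
                 for _ in range(n_per_arm)]
    # Pooled within-group SD (groups are equal-sized, so a simple average works).
    pooled_sd = ((statistics.variance(control) + statistics.variance(treatment)) / 2) ** 0.5
    return (statistics.mean(treatment) - statistics.mean(control)) / pooled_sd

# Zero pharmacological effect, yet expectancy alone yields a 'large' effect size:
runs = [simulated_effect_size() for _ in range(500)]
print(round(statistics.mean(runs), 2))  # roughly 0.67 with these made-up numbers
```

That obviously doesn't show expectancy explains the MAPS result, only that a broken blind is capable of producing numbers in that general ballpark.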
I think these get a lot of traction because of the interesting prospect that, should this go through, someone feeling a little down or a little bored might be able to pop into their local Walgreens and get a one-night dose of OTC MDMA to liven up the boredom and sadness of their life.
It's an interesting consideration that I think is part of the excitement and buzz these articles generate. I think many people would experiment with psychedelics and other harder drugs if they knew the safety was very high and that use might come with other benefits, such as overcoming past emotional blockages or looking forward to tomorrow with brighter eyes.
I said all of that to say that I take these articles with a grain of salt. Some of these well-known drugs may very well have real, positive benefits for a select group of people with the corresponding problems, but I strongly doubt that any of them will be a panacea for the more universal ills; that is to say, I doubt casual sadness or unhappiness will ever have a treatment in pill form. These treatments will most likely be best reserved for people who have deep and abiding emotional and psychological scars that traditional holistic approaches ("whole-body", not new age woo woo) would be entirely unable to treat.
The article actually discusses that very paper; it's the passage quoted at the top of this thread.