Warning Signs in Experimental Design and Interpretation (2007) (norvig.com)
107 points by chollida1 on Jan 7, 2019 | 12 comments



> Lack of Double-Blind Studies

"We know there is a placebo effect wherein patients do better when they are told they are receiving a treatment: the patients' expectations play a role in their recovery. To make sure we are studying the effect of the treatment itself and not the patients' expectations, it is better to give all patients the same expectation. So we tell them, for example, "take this pill, it might be experimental drug X or it might be a sugar pill." The double-blind part is important because we don't want the experimenters to subconsciously tip off the subjects as to what group they are in, nor to treat one group differently than the other, nor to analyze the results differently."

One thing that always strikes me about double-blind placebo-based studies is that they often test a substance with a detectable physiological effect against an inert substance that has none.

This methodology seems fundamentally flawed. For a true test you would need to compare against a placebo that has a similar (or at least detectable) physiological effect but is not expected to produce any efficacious outcome. Otherwise the person getting the real drug feels the physiological effect and gets a placebo effect on top of it, while the person getting the inert placebo does not.
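
A toy simulation of the bias I mean (all the numbers are made up): if only the real drug produces a noticeable physiological effect, the treated group gets an extra expectation boost that the control group doesn't, and the drug looks better than it is:

  # Hypothetical simulation of unblinding bias; effect sizes are invented.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000
  true_drug_effect = 2.0   # assumed real benefit of the drug
  placebo_boost = 1.0      # assumed benefit from believing you got the drug

  # Inert placebo: treated patients feel side effects and become unblinded.
  treated = true_drug_effect + placebo_boost + rng.normal(0, 1, n)
  control = rng.normal(0, 1, n)
  print("inert placebo:", treated.mean() - control.mean())   # ~3.0, inflated

  # "Active" placebo: both groups feel *something*, so both get the boost.
  treated = true_drug_effect + placebo_boost + rng.normal(0, 1, n)
  control = placebo_boost + rng.normal(0, 1, n)
  print("active placebo:", treated.mean() - control.mean())  # ~2.0, the true effect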


There are two known phenomena related to this:

* "Unblinding," where participants are able to figure out whether they're taking the placebo vs. the real medication by noticing whether they experience any side effects.

* The "nocebo" effect, kind of a reverse placebo effect where participants in the control group experience false side effects because they believe they might be in the experimental group taking the real medication.


My understanding is that Ritalin is sometimes used instead of a sugar pill for exactly this reason.


It seems it would be unethical, for example, to intentionally make someone sick to mimic the side effects of the drug while delivering none of its expected benefits.


Yes. Yes it would. But comparing against baby aspirin might be worthwhile. It has some (safe?) effect beyond a sugar pill.

Remember, the patients don’t know each other, and don’t compare symptoms. Hopefully.


An expected physiological result can be something as mild as dry mouth or temporary diuresis. It doesn't have to go all the way to, e.g., making a person nauseated.


(2007)

Most successful prior posting: https://news.ycombinator.com/item?id=7598581


This is all good info; I'll add that a confidence interval on the effect size[0] is more informative than a p-value alone; there's a rough sketch of the distinction after the footnote. (This is subtly different from a confidence interval on the response in each group, which the article discusses under Warning Sign I5: taking p too seriously.)

[0] "It's the effect size, stupid" https://www.leeds.ac.uk/educol/documents/00002182.htm


I like this. But...

“As a further example he states that the difference in IQ between holders of the Ph.D. degree and 'typical college freshmen' is comparable to an effect size of 0.8.”

Boy, there is a lot to unpack there.
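
For scale (assuming the conventional IQ standard deviation of 15): an effect size of 0.8 means the group means differ by 0.8 pooled standard deviations, i.e. roughly 0.8 × 15 ≈ 12 IQ points.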


I slightly dislike that his opening paragraphs are wrong. The statement "The group with treatment X had significantly less disease (p = 1%)" does not mean "treatment X will prevent disease." It only means there was less disease in the treated group, with high statistical confidence that the difference is real. IOW, the study might be 99% sure it reduces the disease by only 5%.


I’m guessing English is not your first language. Nowhere does the author make the implication you state; in fact, the article asserts the contrary.


But Norvig is asserting that many people interpret the first statement as "it's almost sure treatment X will prevent disease in my case". It's an example of a case where common sense misleads us.
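
A quick simulation makes that concrete (the base rate and effect size are invented): even among results with p < 0.01, the probability that the drug actually works can sit well below 99% when most candidate drugs are duds.

  # Hypothetical base rates: only 10% of tested drugs truly work.
  import numpy as np
  from scipy import stats

  rng = np.random.default_rng(2)
  n_trials, n_patients = 2000, 100
  works = rng.random(n_trials) < 0.1
  true_effect = np.where(works, 0.5, 0.0)  # assumed effect when a drug works

  n_sig = n_sig_and_works = 0
  for i in range(n_trials):
      treated = rng.normal(true_effect[i], 1, n_patients)
      control = rng.normal(0, 1, n_patients)
      if stats.ttest_ind(treated, control).pvalue < 0.01:
          n_sig += 1
          n_sig_and_works += int(works[i])
  print(f"P(drug works | p < 0.01) ~ {n_sig_and_works / n_sig:.2f}")
  # Noticeably below 0.99 under these assumptions.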



