My Institutional Review Board Nightmare (slatestarcodex.com)
304 points by apsec112 on Aug 29, 2017 | 214 comments



The author doesn't seem to take the history of medical research ethics (e.g. the Tuskegee syphilis experiments [1]) seriously. Having a sophisticated understanding of the ethical risks of research is a huge part of doing good science, and this post certainly doesn't communicate a high level of sophistication. If the author thinks that ethics review is "Blindly trusting authority to make our ethical decisions for us", then he clearly doesn't understand what ethics review is or how it works.

IRBs have well-documented flaws (for a more comprehensive look, read Laura Stark's book on IRBs [2]) but "IRBs are bad because they say I can't do whatever I want even though it's obviously the right way to do it" is not one of them.

[1]: https://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment

[2]: http://press.uchicago.edu/ucp/books/book/chicago/B/bo1218257...


While I agree that the author is being a little facetious (with the constant Nazi references and such), I don't think he is arguing in favor of removing ethical considerations in research, or even arbitrarily weakening the constraints put on scientific studies.

But what purpose is served by a review board made up of people who can think for themselves, if their behavior is going to be indistinguishable from a program running through a checklist without any care for whether the checklist makes sense in that context?

Massive inefficiencies in bureaucracy are always hidden behind the fig leaf of consistency and accountability, but it's worthwhile to consider what aim they serve and at what cost.


>>Massive inefficiencies in bureaucracy are always hidden behind the fig leaf of consistency and accountability, but it's worthwhile to consider what aim they serve and at what cost.

This problem was largely solved by the development and acceptance of private IRBs. Going with private IRBs that meet twice a week, turn forms around quickly, and are willing to pre-qualify your work - at a cost, of course - is the road many researchers take now.


How is that solving the problem???


....how does it not?

If you are asking the question: "How does this private market solution solve the problem of government/education/medical bureaucracy being inefficient," then I guess it does not solve the root cause. But trying to solve IRB problems involving that three-headed hydra is not likely to succeed. Going around the system is a feasible workaround.


Implementing a terrible process faster at great cost is not the same thing as solving the problem, nor is it the best possible solution short of "fix all bureaucracy".


I wouldn't call my experiences with private IRBs terrible at all, honestly.


My comment does not conflict with that because by "process" I mean the mechanism purporting to increase subject safety, not your experience.


What basis is there for the assumption that private IRBs have a "terrible process" and do a worse job at protecting subject safety?


I agree that IRBs are not great -- the book cited above does a great job of unpacking how IRBs fail to live up to their promises, and this post covers some good additional examples. I also agree that the author isn't necessarily advocating against ethical review as such.

I'm more concerned with the author's insistence that he knew the right way to conduct his study ethically. Peer review is a core scientific principle-- we don't allow researchers to make autonomous judgments about the quality of their work.


I don't think he's advocating that he should be setting the rules or self-evaluating. He just wants the rules to be sensible and sensibly enforced. He has justification to complain if the modifications they requested violated safety policies, risked patient well-being or defeated the purpose of the study. Policy that doesn't add any benefit deserves to be questioned.

Not to mention that the questions were already being asked. He just wanted to ask them earlier to compare against the eventual diagnosis. It's not the kind of study that should have been abandoned in frustration after two years, as happened here.


I think for his next study he should do one into the mental health of those who interact with Institutional Review Boards. If nothing else, it would have a nice recursive effect on the authors.


So, an example of how the author doesn't understand the point of these requirements is the 'encryption' (anonymisation) process, coupled with his complaint that 'the files had to be kept together!'. The anonymisation process isn't about stolen files. It's about data leakage, and keeping identifying data out of result sets. If you're pulling an all-nighter to prep for the big talk tomorrow, for example, you're not going to accidentally include identifying data in your slides if it doesn't exist on your results page in the first place.

Edit: more important for the scientific process (rather than privacy) than 'data leakage' is 'data sharing', as yread points out below. Anonymised data can be quickly and safely shared, and others can run their analyses on your results. Non-anonymised data can't be shared. If you're interested in publishing a robust scientific paper, why would you be against opening your data for inspection?
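
To make that concrete, here's a minimal sketch of the encode-and-separate step. Every name, field, and file name below is made up for illustration:

    import csv
    import secrets

    # Hypothetical rows mixing identifying and clinical fields.
    rows = [
        {"name": "Jane Doe", "mrn": "12345", "screen_score": "7", "diagnosis": "bipolar"},
        {"name": "John Roe", "mrn": "67890", "screen_score": "2", "diagnosis": "depression"},
    ]

    key, blinded = [], []
    for row in rows:
        code = secrets.token_hex(4)  # random subject code, e.g. 'a3f19c02'
        key.append({"code": code, "name": row["name"], "mrn": row["mrn"]})
        blinded.append({"subject": code,
                        "screen_score": row["screen_score"],
                        "diagnosis": row["diagnosis"]})

    # The key file stays locked away; only the blinded file is ever open
    # while you make slides, run analyses, or share data.
    for path, fields, data in [("key.csv", ["code", "name", "mrn"], key),
                               ("blinded.csv", ["subject", "screen_score", "diagnosis"], blinded)]:
        with open(path, "w", newline="") as f:
            writer = csv.DictWriter(f, fieldnames=fields)
            writer.writeheader()
            writer.writerows(data)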

> Not to mention that the questions were already being asked. He just wanted to ask them earlier to compare against the eventual diagnosis. It's not the kind of study that should have been abandoned in frustration after two years, as happened here.

Then he could have just asked them and done his own informal study. Nothing is stopping the doctor from saying "are you happy, then sad?" on first meeting a patient. But if you want to do a formal, publishable study, then you should have all your ducks in a row. Make sure your independent variables are properly controlled, make sure any ethical issues have been externally vetted, so on and so forth.

While the IRB certainly had some annoying concerns, so much of this author's frustration simply wouldn't be there if he understood why those questions were being asked.


> make sure any ethical issues have been externally vetted, so on and so forth.

Sure, but the complaints of the IRB mentioned (and the auditor) seem to be far beyond ensuring practice is ethical. Instead, they seem to focus on following process only for the sake of process.

Why should the consent form have the title of the study? Why should the consent form contain a list of risks when there are none? What is wrong with having consent forms signed in pencil when pens aren't allowed? Why should the data integrity plan require periodic review (i.e. why should we have a data-integrity-plan integrity plan)? These are all indicative of a bureaucratic system that places too much emphasis on 'process', losing sight of 'outcome' in the end.


> These are all indicative of a bureaucratic system that places too much emphasis on 'process', losing sight of 'outcome' in the end.

Reminds me of something that Jeff Bezos (of Amazon.com) wrote in his 2016 Letter to Shareholders:

> Resist Proxies

> As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.

> A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.

https://www.amazon.com/p/feature/z6o9g6sysxur57t


Ethics committees are there to protect both the institution and the subjects of the study. One of those protections is avoiding misleading the subjects, unless absolutely necessary and beneficial. How many times have you seen people here on HN bitch about Company X's misleading marketing? It's exactly the same with human studies - people feel used and abused when they find out they were lied to. Similarly, jancsika below points out that the author is working on a patient population with literally paranoid people in it; they're not likely to respond well if they find out a questionnaire was for a different purpose than stated.

Sticking to a common set of rules and only deviating when there's very good reason is one way to help protect subjects. What's the 'outcome' here? A doctor wants to do a study. Why is that more important than the rights of the subjects? Yes, everyone who does a study thinks it's going to cure cancer and solve the national debt. They'll promise the moon in order to get their way. These processes are put in place to protect people against poorly-planned studies. And there's no way to know ahead of time that a study is 'trivial' - if you're working on humans, you need to be vetted. "But we already do this to patients anyway" is beside the point; if you let doctors bypass vetting because of that argument, you'd see all sorts of horrific stuff happening. Ethics committees didn't come about because bureaucracy invented them for the sake of it; they came about because people were being unknowingly tested on by medicos who promised that the study was 'beneficial for the common good'.

And what you find distressing is not what other people find distressing. Search for mncharity's comment elsewhere on this page, where people are distressed simply by being asked about viruses. Yeah, sure, that's not typical, but the counterpoint is: is the research beneficial enough to warrant causing distress to people who would otherwise have been left alone?

In short, this 'needless bureaucracy' is there to protect both the institutions and innocent people from researchers going 'rogue'.


No one is saying the vetting process should be done away with. They are saying it should be reformed to not be so god damn stupid.

There are like 50 wall-of-text posts in here that don't seem to understand that.


I think it's quite easy to see the benefit of the author's study and their frustration is completely understandable - however, I can't help but feel that much of their frustration was simply because the rules impeded their progress, not that the rules were actually useless.

My major gripe whenever there is a long piece that decries the bureaucracy of various regulatory boards is that the complaints tend to be about how the bureaucracy is a personal inconvenience. Some of the gripes I'll absolutely grant; pen versus pencil, and inflexibility about giving potentially violent persons a weapon, probably need some sort of leeway. But I think protective measures absolutely should be a brick wall - a surmountable one, sure, but only as a result of you actually trying a bit and demonstrating that your intended actions aren't going to do exactly what the regulation is trying to prevent. The burden of that demonstration should be on the researcher using human participants.

The idea of easily avoidable and mutable regulatory functions seems contradictory - that is, a researcher shouldn't be declaring what should and should not apply to them. This isn't fear that they're Hitler and going to inject people with nastiness, it's fear of the dumb mistakes that every human makes, and of our often poor ability to predict the outcome of certain actions. I get it - they want to help people and the regulations are inconvenient for them; but having gone through many IRB processes myself, it's not remotely insurmountable, much less anything the author listed.


> But what purpose is served by a review board made up of people who can think for themselves, if their behavior is going to be indistinguishable from a program running through a checklist without any care for whether the checklist makes sense in that context?

In general, most IRBs are there to help out, not to get in the way. It may feel like getting in the way, at the time, but they mostly are there to help.

For example: the wording of your consent forms. The FDA says that they need to be at an 8th grade reading level, +/- 0.2 grade levels. You can game MS Word to get a form to an 8.2 grade level but have it be 140+ pages, and a lot of researchers do that. This is especially prevalent in clinical drug trials, where many pharma companies try to make the consent form a contract filled with legalese. That egregious example is not typical, but it highlights the issue: a consent form has to be readable to actually function as consent for a person. This is where a decent IRB helps out and tells you to re-write the consent form into something that is not gibberish. (Side note: it turns out most newspapers are written at a 6th grade level.)
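
(For the curious: the "grade level" here is usually a mechanical readability score such as Flesch-Kincaid, which is exactly why it's gameable - the formula rewards short sentences and short words, and doesn't care how many pages there are. A rough sketch; the syllable counter is a deliberately naive heuristic, and real tools are more careful:)

    import re

    def syllables(word):
        # Crude heuristic: count runs of consecutive vowels.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def fk_grade(text):
        # Standard Flesch-Kincaid grade-level formula:
        # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
        sentences = max(1, len(re.findall(r"[.!?]+", text)))
        words = re.findall(r"[A-Za-z']+", text)
        if not words:
            return 0.0
        return (0.39 * len(words) / sentences
                + 11.8 * sum(syllables(w) for w in words) / len(words)
                - 15.59)

    # Short sentences and short words lower the score, no matter
    # how many pages of them the form contains.
    print(fk_grade("You may stop at any time. Your answers stay private."))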

Another example is international research. There was a case I heard of where a group was going to go collect HIV data in Tanzania. They were just going to go out and collect blood samples, with sexual health questions alongside. The part of Tanzania they were collecting from is rural and believes in very strange medical practices, like eating albino people or having sex with virgins to cure HIV (I forget exactly which). Balancing the local beliefs with data collection can be tricky in that case. Typically you would like a local IRB in that region, but there was none. So the US-based IRB decided that the researchers needed to go out to the religious and government centers in that part of Tanzania and ask them what to do, what questions they could ask, etc. This added a lot of time and expense to the study, spent in country before data collection, but the US IRB held firm. This was a good idea, as the local people had a lot of reservations and wanted to see all the data themselves before it was sent out of the country - so they could go off and kill women who were HIV-positive. If the US researchers had not found this out beforehand, a lot of deaths might have occurred. They subsequently left off a lot of questions and data gathering.

Researchers are human too; we forget things all the time and have a lot of pressures on us as well. IRBs should be there to help out and not be a road-block. If the IRB you are working with does turn out to be one, talk to other researchers and find out if they have the same issues. If you all do, go to the IRB and explain to them that they are being unreasonable; most of them do want to know when they are holding things up unnecessarily. If that persists, go to the hospital/university admin as a group and make the case. These advancements need to be triple-checked, but they still need to go through to begin with. IRBs are not evil - sclerotic maybe, but not evil. Work hard to get them to be better; we are all in this together.


So the original study planned on sharing non-anonymized data on who had HIV with the local populace? And the IRB prevented that?


Essentially, yes. The naivete of the team was staggering. They thought, at first, that the local people just wanted to see the data and have a copy of it, for pure curiosity's sake. Nope, murder.


Woah, I assumed I misread that. Just common sense/decency says you don't share who has HIV with anyone who does not need to know. True in the first world, especially true in the third world where stigmas can run even stronger.


Yes, this is why we have IRBs, for the exact reasons like this one. When dealing with health and medical stuff, the stakes are usually not this high, but they can be very easily.


I think you're basically making a bucketing error here. As Scott Alexander points out, the exact same things he wanted to do were already being widely done elsewhere to no ill effect with no such worrying about consent. Why then does it make sense to group it under "research" together with the Tuskegee syphilis experiment and conclude things about it from that experiment, when the two have nothing in common except being research? I think that's a mistake of inference. Like, why should we conclude that we need that sort of consent to administer a questionnaire for experimental purposes (i.e.: gaining generally useful knowledge), but that there are no substantial issues of consent when it’s for diagnostic purposes (i.e.: gaining information about that particular patient) instead? Wouldn't it make more sense to group this case with the case with similar risk profile but a different purpose, rather than a related purpose but an entirely different risk profile? Something being research isn't what causes problems.


> why should we conclude that we need that sort of consent to administer a questionnaire for experimental purposes [...], but that there are no substantial issues of consent when it’s for diagnostic purposes [...] instead?

A patient consenting to being treated doesn't mean they'd consent for that personal information to be used for any other purpose.


The interesting thing is that HN is usually a place where people are quite militant about their data not being used for purposes other than intended.

From the responses in this thread, it seems HNer's are actually fine with their data being used for secret reasons without their consent, as long as it means less bureaucracy. "How dare you use my data for your own purposes... unless it helps you avoid an inconveniencing committee, then go for it!"


Maybe it's simply a matter of from whose point of view the story is being told. "They used my data without asking" vs "they won't let me use this data that I already have anyway".

It helps that it's painted as the poor resourceless scientist vs the powerful abusive entity (IRB). Rather than the poor resourceless patient vs the powerful abusive entity (hospital and its staff).


> Like, why should we conclude that we need that sort of consent to administer a questionnaire for experimental purposes (i.e.: gaining generally useful knowledge), but that there are no substantial issues of consent when it’s for diagnostic purposes (i.e.: gaining information about that particular patient) instead?

1. Researcher incentives. Data => publications => prestige. If the procedure actually contains some inherent risk (e.g., a surgery) then this mismatch between the priorities of the physician and the priorities of the researcher matters. Even (actually, especially!!!) if the physician and the researcher are the same person.

2. Publication. De-anonymization is a real risk even if you're careful to only talk about patient #N in the publication (see the linkage sketch after point 3).

3. Patient preference. Just because I allow my hosting provider to access my servers for the purpose of maintenance doesn't mean I'm OK with them accessing my servers for the purpose of surveillance or to read my personal emails. Purpose matters. Some patients won't want data about them used in the context of a scientific study. Patients should and do have the right to insist on that preference. Frankly, the presumption otherwise is exactly why IRBs exist.
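
(On point 2: a fully made-up sketch of the classic linkage attack. An "anonymous" study row plus a public record that shares a few quasi-identifiers is often enough to re-identify someone:)

    # All data here is invented. The study row carries no name, but
    # zip + birth year + sex matches exactly one voter-roll entry.
    study = [
        {"zip": "02138", "birth_year": 1962, "sex": "F", "diagnosis": "bipolar"},
    ]
    voter_roll = [
        {"name": "Jane Doe", "zip": "02138", "birth_year": 1962, "sex": "F"},
        {"name": "John Roe", "zip": "02139", "birth_year": 1975, "sex": "M"},
    ]

    for row in study:
        matches = [v for v in voter_roll
                   if (v["zip"], v["birth_year"], v["sex"])
                   == (row["zip"], row["birth_year"], row["sex"])]
        if len(matches) == 1:
            print(matches[0]["name"], "->", row["diagnosis"])  # Jane Doe -> bipolar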

This story sounds stupid because -- aside from de-anonymization or data leaks -- there are actually no substantive risks associated with this study. Change "questionnaire" to "open heart surgery" and all of this process starts to make a lot more sense.

FWIW I'm not saying IRBs aren't completely overbearing when it comes to benign studies. They totally are. But that doesn't mean these processes aren't well-justified in other circumstances.

(Also, a lot of this pain is completely avoidable if you learn and follow the rules, which isn't actually nearly as hard as this post makes it out to be IME.)


"The author doesn't seem to take the history of medical research ethics (e.g. the Tuskegee syphilis experiments [1]) seriously."

This is the same dynamic that happened in, say, the 1950s with Communism. At that time, the Soviet Union was legitimately scary. They had the world's largest army, an arsenal of nuclear weapons, and were led by a power-mad dictator. It was certainly a wise move to defend against them. But eventually, Communism became a catch-all justification that could be used for any rule or policy, no matter how silly it was. Why, for example, did we need the words "under God" in the Pledge of Allegiance? Because of communism, even though it didn't help the Western military position at all. Likewise, any challenge to these policies could be dismissed as "naive", "uninformed", or "not taking the threat seriously".


The best thing about this is that we recreated this mentality when dealing with terrorism. The current brown scare that we are living through is almost exactly the same as the red scare of the 1950s.


The IRB process he described did not have a sophisticated understanding of the ethical risks of research. It imposed arbitrary and useless rules on an experiment with few if any risks.


IRB review procedures are flawed, I agree. But ethical review by a third party is enormously important for research with human subjects. I'm alarmed by the notion of researchers deciding unilaterally that a study has only "nonexistent risks" -- that's a recipe for disastrously unethical scientific conduct.


The point is that the decision should not have been unilateral - the author and his colleague had a very good case for their position, based on the fact that they were not asking anything that was not being asked anyway, and not recording anything that was not being recorded anyway. It was mindless dogma on the part of the IRB that made it unilateral. Meanwhile, a practice with a real potential for risk went unstudied.

You might argue that there was risk to the patients in the one part of the procedure that was different than what they would experience anyway - the actual asking for consent - but if so, what would you propose doing about it?

You have made a straw man out of this one case, and are defending it with all the dogma displayed by this IRB.


I don't think the author was proposing getting rid of oversight completely, just that said oversight should, you know, make sense.


I don't think the author is in any way arguing that research should be completely without oversight. Just that current oversight is uselessly burdensome.


I'm deeply disturbed to see comments here defending this. The top comment no less! At every single step the things the author went through were absolutely ludicrous and indefensible. Actual real people are getting hurt by the IRB. This study alone could have saved thousands of people from getting misdiagnosed as bipolar. God knows how many tens of thousands of other similar studies have been stopped by this bureaucracy.


> At every single step the things the author went through were absolutely ludicrous and indefensible.

I disagree. Even the one that he makes sound the most ludicrous ("IRB required pens, but we were only allowed to use pencils") can be rephrased as: "Your hospital had reasons to be concerned that patients might stab themselves or someone else with a pen, yet you don't even think of giving them pencils as any sort of risk".


The hospital allowed them to have pencils and apparently had no issue with it. Presumably patients fill out forms with pencils all the time. Even the IRB failed to complain or notice this "danger". They never complained that pencils were too dangerous, just that it didn't meet the arbitrary signature requirement that it be a pen.

And you say this is the most ludicrous requirement, but it's actually the least. After all it's not entirely the IRB's fault, they didn't ban pens in the hospital.

But it demonstrates exactly what is wrong with bureaucracy. Maybe one bureaucracy with one set of arbitrary rules could be tolerated. But once you have two interacting, they can create contradictory and incompatible rules. One bureaucracy bans pens, the other bans pencils, and you end up with a world with no writing implements at all.

I'm sure the regulation sounded entirely reasonable at the time. Someone had to put together a set of rules on how to get consent, and they thought pens were the proper way of writing a signature, because banks require you to sign checks in pen - because long ago people would erase checks written in pencil and commit check fraud. At no point did they ever consider the history of that rule or that it doesn't make much sense applied to consent forms. At no point did they ever consider that there might be a hospital somewhere that bans pens for whatever reason.

And I can't blame them, why would they? You can't anticipate every possible edge case. This is a website made up of programmers, we should know that better than anyone. And yet at no point did anyone with common sense come along and make an exemption for that rule. When rules become fixed and inflexible they do things that weren't intended by the rule writers.

And every single thing in the article is like that. Some rule that might not be so bad or might make sense in isolation. But combine it with 10,000 other rules, and the total weight becomes overwhelming. And as a result you get a bunch of people misdiagnosed with bipolar disorder, and god knows what else, since doctors can't do the research necessary to find out.


> At no point did they ever consider the history of that rule or that it doesn't make much sense applied to consent forms.

Or maybe they did?

> At no point did they ever consider that there might be a hospital somewhere that bans pens for whatever reason.

IRBs are part of each research institution. The unspoken implication in OP's story is that the psychiatry department of his hospital had never conducted research with humans before him. At least not any research that required consent. If they had, they would have faced the need for pens. "Requiring IRB review and a consent form" is almost the opposite of an edge case for a research institution.

> Some rule that might not be so bad or might make sense in isolation. But combine it with 10,000 other rules, and the total weight becomes overwhelming.

I can't really feel that steps like being forced to blind your data, or to store that data somewhere it can't be read by random people, are overwhelming, nor ludicrous, nor indefensible. I understand that learning by hitting walls is frustrating, but those things are proper experimental procedure. Maybe his professors should have had a lecture about them (mine did). Or maybe it was the purpose of that video that he thought was a waste of his time because he's not a Nazi.

> Even the IRB failed to complain or notice this "danger". They never complained that pencils were too dangerous

If I'm reviewing that application and on the risks section I see "paper cuts lol", and the applicant then asks me to allow pencil signatures because pens at his department are a risk, I would conclude he's not taking any of it seriously enough.


Just playing devil's advocate, what is the possible danger of accepting a consent form with a signature in pencil? Is someone going to erase a signature from the consent form? (And how exactly is that a potential risk or danger?)

I'm pretty sure that it's not any harder to forge a signature when it's made using pen or pencil. It's just a mark that says "I agree." It's not a bank form where the validity of the amount tendered could be called into question.


I could imagine this:

"Oops, I forgot about presenting my subjects with the consent form and now I've got 100 results collected during a whole year of work.

I definitely don't want to throw all of that work away... But if I contact them and make them sign the form now using pencil, I could erase the date and change it to each subject's day of experiment. Nobody would notice."

Pen raises that barrier. I agree that against an adversary fully committed to fraud it doesn't matter. But that threat model is exceptionally rare, and an IRB can do little about it anyway (e.g. the whole application could be made up, the experiment consisting of something completely different). The realistic threat they fight against is experimenters who don't know better, and think there'd be no harm in their actions.

And this is just me coming up with a possibility in a few minutes. Reality tends to be richer than a single person's imagination.


Some researcher who paid $40 to 100 people over the course of a year is going to go back and contact all of those same people again, to sign a release form, and they all have to use pencils... and nobody is going to call him out on it and ask why is it important that I sign this form with a pencil?

The fact that I actually agree with you, and think this might work in fact inside of any hospital in the US, says more about the byzantine complexity and scary state of our medical system than anything else.

Because I could totally see 100 people being told they have to sign a thing, while in a hospital, with absolutely no good reason given, ... and all 100 just signing it after some variable x*N amount of huffing, then get on with their day.


That was not the IRB's objection. I don't know where you're getting that (except as a devil's advocate position you thought of?). They were objecting to him using something other than pen (even though pencils were in common use for these very patients at that hospital with no IRB objection), not insisting that he use something safer than both pens and pencils.


I didn't say it was their objection.


>>>Even the one that he makes sound the most ludicrous ("IRB required pens, but we were only allowed to use pencils") can be rephrased as:

It looked like you were giving an alternate phrasing of the IRB's actual argument, not a separate one that also argues in favor of disallowing pencils (as well as the pens they had no problem with).


Ah, I see how I worded it confusingly.


>I'm deeply disturbed to see comments here defending this

Think about the experiences and opinions that are common to the people on HN.

A bureaucracy that has lots of rules to cover every eventuality, and that interprets those rules as if it were a computer executing code, isn't an intolerable concept to a group that can be generalized as "tech workers in CA."


Did you read the post? There were multiple ridiculously stupid things the IRB insisted on that had no impact on safety and were pure bureaucratic BS: the insistence on blinding patients' names with the blinding instrument right next to the blinded data, the signing in pen only, the person in the Research office having to do the course when they did not do research. All of this for a study that involved taking notes on things the doctors were going to be doing anyway, which posed no risks to anyone.


> the insistence on blinding patients' names with the blinding instrument right next to the blinded data

The requirement wasn't that the code should be next to the data, if I understood it correctly; rather that it be blinded and stored safely. Storing them together was the least-effort way he found of passing the requirement, partially defeating its purpose.

So if anything that's an argument for adding even more requirements to the IRB process in his hospital. Because grad students that feel sufficiently aggravated will do things like that.


He most likely has sufficient training and knowledge about the ethics, given his profession. I'm curious what credentials you have that might make you more qualified to speak on the ethics of medical research, but in any case I think the point he makes is that it's possible to get so caught up in the letter of the law that you run roughshod over the spirit of protecting people that brought about the laws in the first place. This often seems to be the case with concerns about data sharing and personal privacy. If you read his whole piece, he discusses this, and you can extrapolate that as a result of the IRB stonewalling, well-intentioned as it presumably was, many patients are still suffering not only from violations of the very privacy the IRB was concerned about; those same patients may also have erroneous and stigmatizing diagnoses included in the information that's being freely shared. The IRB in this case is doing damage by preventing research that might address this. Pinker's made some worthwhile comments on this if you're interested in the topic.

You probably will get downvoted because you come across as being entirely dismissive of expertise or thinking that is not your own. I think your general point about the importance of oversight isn't lost on anyone.


> He most likely has sufficient training and knowledge about the ethics given his profession.

Sure but the mockery in that post suggests he either doesn't care about them or thinks they're stupid.

Also do we really have to point out that just because someone is a doctor doesn't mean they have any authority when it comes to medical ethics? I mean, there's hundreds of cases amply demonstrating this even just this decade.


If you don't think irb's are fitting targets for such mockery I'm inclined to be strongly suspicious of your claims to have had sufficient dealings with them. They are the DMV and IRS of research rolled into one. I've never met anyone who disagreed with that, even amongst those who served on them, critically important though they may be.


> strongly suspicious of your claims to have had sufficient dealings with them

You definitely should be since I have made precisely zero of those claims on this website or any other.


> If the author thinks that ethics review is "Blindly trusting authority to make our ethical decisions for us", then he clearly doesn't understand what ethics review is or how it works.

What is the basis for this statement? The author states that there is no oversight or governing body for the IRB. Is this incorrect (and if so, who is it)? If not, then the author's criticism seems apt.


Not the parent, but I reached that same conclusion.

You're not trusting them blindly, because any and every ethical protection you decide your procedure needs will be there - in addition to the ones the IRB thinks need to be there too.

When you implement a workplace safety policy in your company, making it comply with OSHA regulations isn't "blindly trusting authority to decide what is safe". If you think a practice is unsafe, yet OSHA thinks it's safe enough, they're not gonna prevent you from taking more precautions.

Same deal with code reviews. You aren't blindly trusting your colleague to decide what is bad or good code. You're adding their polish to yours.


Exactly! As someone working in medical research data sharing, I am sometimes stopped from doing things the simple way by our IRB, but I much prefer them being there and saying this is ok and this isn't, versus just doing it the best way I can only to miss a detail and then be sued/charged with mishandling information.

I don't understand how a medical doctor can write such a simplistic post. Of course your consent forms need to mention that you will access medical data if you look at diagnoses, even though they are your patients! You have access to all kinds of data for treating the patient (so-called primary use), but you should have the absolute minimal set of attributes for research (secondary use). Who judges whether these attributes are really needed? The IRB!

Of course you need to have your paperwork in order. This is not only your paperwork but also the hospital's! If you fuck up, the newspapers are gonna carry headlines: doctors in this hospital leaked patients' data. Do you think patients will want to come to that hospital?

Of course your investigators need to go through training on research ethics! In fact you should've put them through more training so that they fill in the damn consent forms properly!

Of course you need to separate personally identifiable information from the actual research data and encode patient identifiers. Ever heard of publishing your data next to your research? If someone asks for the data from your study and you forget to scrub it, you leak data. Encoding it and having it in two separate folders in one cabinet makes data sharing simpler. It protects you from making a stupid error.

Of course they can be next to each other in the cabinet. That cabinet is fucking locked! And it's in a hospital next to other private data. The risk is with publishing it!
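
The scrub step before sharing can even be automated. A tiny sketch, assuming a key file that maps subject codes to identifiers and a blinded data file; the file names are hypothetical:

    import csv

    # Collect every identifying value from the (locked-away) key file.
    with open("key.csv", newline="") as f:
        identifiers = {value for row in csv.DictReader(f)
                       for field, value in row.items() if field != "code"}

    # Refuse to share the dataset if any identifying value leaked into it.
    with open("blinded.csv", newline="") as f:
        leaked = [value for row in csv.DictReader(f)
                  for value in row.values() if value in identifiers]

    if leaked:
        raise SystemExit("refusing to share, identifying values found: %s" % leaked)
    print("no identifiers found, ok to share")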

Of course you need to fill in a New Study Application and describe your study design and consent. The IRB judges research studies; how would they do that without it? You need to describe that stuff when asking for a grant anyway, and most studies are funded by grants, so for most people it's no extra work.

Of course you need a monitoring plan and study meetings to monitor problems. If you had one, you would have noticed that the newbies were doing a shitty job! Them doing a shitty job is not only bad for you but, much more importantly, it wastes patients' time! Patients didn't come to the hospital to improve your credentials!

The fact that there is some less-than-ideal stuff already in use doesn't mean that your new study can be shit. That's like being angry about your pull request not getting accepted because your indentation is all over the place and saying "but your indentation is already inconsistent". If there are style guidelines, that's the way to do it. New code needs to follow them. Old stuff might get fixed or not, but new stuff should be done according to guidelines.


I think you make his point. You work in medical research, so all of these hoops to jump through are a real benefit to you. They keep you from being sued and act as moats that protect your livelihood from external disruption. Do the hoops protect the participants in the research? Of course. But at what cost to the participants and at what cost to less well equipped researchers and at what cost to science itself? It's good that you won't be sued, and your training in filling in the various forms is commendable, but maybe there's another side to things you're too inculcated to see clearly.


> They keep you from being sued [but] at what cost to the participants [...]?

The people suing you would be the participants, because you would have already harmed them. Not getting sued by them is just a derived benefit of the primary goal of protecting them from you. And the protections extend well beyond what a patient might bother suing you for: it's an ethical committee after all.

> at what cost to less well equipped researchers and at what cost to science itself?

Completely secondary considerations, for very good reasons. Nazis weren't the only ones to harm patients; so did well-intentioned scientists who thought of the greater good of science itself, and grad students who lacked resources or equipment.


[flagged]


This is the sort of claim that requires a link.


The initial meeting with the IRB where they proposed ridiculous and irrational changes to the consent forms was strikingly reminiscent of every conversation I've had in a third world country where someone was asking me for a bribe, but couldn't say it directly.


I've noticed how often big projects will allocate something like 5% of their budget for a few things like 'bioethicists' and 'community' grants; you never seem to hear about what they produce or what important findings they make, but the money does get spent...


Money always gets spent. It is in its nature.


I thought I remembered a proposal to create some new categorical exemptions from IRB review, one of which was somehow related to studies where you just talk to people. (I've been interviewed by several social scientists and very often their "protocols", consisting exclusively of informal interviews, had clearly been through extensive IRB reviews, which felt super-odd because journalists wouldn't need any ethical review at all in order to carry out exactly the same interview in exactly the same way.)

On the other hand, you clearly can harm people in various ways by or as a result of interviewing them, for example by breaching confidentiality, by being forced to breach confidentiality, or by making them feel bad about themselves by insulting them during the interview or just by bringing up painful and distressing memories. And also it's hard for researchers who aren't doctors to extend the same level of legal protection to what their research subjects tell them:

https://www.socialsciencespace.com/2017/01/social-science-ne...

So maybe one idea would be to create a categorical IRB exemption for studies that combine non-invasive observations, or existing datasets, with interviews not reasonably expected to be distressing to the subjects, and conducted subject to otherwise existing norms on patient confidentiality and medical privacy? (recognizing that limited legal protection for that confidentiality still poses problems for some studies)

(I'm certainly happy to apply the Chesterton's fence principle and learn more about the history of unethical experimentation before seriously advocating this.)


> interviews not reasonably expected to be distressing to the subjects

I did some guerrilla street usability testing of animated science education video fragments.[1] About the size of objects - sort of like Powers of Ten. During red lights at busy street crossings.

Even with a very small test population, I saw people distressed by surprising things.

Mention of millimeters reminded an elderly Brit of unpleasant childhood experiences when learning metric.

A college student expressed distress over a character breaking the head off an (enlarged to arm-sized) T4 bacteriophage.

I had someone run away, across the street, saying "how could you show me something so disgusting!", following a quiet background sound effect, a child's hacking cough, accompanying the word "virus". When I later stopped the video immediately after it to get feedback, most people were "what cough?".

Many people were variously distressed by the mention or discussion of viruses and bacteria. Which I didn't expect, but in retrospect seems unsurprising - for many, they have strong and negative associations. They get very bad press. Think "fun story about viruses!" having the emotional flavor of "fun story about genocide!".

So at this point, I'm unclear on what can reasonably be expected to not distress people.

"How about this nice weather we've been having?"... "Houston has been bringing back painful memories of my family losing its home and business to flooding. :(" Somewhat tongue in cheek, but if you are aiming for do no harm...?

[1] There are some bits of the test videos in the "How to remember sizes" section of http://www.clarifyscience.info/part/Atoms .


I'm not sure if it's politically acceptable to say this these days, but maybe it's ok if people experience discomfort sometimes. It's ok to see icky things, or to be 'forced' to remember an unpleasant memory. It's not the responsibility of the people around you to protect your feelings and your bubble.

There is often more value in science existing than in some marginal perceived harms incurred from a handful of people getting interviewed. Your emotional wellbeing is your responsibility, not the responsibility of the people around you.

And on a personal level, we don't grow from being comfortable. The opposite - we grow through discomfort. We grow through facing our demons and realising that they don't actually kill us.


I dunno about you but I think myoviruses look creepy as shit. https://i.pinimg.com/236x/cb/e0/59/cbe0591ea18e0afa4686bd3f4...


Creepy? The flailing sticky legs that glom on to you? The teeth that punch a hole in your side? The high-pressure vessel driving high-speed injection? Which only gets the DNA string half way in, so it squirms the rest of the way in, ratcheting as molecules attach and begin transcribing your corruption? The minute from bite, to being doomed?

As you frolic in the summer waves, every cup of seawater is a war zone, filled with massive and deadly warfare between bacteria and viruses. Ten billion combatants. Bacteria shedding extracellular vesicles like anti-missile flares. Mass dumping of chemical weapons. Suicidal sacrifices. Exploding victims. Two-day bacterial survival: 50%. And most of them haven't been cultured, haven't been sequenced - we haven't even given them names.


The problem is "reasonably expected." For an recent example consider the 70.000 Tinder (?) profiles now located on the piratebay. The guy who did that was likely preoccupied with questions like which scraper to use, and just never thought about "edge cases" like homosexuals in Saudi Arabia. Clearly someone is not reasonable here, but is this a reasonable oversight on the researcher's part?


Other issues also still apply even when there is no physical risk. For example, vulnerable populations like prisoners or children or your students may feel that they are (and may really be) forced to participate, that they can't say no. Some may be unable to give proper consent, such as weakened hospital patients on morphine or severe psychiatric patients.

> journalists wouldn't need any ethical review at all in order to carry out exactly the same interview in exactly the same way.)

Journalists have codes of ethics and laws they are subject to.


Here's a famous example of a "just talk" experiment that went wrong (it literally created the Unabomber):

https://www.theatlantic.com/magazine/archive/2000/06/harvard...


From the article: "Murray subjected his unwitting students, including Kaczynski, to intensive interrogation — what Murray himself called 'vehement, sweeping, and personally abusive' attacks, assaulting his subjects’ egos and most-cherished ideals and beliefs."

I guess that's technically "just talk" but it's not what most people would think of as "just talk".


Didn't they dose him with LSD?


> which felt super-odd because journalists wouldn't need any ethical review at all in order to carry out exactly the same interview in exactly the same way.

Journalists don't have a history of injecting you with syphilis just to see what happens, though.


Every large group will have some people who commit horrible acts, including the media. "Hate radio", for example, was a major contributor to the Rwandan genocide. https://en.wikipedia.org/wiki/Radio_T%C3%A9l%C3%A9vision_Lib...


People who are intentionally committing a genocide are out of scope for IRBs, since they are already refusing any kind of oversight because they know what they are doing is wrong.


Neither do anthropologists or computer scientists, who are both now subject to IRB review in universities.


Computer scientists will do stuff like expose people in repressive regimes to internet censors (http://conferences.sigcomm.org/sigcomm/2015/pdf/papers/p653....) or hack your Facebook account and slurp all your data (http://www2013.wwwconference.org/companion/p751.pdf)

Though ENCORE was done with IRB approval, the community is mostly unsure how that was possible...


Thanks for the examples. I'd agree that they show that computer science research is capable of harming people.

Edit: although I don't think they change my intuition that it's strange that high-risk invasive procedures that people expected to cause grave injury are dealt with by the same oversight mechanism as interviews (even though I'm very convinced of the ethical importance of strong confidentiality protections for interviews).

Second edit: I'm also aware that IRBs aren't only inspired by Tuskegee and Nazi experiments, but also by stuff like the Milgram and Zimbardo experiments which didn't involve invasive interventions.


In IRBs that I've participated in, low-risk interviews can go through expedited review, and though that still happens under the IRB, it's far from the same process as something involving, say, injections, imaging, gathering information about potential criminal involvement, or deception.

Though, every IRB is different. Which is another problem!


Who has ever done that?


As mentioned elsewhere in this thread, https://en.wikipedia.org/wiki/Tuskegee_syphilis_experiment is one of the most important parts of the history of IRBs.


Tuskegee syphilis experiment, ok. However, as bad as that was, no one was injected with syphilis.


I thought I remembered that the Public Health Service intentionally infected people in order to see what would happen, but Wikipedia seems to say that they intentionally withheld treatment from people who were already infected and deceived them about this, rather than causing the infections themselves (although some of the study subjects spread the disease to other people during the course of the study, which were new infections that could have been prevented by treatment).


I believe a modern design for this study would compare different kinds of treatment (but not the elimination of treatment... yikes).


Modern policy for medical research is that everyone gets a best available known treatment, and some people also get an experimental unknown treatment.


There must be cases where this isn't practical. E.g., if the normal treatment for greyscale is to cut off the arm before it spreads, how could you try any alternative treatment?


It isn't, and the answer is "you can't get good data". A good example would be immunotherapy for cancers, which has to be tested alongside treatments with adverse impact on immune function.


Isn't the net result similar anyway?

1. People who should have been treated weren't.

2. As a result of #1, more people would have been subsequently infected.

Additionally:

3. Distrust in healthcare is engendered amongst a marginalized population, with potential to lead to further incidences of undiagnosed ailments.

The net result is similar, more people are infected than would otherwise have been. Just because the infection mechanism is less obvious doesn't necessarily make it less bad.


I can't agree. The author's wounds at every stage in the narrative were self-inflicted.

(1) The study title doesn't work at the top of the form? Change it! Surely there is some other arrangement of English words that would satisfy both the author and IRB.

(2) They leave the risk section blank because minimal risk, and then complain about the requirement for pens because the patients might stab themselves if given pens. Looks to me like they didn't spend much effort thinking about risks to the patient. If they had just mentioned the pen stabby thing as a risk factor, and given "don't use pens" as a risk minimization plan, they wouldn't have had a problem.

(3) They poopoo the training because Hitler, and then get upset when they unknowingly violate a personnel rule that is most definitely covered in any decent IRB/GCP training.

I could go on, but every new thing I enumerate makes me angrier. Life is too short. Look. IRB applications suck, but human experimentation is serious business.

The rules aren't there to make the study easy; they are there to (a) look out for the patient (respect), (b) evaluate the risks and benefits of the study (beneficence), and (c) ensure that the benefits are distributed to the group bearing the risk (justice).


From reading the post it's not at all clear to me that 2 of the 3 goals you list were accomplished by the rules.

A) Instead of actually looking out for the patient, a list of arbitrary rules was imposed.

B) The risk (minimal) was not properly evaluated but instead massively overstated.

Given this failure of the process to meet said goals it seems perfectly reasonable to question the process.


(A) The rules are not arbitrary. They are the product of much careful deliberation.

https://en.wikipedia.org/wiki/Belmont_Report

(B) Is experimentation on suicidal patients really minimal risk? Maybe, maybe not. But failure of the investigator to show they have thought about it is a huge red flag.


(A) They were arbitrarily imposed in this case. Just because a rule is a good idea doesn't mean it is a good idea all the time.

(B) We're not talking about injecting the patients with some experimental drug here. We're talking about asking them questions that they were already being asked anyway. The lack of risk is self-evident to anyone who considers the question.


> The lack of risk is self-evident to anyone who considers the question.

Even the author disagrees with you (and, oddly, himself):

From the article:

> Also, psychiatric patients are sometimes…how can I put this nicely?…a little paranoid. Sometimes you can offer them breakfast and they’ll accuse you of trying to poison them. I had no illusions that I would get every single patient to consent to this study, but I felt like I could at least avoid handing them a paper saying “BY THE WAY, THIS STUDY IS FULL OF RISKS”.

If merely offering breakfast has a reasonable chance of getting re-interpreted as offering poison, offering a consent form for a test the purpose of which cannot be revealed before it is given certainly has a nonzero chance of triggering some kind of similar episode. So the risk of having such patients sign a consent form isn't zero, as the author somehow implies everywhere except this paragraph. Thus it should have been written into the risks section.


I'm concerned you conflate two things. A paranoid psychiatric patient may see a world full of overwhelming risk behind everything. It does not mean that risk exists. It does mean, however, that confirming their paranoia, or seeming to, would cause a panic. We're concerned about the panic because it is unjustified. A good procedure would avoid this unjustified panic. The IRB requires a procedure that doesn't avoid it. In this way, the IRB failed to protect the interest of the patient.


> The IRB requires a procedure that doesn't avoid it

The IRB requires getting informed consent from the patient. This is not unreasonable. In fact, IMO, experimenting on people and then publishing the results of those experiments without prior informed consent is grossly unreasonable and creepy.

Parent's point is that the author concedes that in a psych ward context, even asking for consent carries serious risks. I'm not sure if this is true because I have no personal experience, but the author seems to believe it is true.

Look, it sucks that doing any research in a psych ward carries substantial risk. But that doesn't mean that you can perform experiments on psych patients without their informed consent. Even if you think that the experiment is NBD. Guess what? You're not the patient and it's not your medical data, so it's not your call. Insisting otherwise is incredibly disrespectful and dehumanizing toward psych ward patients. They are humans, they have rights, and an inconvenienced researcher is not a reasonable justification for nullifying those rights.

Are there aspects of IRBs that are infuriating and unnecessary? Absolutely. But requiring informed consent before medical data is used in a non-treatment context is -- and should be -- a basic right afforded to every patient. Even psych patients.

I hate IRBs, especially for questionnaires, and especially the stupid training. But posts like yours convince me that both are sadly necessary.


(A) This is kind of a red herring though. The rules are not uniform. Sick patients require more protections than healthy adults. Placing an implant in someone's leg requires a more comprehensive risk minimization plan than asking someone to look at the ground for 5 seconds and then read a flash card.

(B) Is the lack of risk really self-evident? To everyone? Or just to you? Thus the question who should decide? And by what process do they decide?


As for your B point, if we're going to play "but what if" I demand we play it consistently. What is the risk of this study not being performed? Is it self-evident that we are better off in a world where those concrete procedures killed this study?

This is one of the core flaws in bureaucracy: looking at the putative benefits of a policy without considering the costs. No one really denies that some oversight is needed; the question is, is this oversight needed?


"But we cannot be blamed for a different eventuality, like saving lives. It didn't happen, and its our job to make sure that it doesn't, unless policy is adhered to the letter."


I think perhaps we are talking past each other. I certainly agree that there should be rules for conducting research on human subjects and that there should be a process for independent review of said research.

I'm just saying that in this case the people tasked with doing the reviewing did a really bad job. And it seems like, from other things I've read, this sort of bad job isn't exactly uncommon. It would be great if the people in charge of this stuff could reform the system so that IRBs did a bad job less often.


> human experimentation is serious business.

Sometimes. When you just ask a few simple questions, it clearly isn't.

Your list of the benefits of IRBs looks valid. But without a corresponding set of costs to weigh them against, it's meaningless.


Well, a few simple questions about deep-seated emotional trauma (e.g. molestation) or embarrassing or taboo subjects can present risks to patients. So, like you say: sometimes.

The question is who decides, and how? Surely it should involve someone other than the investigator, and there should be clearly defined rules so people know what to expect. Hence the IRB.


The point reiterated several times is that they were already asking the patients all of those questions as part of their normal procedure, without any of the alleged protections provided by the IRB. Once they wanted to use the data they were already collecting anyway, it suddenly became a huge risk that required all of this bureaucracy to mitigate.


Yes, but there is a key distinction between the practice of medicine and clinical experimentation. The first is (in theory) done for the benefit of the patient receiving treatment. The second is done to test a hypothesis.

Putting a patient at risk to treat them (e.g. chemotherapy) is very different from putting them at risk to get data for a publication or an NDA. There's no reason that what's tolerable in one context should be tolerable in the other.


The thing is, in this case, there was no one in the 2nd category who wasn't also in the 1st category. The question is already being asked to treat them. The risk is already being taken. Using the data from that action for research doesn't expose them to any additional risk.


Ultimately, the issue is this: it costs less overall for the author to just comply with the standard way studies are done, than for the committee to carve out and justify exceptions just for his study.

The author complains of the IRB not doing Methods 101. Perhaps the author should have done Methods 201, then he would understand the concerns (which many human experimenters in this thread have elaborated on). As stated elsewhere, so many of the author's wounds are self-inflicted.


The author's point is not that his time was wasted. It's that the system is broken.

Sure, he could have spent a lot of time and effort to learn and adapt to the broken system, and his study would have been done, and fewer people would perhaps be misdiagnosed.

But the point is that it shouldn't be this hard to make the world a better place.


The IRB is there to protect innocent subjects from malicious studies. It's not that the system is 'broken', it's just 'not perfect'.

It's actually been pretty disturbing to read this whole thread. So many HNers who complain endlessly about the relatively trivial metadata that corporations collect on us and sell would quite happily get rid of the committee that ensures studies done on them meet minimum ethical requirements and that subjects know they're being tested on.

Read the responses in this thread from other people who have done studies on humans that required passing through one of these committees - actual people who know what the committees are for, rather than armchair critics. All of them present the same opinion: IRBs are annoying and sometimes a little frustrating, but thank god they're there, because they protect the public. Every researcher is going to claim that their study will cure cancer, but not every researcher is being truthful (because they're humans, after all), and plenty of researchers don't care if they hurt their subjects in order to add a feather to their caps.


The nature of scientific research ensures that there can be no "standard way studies are done". When you are trying to discover entirely new things, you will inevitably run into situations that were not accounted for when the standards were created.

It may be inconvenient, but advancing human knowledge does require that humans use their capacity to reason and examine circumstances on an individual basis rather than mindlessly checking boxes on some one-size-fits-all form.


Well, since you're arguing against standards in scientific research, then we may as well disband NIST and get rid of SI units.


Yeah, but that kind of system has real problems.

The incentive for the IRB is to be as thorough and put up as many obstacles as possible. Both from a blame deflection standpoint and a bureaucratic empire building standpoint.

I'm not aware of any force pushing in the other direction to make the scrutiny proportional to the risk.

If this is true - and I'm just some programmer reading stuff on the web who knows little of real life IRBs - the end result is a system that overproduces IRB red tape and underproduces science.

Of course, complaining about a system is easy. Figuring out a better system is not.


> I'm not aware of any force pushing in the other direction to make the scrutiny proportional to the risk.

Each research institution has its own IRB, which I expect is composed of faculty/researchers. They have an incentive not to block their own institution from producing any research output whatsoever.


Yeah, the author was not quite as bureaucracy-savvy as you, it's clear. You do realize that the whole study was nothing but matching the results of a questionnaire already in use against interviews that were already being done? Literally, all he's doing is observing. And maybe asking a few extra people to complete the questionnaire; the wording isn't clear.


Said this in another comment, but I'll reiterate here. The big difference is purpose/intent. Looking under the hood of someone's car can be totally fine or a violation depending on whether or not I'm being paid to fix it.

Same thing applies here. In one case the questions are being asked to provide treatment the patient is paying for. In the other it's being done to collect data for a publication.


Bullshit.

If my mechanic wants to publish a paper on wear-and-tear on 4 cylinder engines using just the information he gleaned from doing work on cars that I (and others) paid him for, he has caused zero damage to me versus just doing the work on my car.

The same applies here. Patients come in for treatment and are asked exactly the same questions as they would be if no research were being done.


Private information about my mental health is much more personal and sensitive than the wear of my car's engine. Handing that data to an untrained student intent on publishing a paper can cause plenty of harm.

I agree, though, that this study could have been done more easily by accessing the patients' records post-facto. But it's not the IRB's fault that the experiment could have been designed better.


Yes, mental health information is personal and sensitive, but it looks like the IRB didn't even address patient privacy, it was instead something that came out of a later audit.

[edit]

Also, it seems that post-facto experiments are being encouraged over controlled trials, which is the opposite of what you want from an experimenter's point of view.


More privacy protections were enforced later according to the account, but the very first protection of them all is asking for consent to use the data. That's what I was getting at. Consent to use your responses to treat you is different from consent to use them for research.


The author seems to believe that he is incredibly bureaucracy-savvy though.


Hey did you know that there are pens made specifically not to allow stabbing for prisons? https://smile.amazon.com/Prison-Flexible-Non-Lethal-Ball-Poi...


Fill out this 40-page form requesting an exemption to the no-pen rule. In 3-5 years we will deny the request due to the increased risk of patients choking on the more flexible pens.


Did you fill out the 100 page exemption to do the 40 page form? Did you also remember to complete the form completion class and ethics sub-committee on completion?

(Yeah, pretty sure this is the deepest pit of hell)


There wasn't a no-pen rule from the IRB; there was a mandatory-pen rule.


IRBs for psychology really do seem needlessly bureaucratic. This is a case where a controlled experiment is not allowed, but the corresponding natural experiment would be (They would likely have gotten quick approval to investigate long-term outcomes of patients who had charted bipolar on this test). This is not the only time that I've heard of professional psychologists not being allowed to do something considered mundane and ethical as a course of their job if it were part of an experiment.

Yes I am aware of trauma caused to people in not just Nazi experiments but in many post-war studies (many of which were of dubious value). It seems like at some point we should decide one of the following:

A) Psychology has the potential to significantly alleviate suffering, so it's worth a risk of small harms in order to collect good data.

B) Psychology doesn't have the potential to significantly alleviate suffering, so we shouldn't do any of these studies.


My freshman year of college I proposed a study to our hospital's IRB to strap small lasers to three-week-old infants in an effort to measure concentrations of a chemical in their blood. The most frustrating part was not the arcane insistence on ink and bolded study names, but the hardline insistence that it was impossible (illegal) to test the device before getting IRB approval - even on ourselves. Meaning that without any calibration or testing, our initial study would likely come back with poor results or be a dud, but we couldn't find out until we filled out all the paperwork.

Incidentally, I recall it being much less painful than this, at least to get to the approval step. I would expect that each IRB varies greatly from group to group.


What stopped you from testing on yourselves anyway outside the study?


Our PI was perhaps stricter than was useful. I believe it was intended as a teaching tool, to force us to deal with the IRB.


Well, my story with the research bureaucracy: a coworker had to file a notice that he had received some money for a conference from outside the university, and he dutifully filled out everything. The thing is, as a theoretical physicist you are supposed to fill out the first page, check the box on page 27 saying there is no conflict of interest, and sign the thing. Any answer at all on certain questions will trigger a public hearing by some very obscure board.

A public hearing means the poor guy has to go there in case the public has questions. He had filled out everything, so he had to go to the anti-corruption board and the nuclear safety commission and so on. After roughly three weeks of him whining that he always sits there like an idiot, I got curious and came along to one of the hearings - definitely for some reason other than watching him sit there like an idiot, or so I claim.

That one was the animal cruelty commission (against it, that is), and at the hearing were the committee, a very nervous working group, one unhappy postdoc, and a grad student who just wanted to watch the show. The PI of the working group then gave a presentation on the sad reality that you can't study pain receptors if you anesthetize the mouse. (Apparently that was not their first time there, so I only got the part on why they are certain there is really no way to get a statistically meaningful result with fewer mice.)

It really drove home the point that these procedures are not designed for the convenience of theoretical physicists, but to tackle really hard problems.


If the ethical problems of rat suffering are really hard problems, why is it utterly mundane for people to buy products at Home Depot to deliberately poison them for days and days until they succumb to internal bleeding, for no other reason than "it was bothering me, existing in my house"?

Perhaps we should or could start caring, but we definitely do not even acknowledge there being a meaningful question there in any other element of human life.


Fortunately for us, the IRB only governs human experimentation. You need very little approval to experiment on animals, in the US at least.


What a medical professor at MIT HST told me is exactly the opposite, at least for "higher order" mammals like pigs. The checks are more strict because the animal can't possibly consent to you doing the experiment on it.


I dunno what to tell you, the Belmont report was about humans. It's right there in the name.

Animal research is often regulated under its own framework (in the US, the IACUC rather than the IRB). My training was pretty clear on that.


I mean, he didn't say it was handled by the IRB. He just told me it was more strict. Maybe that was specific to MIT.


Interesting. At the institutions I'm familiar with, you can do all kinds of unspeakable crap to animals and no one cares unless PETA gets wind of it.


If memory serves me well, a friend of mine there killed mutant zebrafish larvae, after being done with them, with bleach (as the approved protocol). And another friend at Harvard cut the heads off lab rats with scissors like Heidi plucking flowers.

My understanding is the less you look like a human in the grand scheme of things, the more unspeakable crap we're comfortable with doing to you.


"Hard problems"? They're mice, for crying out loud!


  “YOU LET PEOPLE SIGN CONSENT FORMS IN 
   PENCIL, HOW CAN YOU JUSTIFY THAT?!”
The funny thing is, contracts are legally binding agreements, whether they are verbal, scribbled on a napkin, signed in pencil, pen or blood, and whether the signature is an easily forged 'X' or elaborate calligraphy. Consent forms are a slightly different use-case, but I think the premise is the same: to show a meeting of the minds.

Knowing this, the pencil signature actually protects the patient more than the institution, since the patient could theoretically forge a revocation of consent by erasing their mark. Meanwhile, ink isn't any more binding without some sort of objective third-party official (e.g. a notary public, beholden neither to patient nor to institution) serving as a formal witness to rule out forgery, in the case of a simple 'X' or an erasure.

In practical terms, one solution to render the pencil signature binding would be to create a tamper-evident photocopy of each serialized document, and retain the pencil original to ensure a match. Fraud marks, like a UUID moire pattern sprayed by an ordinary desktop printer, would be permanently printed into the background of the form and readily detectable in the photocopy (yes, this is a photocopy of the same piece of paper, and the forger failed to notice the lightly tinted pattern when erasing their signature). In the event of an erasure, the paper might show an abrasion, and the photocopy would act as a snapshot/backup.

Meanwhile, it should be possible to access existing, archived, historical questionnaires and actual clinical diagnoses from normal records, in a de-identified fashion, and compare results, no?

Am I misunderstanding the goal of the study? Didn't he want to compare the survey's assessment with actual patient dispositions, and leave all other treatment untouched, undisturbed, and otherwise normal? Ask the questions, do the rest the same as ever, but compare the opinion formed by the score of the answers with the opinion the professional caregiver forms beyond the scope of the simple Q&A exam?


Here's a story about sidestepping IRB processes by going offshore

http://khn.org/news/offshore-rush-for-herpes-vaccine-roils-d...

"Neither the Food and Drug Administration nor a safety panel known as an institutional review board, or an IRB, monitored the testing of a vaccine its creators say prevents herpes outbreaks. Most of the 20 participants were Americans with herpes who were flown to the island several times to be vaccinated, according to Rational Vaccines, the company that oversaw the trial."


Discussed on HN yesterday https://news.ycombinator.com/item?id=15126557

As the people doing that one argue, there is an argument for balancing the risks of the test against the risk, to sufferers of the illness and the like, of not doing the tests.


For those who think the author is in the wrong

1 - Do you think this story indicates a success or a failure of the IRB?

2 - Would you rather have a better expedited review process AND more studies (remember, the personal-effort cost killed this study), neither, or do you see some third option?


1. It's a success of his IRB. Of the many requirements he described, both by his IRB and the auditors, he didn't make the effort to try to understand how any of them could be good for his subjects. Instead he focused on how they affected his study, and tried to find the bare minimum he had to do to get approved. If the result of it is he doesn't want to deal with IRBs anymore, it might be for the best.

2. I believe this study could have been done with much less personal effort. His hypothesis is that the screening question leads to unwarranted diagnoses. He already had Dr. W convinced that it's plausible, and willing to give his patients a full diagnosis. And he had other doctors who would only use the screening question. All that's required to refute the null hypothesis is to compare the rate of "yes" vs "no" in the set of Dr. W's diagnoses with the rate from the doctors who just use the screening question. If his hypothesis is true, the other doctors would get more positives. That's 1 bit of information he had to extract anonymously from the patients' existing records, which is a strict subset of the methodology he developed for the study.
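
To make the comparison concrete, here's a minimal sketch of the test I have in mind (the counts are invented for illustration, and statsmodels is just one convenient way to run a two-proportion z-test):

  from statsmodels.stats.proportion import proportions_ztest

  # Hypothetical chart counts: "yes" (bipolar) diagnoses per group
  yes_screen, n_screen = 30, 100  # doctors using only the screening question
  yes_full, n_full = 18, 100      # Dr. W's full diagnostic interviews

  # H0: both groups diagnose bipolar at the same rate
  stat, p = proportions_ztest([yes_screen, yes_full], [n_screen, n_full])
  print("z = %.2f, p = %.4f" % (stat, p))

If the hypothesis holds, the screening-only group's rate comes out significantly higher.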


If you don't test both kinds of diagnostics on the same patients, that doesn't sound the same at all?

As another example, if you have a ten-question survey, asking 1000 people one question is not the same as asking 100 people all ten questions. You don't learn nearly as much about the correlation between answers to different questions.


Assume the patients are assigned to the doctors in a way that's not affected by whether they have the condition or not (e.g. at random). If the screening question doesn't lead to over-diagnosis, then you'd expect the same ratio of positives/negatives from both sets.

You need more subjects to attain the same statistical power, but the data is much easier to obtain. 100 subjects already sounded like overkill anyway.
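
For a rough sense of "more subjects", here's a sketch of the power calculation under made-up rates (30% positives from screening-only doctors vs. 18% from full interviews):

  from statsmodels.stats.proportion import proportion_effectsize
  from statsmodels.stats.power import NormalIndPower

  # Hypothetical rates; two-sided test at alpha 0.05, 80% power
  effect = proportion_effectsize(0.30, 0.18)
  n = NormalIndPower().solve_power(effect, power=0.8, alpha=0.05)
  print("~%d charts per group" % round(n))  # ~98 for these numbers

Charts are cheap to count, so even a few hundred per group is far less effort than consenting 100 live subjects.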


It doesn't strictly pertain to his hypothesis, but presumably you would like data on false positives and negatives as well as overall outcome percentages.


Granted. But to get that you need to jump many more hurdles (which are there for good reasons).


>>Do you think this story indicates a success or a failure of the IRB?

Failure of the IRB but mostly failure of the author to explore his options.

>>Would you rather a better expedited review process AND more studies (remember the personal effort cost killed this study), neither, or do you see some 3rd option?

As I've posted in this thread, this problem is solved. Private IRBs exist that are much more responsive.


What options do you think were inadequately explored to sign the consent forms? Finger paint?


It is worthwhile to consider that evidence of consent is mandatory, and signing in pencil is not considered a legally valid option. If you can't sign the standard form for safety reasons, there are all kinds of ways that have traditionally been used to document legal consent for people who can't sign documents for various reasons - for example, public notaries, signatures of witnesses, or even simple video evidence of explaining whatever needs to be explained and of the patient consenting.


>>What options do you think were inadequately explored to sign the consent forms?

None.

Now, if you asked a question without assuming what you think I was saying, I'd tell you that Alexander's other options were to not use the IRB that he went to, and to explore the private marketplace of IRB organizations that provide much better service than the nightmare he dealt with - a very common practice in all fields.


> Alexander's other options were to not use the IRB that he went to

This is, frankly, ridiculous. The 50-70% overhead the university takes from every grant already pays for one IRB. If that one isn't working, the solution is not to tap into (almost inevitably tight) research funds to pay for a second, for-profit one instead.

Given how hard it is to charge things like computers to grants, I'm actually amazed this is even allowed.


>> If that one isn't working, the solution is not to tap in (almost inevitably tight) research funds to pay for a second for-profit one instead.

I am curious: How much do you think private IRB engagement costs?


I looked into this once and it was a few thousand dollars.

"But Matt", you'll say, "Isn't that a drop in the bucket compared to an R01 (a few hundred thousand/year), let alone a big program project grant? Why not spend a bit more to start data collection sooner?"

This is a pretty tempting proposal. However, I think this analysis almost misses the point. Programs run by people who can land big grants usually involve someone who already knows how to navigate the IRB process and can get things approved relatively quickly.

However, small or pilot projects tend to get bogged down, since the proposers have much less experience with the process, as in the article (and in my own experience). This is especially true for people like the author, who aren't "full-time" human subjects researchers. The article didn't mention the project's budget, but I would bet that if it wasn't zero, it was close to it (especially after covering salaries, etc). Even $1000 is going to be prohibitively expensive for many of these projects.

More philosophically, I think overseeing research is part of a university's job (and, as I complained above), one that is already paid for via overhead. There's also something vaguely worrying about the incentives of a for-profit IRB.


>>I looked into this once and it was a few thousand dollars.

Our IRB cost is 10% of that, or less, depending on the number you are choosing for "few." Maybe if you used WIRB or another IRB that is specific to drug trials, yes, it can run that expensive. For run-of-the-mill published papers, no, you should not be using services like that.

>>There's also something vaguely worrying about the incentives of a for-profit IRB.

This would imply that any reviewer or journal actually checks IRB qualifications on submission, review, or refereeing. Devor et al in the JSCR/Ohio State case, and all the reviewers who came out as a result of that saying they don't check IRB approval, show that it is anything but. Again, this isn't true for drug trials, but for expedited-review, minimal-risk studies, there is no reason to be using bureaucratic and slow-moving IRB providers.

EDIT:

>>Even $1000 is going to be prohibitively expensive for many of these projects.

There are many IRB providers who do expedited review, minimal-risk, standard studies for less than half of this cost.

EDIT2:

>>I think overseeing research is part of a university's job (and, as I complained above), one that is already paid for via overhead.

Sure. I'm not going to argue against the idea that the government or the university should do their jobs that they were paid to do. But they regularly and demonstrably do not, in far more fields than IRB. Private markets exist to go around this bureaucracy. IRB is no different.


I'm not the parent but would like to know. How much does it cost?

And how does "I skipped our free IRB and paid this third-party IRB because they're laxer" really look to the department?


A few hundred dollars, generally. Depending on provider.

>>And how does "I skipped our free IRB and paid this third-party IRB because they're laxer" really look to the department?

In my experience the third-party IRB orgs are not more lax, they are just faster at turning things around. Alexander's article bemoans primarily the time it took to get all these decisions made, which private IRBs would be on top of. I got my rejections of consent forms within 24 hours and get my docs pre-qualified over the phone after submission and discussion with reps. Try that at your local university's IRB. Alexander also has complaints about the questions and methods, but they're mostly unfounded. The study title aspect is important, the signing in pencil is not really relevant (I get his complaint but most industries don't allow contracts signed in pencil), and the transfer of the study to a new investigator unannounced is a huge breach of procedure and he's lucky that all the IRB asked was for a submission of a new investigator. Usually it is much more arduous.

IRB simply controls the ethics and the informed consent documents and has the right to audit materials and process at any time to ensure you stay up on these things. Beyond that, it doesn't do much. When you submit your paper to a peer-reviewed journal (some who don't even need IRB approval, just an ethics statement, mind you), they don't call your IRB provider and reference check you.

Ohio State researchers published a fraudulent article about CrossFit years ago in the JSCR, which is considered a very prestigious exercise science journal.

http://retractionwatch.com/2017/06/02/journal-retracts-ohio-...

Devor stated they had IRB approval. In actuality, they did not. CrossFit sued NSCA (parent org of JSCR) and won lawyer's fees, and Ohio State settled with CrossFit for six figures for damages due to false publications.

IRB is a mere formality in today's landscape. You just need to get it out of the way. It's not like journals are cross-checking your IRB applications, they just want to see the approval that you put in there. And hell, if you're like Ohio State researchers (and probably many more), you can just lie about it.


Yeah, I'm surprised the funders (or someone in a business office, at least) don't baulk at paying for this.

@icelander, have you ever successfully charged these third-party IRB fees to a grant?


Yes. A few weeks ago.


If one group of IRBs is prohibiting certain practices and another is allowing them, then that is obviously a major systematic flaw in IRBs as such. Either one IRB is letting shady practices through (and that's a big problem) or another class of IRBs is unreasonably restricting research, and that's a problem as well.

Shopping for a better IRB isn't a systematic solution, it's a workaround for a system of flawed IRBs.


A take from the opposite end of the spectrum (ethically and logistically):

https://arstechnica.com/science/2017/08/bucking-fda-peter-th...

I wish the author took more seriously the warnings around patient health and safety. Yes, they may not apply to his particular project, but that doesn't make asking about them useless. To take past abuses lightly undermines his overall point.

It's done precisely to avoid a situation like the way he delegated responsibility to his PI: "Dr. W had vaguely hoped that I was taking care of it. I had vaguely hoped that Dr. W was taking care of it." The review board shouldn't make vague assumptions about something as critical and necessary as ethical study design.


I take the possibility of serious ethical violations...well, seriously.

I'm not sure whether the global cost of IRBs outweighs the benefits. But I'm pretty sure that the sort of regulatory changes OHRP proposed (mentioned in the linked NYT article), where studies like mine with minimal possible risk are exempted from IRB requirements, seem good and important.

I also wonder whether it's possible to monitor even the scary studies with lots of potential risks - like the herpes vaccine trial you link to - with a little more common sense and a little bit less insistence on everything being signed in pen.


BTW, welcome [back] to HN. And just wanted to say, I am immensely enjoying very many of your articles and glad to see them making the front page!


There are very serious costs to excessive safety regulation. When it makes drug development more expensive, marginal drugs don't get put through trials, and those drugs then never reach patients. Rational analysis means weighing the cost of each additional safety rule against the benefit. In many cases, there's a lot of evidence that the benefits of the current IRB process are trivial or nonexistent, and the costs are large.


Extending rules that are a good idea for some projects to make them apply to all projects is the basis of crappy bureaucracy. It replaces sound human judgement with a set of inflexible rules that must be followed just because they are rules.


The problem with specialized committees that review and approve something is that they have absolutely no incentive to allow the new thing to go forward and every incentive to drag it into the ground with requests for more and more changes and precautions.

Well, except when they are paid per review in which case they just rubber-stamp everything.


It's kinda amazing any research gets done at all, with all the bureaucracy.

My university requires official approval and a full process even to apply for a small non-institutional grant. I was going to apply for that tiny little AI Grant that was posted on here a while back, then I concluded that the rules probably would require me to get approval to apply for that. So I said "fuck it."


I suspect that the proper ethical thing to do depends on the risks to the patients, and the benefits to the subjects and overall population.

In my own opinion the author is quite correct in classifying the medical benefits to the subjects (practically speaking, zero; equivalent to talking with any random untrained individual), and to future patients (potentially great benefits).

The risks seem to be much more closely associated with accidentally exposing patient information. This is the area where their study protocols (at least as described in the overview) need review and revision.

The unblinding documentation should be kept in a secured, designated area. A safe with a very limited ACL would be ideal. Unblinding information could be either the consent forms with the unique non-PII ID information on them or a list for matching unique non-PII IDs to (hard copies of) existing patient record stubs attached to the consent forms.

If possible, the study should have been designed to use as much of the already-collected data as possible, and perhaps to ask the small amount of additional information of everyone but not record the answers (or to destroy the collected information) for those who do not consent. Hypothetically, patients could be asked whether they consent to anonymously providing some of the answers they already gave to an informational medical research study. If you ask them for consent AFTER having asked (and gotten answers to) the questions, then you're even safe to tell them which data are being collected (their answers as well) and what you're attempting to study.

I believe that the above is the most ethical way of designing this type of study, and it sounds similar to what I've heard professionals in this field call a 'retrospective study' (collecting study data from previously collected medical data).
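
As a minimal sketch of the ID scheme above (the field names are hypothetical, not from the study): keep one table that maps synthetic study IDs back to patients - the unblinding key that lives in the safe - and keep the responses keyed only by the synthetic ID:

  import secrets

  unblinding_key = {}  # study_id -> patient identifier; lives only in the safe
  responses = []       # study data keyed by synthetic ID; safe to analyze

  def enroll(patient_mrn, answers):
      study_id = secrets.token_hex(8)         # unique, non-PII, non-guessable
      unblinding_key[study_id] = patient_mrn  # the only link back to the patient
      responses.append(dict(answers, study_id=study_id))
      return study_id

  enroll("MRN-0042", {"screen_positive": True, "full_dx_bipolar": False})

Losing or leaking the responses then exposes nothing identifying; only the safe's contents can re-link them.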


Why should study data be subject to higher security standards than the much more sensitive information that is collected on the patient during the process of providing care?


Mostly in preparation for sharing the study data with others (e.g. publishing, outside data processing).

It's just good hygiene to have the non-sensitive data isolated from the sensitive stuff.

Identifying information belongs under lock and key, or at least behind a second round of authentication (even if by a literal physical key fob/ID card) - and that should hold for the hospital too.

Answers to questions which can't be classified as the above can be associated with a fully synthetic ID that works as a pointer to the above. That makes them less sensitive (but still important to keep secure: e.g. they could live in the office, which is always locked when unoccupied, and don't have to be stored in the safe).

Practicing this discipline also makes it less likely for researchers and writers to inadvertently leak patient data. I'm actually shocked at how often hospitals / clinics / etc don't practice similar division of data by storage requirement.


All PHI is subject to the same data security standards, regardless of if it is from research or clinical sources.


That's not what the blog post linked here says.


I didn't see anyplace in the blog post where they say how the data in the chart (that they couldn't use) was stored. Did I miss that part? Are we talking about the online chart that they used?

Just because they couldn't use the online (clinical) chart to store research data doesn't mean that the data that backs the clinical chart application is stored in any less secure way. There are very specific methods that are used to store PHI electronically. The fact that the clinicians (who needed access) could access it doesn't mean it wasn't stored securely -- they were supposed to have access to it.

In all clinical research there is a clear separation between clinical (as in, used for treatment) and research (as in, should never be used for treatment) data. When you try to hit that gray in-between area is when you start to have issues with IRBs. In particular, when you are both the treating physician and the researcher, you have to be particularly careful about keeping the two sides separate. There is no way anyone should be allowed to store research data in the clinical record -- even if the researchers otherwise have access to the clinical chart.

Also -- it should be mentioned that not all study data is automatically PHI. In fact, most of it shouldn't be. Once it's been de-identified, then the data isn't PHI any more and can be more freely used for research. This is why there is so much emphasis on the security for storing the "unblinding" documentation.
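
A minimal sketch of that de-identification step (the column names are invented; a real protocol would follow HIPAA's Safe Harbor list of 18 identifier categories, or an expert determination):

  import pandas as pd

  chart = pd.DataFrame([{
      "name": "A. Smith", "mrn": "0042", "dob": "1980-03-01",
      "screen_positive": True, "full_dx_bipolar": False,
  }])

  # Drop the direct identifiers; the remaining fields are no longer PHI
  deidentified = chart.drop(columns=["name", "mrn", "dob"])
  print(deidentified)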


I'm in the process of getting two IRB applications approved. I am somewhere in the first quarter of that author's story. I agree that it's a big hurdle to get past, and writing a good consent form is hard.

All of this being said, though, it was/is a great experience to have under my belt. And, next time, I'll be able to tackle the regulatory hurdles with far less confusion.


Much of this reminds me of what I have been going through. What SSC should have investigated and written up is that hospital and medical IRBs are fraught with bureaucracy, and that a private IRB industry has developed, with entities that handle a wide range of studies and turn around informed consent docs and the like very fast (within a week).

Our first IC forms got banged multiple times for the same reasons his did. And it's annoying as shit. But my ethical training focused not on something stupid like Nazis, but on something real, like American scientists unethically withholding treatment for the diseases they were studying, as in the Tuskegee Syphilis Experiment (the actual reason IRBs exist as they do). Or the Milgram experiment. Etc.

A lot of IRB review is stupid. But messing with human subjects doesn’t have a good track record in this country. Our predecessors earned every bit of bureaucracy and oversight based on their absurd experiments. Such is life.


That sounds, to me, eerily close to the mentalities that Alexander rightly criticizes:

- "It's no big deal because you can just pay off someone's who's in the know and can work the system rather than read the rules at face value."

- "No amount of kafkaesque absurdity is too much, because Nazis."


What specifically sounds like that? I cannot decipher the context you are using those quotes in regarding my comment.


For the first one:

>"What SSC should have investigated and wrote up is that hospital and medical IRB is fraught with bureaucracy and that a private IRB industry has developed, entities inside that handle wide ranges of studies and turn around informed consent docs and things very fast (within a week).

That is, even though Alexander had a long wait time just to hear responses, there are expensive services you can use that make all that wait time and pushback from hidden rules disappear, so you don't see it as a problem.

For the second one:

>A lot of IRB review is stupid. But messing with human subjects doesn’t have a good track record in this country. Our predecessors earned every bit of bureaucracy and oversight based on their absurd experiments. Such is life.

You don't show a standard for the upper bound on when the absurdity is too much, dismissing any such concerns with appeal to some historical atrocities.


>>That is, even though Alexander had a long wait time just to hear responses, there are expensive services you can use that make all that wait time and pushback from hidden rules disappear, so you don't see it as a problem.

You can make the wait time disappear. In my experience, you cannot make the pushback disappear. I've been scolded multiple times by our IRB for improper and too-technical language in our Informed Consent forms, amongst other violations/gaffes.

IRB is poorly-designed. Should it go away? Sure, in its current incarnation, yes. Will it? Like anything government + university designed, there is a snowball's chance in hell that anything with that many moving parts and bureaucracy will ever die a true death.

The private markets designed a workaround for this and they are accepted as every bit as good as the university and medical IRBs.

As far as "expensive," your time is valuable if you are a scientist. You should act accordingly. The IRB fees we pay are less than 7% of the fee it costs to publish in major peer-reviewed journals, which is another racket that is propped up by the government + university complex. So I don't consider it expensive in the least. (It is well under $500, to give you a hard number.)


>You can make the wait time disappear. ... IRB is poorly-designed. Should it go away? Sure, in its current incarnation, yes.

So, you agree with my points then?

>As far as "expensive," your time is valuable if you are a scientist. You should act accordingly.

I didn't say otherwise, and that's not relevant to Alexander's claim that many of the standards are excessive and not merited.


Someone on Reddit says[1]:

> Amateur. What you do is you sweet talk the clinicians into using their medical judgement to adopt the form as part of their routine clinical practice and get them to include it as part of the patient's medical records. Later... you approach the IRB for a retrospective chart review study and get blessed with waived consent. Bonus: very likely to also get expedited review.

[1] https://www.reddit.com/r/slatestarcodex/comments/6wtylk/my_i...


That sounds more unethical though?


What form?


The author was trying to test whether a simple common test for bipolar was accurate. Presumably, the suggestion is to make the test a form filled out by each doctor for each patient. Then months or years later the author could do a chart review and see whether the test results were accurate.


To potentially clarify: the test is just a form. It's a form with some questions about mood, and is currently being used to diagnose bipolar disorder (inappropriately, the author believes).


Commenters have mentioned "Chesterton's Fence". I'd add a reference to another of GKC's observations:

"When you break the big laws, you do not get liberty; you do not even get anarchy. You get the small laws."


I can't help but wonder if the Fence comments had anything to do with my blog post last night. Kind of like watching social osmosis in action :D


Chesterton's fence has long been a staple of HN discussions. Unfortunately, it involves no risk/reward analysis and it invariably leads to needless conservatism. That's probably not a big deal in medical research where risks are often high, but it's a problem when applied as a general principle.

(The opposite wisdom is probably grandma's ham [1]. Consider that if grandma had passed away, they might never be allowed to try cooking the whole ham.)

I quite enjoyed that blog post, btw.

[1]: http://www.angelfire.com/ma/artemis9/humor/joke7.html


The point of Chesterton's fence is cost/benefit analysis. It's not saying the fence mustn't go; it's saying you need to know why the fence was put there in order to evaluate whether it's best removed or left in place - that is, your analysis needs to be informed by understanding of the status quo in order to be accurate.

Grandma's ham is a great example. It might have been some subtle trichinosis-related failure case that cutting the end off a ham reliably prevents, and it's a shame we had to lose your great-uncle Joe to find out about that. Or it might have been an issue of pan length. You can't know until you ask, so you ask if you can. If you can't ask, then sure, you do the best you can with what you have - I should like to hope there are no blind dogmatists here. But if you can ask, you'd be a fool not to.


My only objection is the dogmatism. Asking why something is the way it is is obviously worthwhile.

Perhaps you just have more faith in people to behave reasonably than I do.


> Faced with someone even more obsessive and bureaucratic than they were, the IRB backed down and gave us preliminary permission to start our study.

I love it! You can’t fight mindless rule followers with logic or reason. You need to use even more detailed rule-following. This is true in every bureaucracy where checklists have replaced good judgment.



And it's only worse at the staff level. Even getting patient data variables for patients consented to IRB-approved studies is hit-or-miss. Groups build barriers by wanting to "work with" principal investigators (PIs) instead of providing data to PIs and study staff. Data moats are everywhere.

My own group (at a research center attached to a university school of medicine) has switched our focus from helping PIs with clinical data management and applications to helping manage research samples and sample processing, mainly by doing housekeeping around genomic processing across different lab teams.

It's kind of hard to convey just how much I would warn devs away from healthcare and healthcare research IT gigs.


Maybe he should have called it a care audit and made it look like they were auditing the doctors' success at administering screenings instead of doing any kind of patient research.


A separate discussion of this is happening in /r/slatestarcodex.

Top comment there right now:

"Amateur. What you do is you sweet talk the clinicians into using their medical judgement to adopt the form as part of their routine clinical practice and get them to include it as part of the patient's medical records. Later... you approach the IRB for a retrospective chart review study and get blessed with waived consent. Bonus: very likely to also get expedited review."

Which is an extremely clever "hack" (in the way we like to use it on HN) of the IRB setup.


You can't stop science. Someday, somebody, somewhere will answer the questions that this IRB requirement silenced. Whether it's via underdeveloped countries, or through the use of technologies too new to catch the regulator's eye, science will progress.


Or via a procedure that complies with the requirements of that IRB :)


Why do a study in the first place? Particularly when the outcome of the study is a foregone conclusion as far as the author is concerned. Just go to professional conferences and say "researchers say this works, but those of us with clinical experience know it doesn't. We write the diagnostic manuals, let's write a diagnostic procedure that actually makes sense to us"


I can't tell how much of this is exaggerated and how much really happened. Sometimes I have a hard time parsing the difference.

They demand that you reveal what the study is on and sign in pen? Because the Nazis didn't disclose that stuff and get signatures?


That's kinda the point. We need rules because Nazis (and Tuskegee, etc.) but it's not clear we need these rules about pen vs pencil.


The pen-vs-pencil thing is a total canard by the author, who is looking to dress up his indignation. Hand the form to the subject. Hand a pen to the subject. Subject reads and signs the form. Take the form from the subject. Take the pen from the subject. Job done.

It's not like the patient will go bonkers stabbing-crazy the millisecond their flesh touches a pen.


The first part is real, and the second part is the joke.


Man. This sounds just like my workplace.


The author complains about there not being enough of an ethics gauntlet with the bipolar test in question, and then complains of having to run an ethics gauntlet to do the study.

The author keeps bringing up Nazis and suggests he knows about study methods, when it's clear he's pretty clueless about doing studies. Several of the "this is stupid" questions really aren't that stupid, given that these are generic forms for any study. Why the fuck do you care if you're being asked whether you're removing organs? Just tick 'no' and move on to the next question. Does the author also complain that other genders are selectable on online forms, and not just his own gender - after all, it's equally useless information? The setup to the story is: 'waaah, this application form isn't tailored exclusively to MY study'.

I ran into problems with ethics committees myself when doing an (aborted) PhD that caused significant needless delays. They're annoying as hell, I agree, and there are some genuine complaints in the article, but most of what the author is complaining about makes it clear that he doesn't understand the point of the questions posed.


I thought this was going to be about Ruby...


Hah, I think your joke got stomped by a mod re-writing the headline. :]


yeah, they probably saw that coming a mile away... :)


Poor choice of headline for this site, IMO. I can't be the only one who clicked expecting something about the Ruby REPL.

For the benefit of anybody else confused by the title, IRB in this article refers to an "Institutional Review Board" https://en.wikipedia.org/wiki/Institutional_review_board


[flagged]


Or collaborated with someone who cares enough.


It sounds like the study was poorly designed. Having an actual PI would have helped. The IRB probably would have seemed a lot less painful with a better-designed study and an engaged PI.


You're probably getting downvoted because you're making a casual armchair judgement and this is Scott fucking Alexander we're talking about.

//edit: I only just got to this section of the article, but it feels good to include as a counter-example.

> During that year, Dr. W and I worked together on two less ambitious studies, carefully designed not to require any contact with the IRB. One was a case report, the other used publicly available data.

> They won 1st and 2nd prize at a regional research competition. I got some nice certificates for my wall and a little prize money. I went on to present one of them at the national meeting of the American Psychiatric Association, a friend helped me write it up formally, and it was recently accepted for publication by a medium-tier journal.

> I say this not to boast, but to protest that I’m not as much of a loser as my story probably makes me sound. I’m capable of doing research, I think I have something to contribute to Science. I still think the bipolar screening test sucks, and I still think that patients are being harmed by people’s reliance on it. I still think somebody should look into it and publish the results.


Sorry, but this is a real question: Why would we know who Scott Alexander is?


He's written a lot of great and influential things, and I feel like he's entering the "nerd/intellectual" common knowledge.

The biggest example for me is Meditations on Moloch, which is the only thing that's ever made me deeply question my libertarianism. http://slatestarcodex.com/2014/07/30/meditations-on-moloch/

"I can tolerate anything except the outgroup" is a depressing piece on American politics. http://slatestarcodex.com/2014/09/30/i-can-tolerate-anything...

My original comment is not the best, it was a kneejerk reaction. I'm not representing the fandom well. :-/


IRB stuff can be painful, but in my experience submitting these through the process is a lot less problematic than it was described in the article. An experienced PI knows how to navigate these issues.

Practically, there are two issues with which the IRB is concerned: 1) protection of participants, and 2) protection of the institution from litigation by participants.

Realizing the second, oft-unspoken goal can help you manage the IRB process.


The real nightmare is having read through 3/4 of this, only to realize it's the longest bitchy complaint I've ever read through.

No substance and a lot of arrogance.

The more curious take-away is that these extremely long-winded rants of his have younger people fooled into thinking they're worthwhile.

If you have something of value to say - you distill it, condense it, so as to ease the mental effort required for someone else to understand the point you're trying to make.

It reminds me of what philosophers do - they exhaust you into thinking they're making some sophisticated point simply because they're using fancy words that produce the thought of 'am I missing something? Surely no one is spouting complete nonsense for 400 pages, it must be me'. It isn't you. It's them on a massive ego trip - this guy being a great example.


A more charitable interpretation might be that a detailed and clinical account of the absurdity of the most tedious bureaucracy may have made for rather... dry reading.


I can summarize what he's said in the 3/4 that I've read down to 2 or 3 paragraphs, with a few bullet points.

The author complains about tedious procedure, while writing the most tedious blog I've had the displeasure of reading.

The obliviousness and callous arrogance of the author, combined with cult-like reverence some people seem to have for his babbling is what makes this at all worth commenting on.



