Consenting to Known Risks
The goal of informed consent is to ensure that people understand the known risks they’re agreeing to. Modern consent paperwork has mutated far beyond this original purpose, in ways that harm patients.
The landmark 1957 case of Salgo v. Stanford involved a man who was left partially paralyzed after a procedure to diagnose arteriosclerosis. Mr. Salgo’s doctors didn’t inform him that paralysis was a known risk of the procedure. A California appellate court’s decision in Salgo’s favor established the important concept of ensuring that patients understand the potential benefits and known risks of a medical procedure before voluntarily agreeing to it.
In a postscript to a previous post, I briefly wrestled with the ethics of a case report involving a 47-year-old gay male sex worker who developed nasal papillomas after recovering from AIDS. Although the papillomas contained human papillomavirus (HPV) type 11, the man’s doctors weren’t certain that the papillomas were caused by the virus - and they weren’t confident that the current preventive HPV vaccine would effectively treat the infection. The HPV vaccine is only approved for people under age 46, and the regulatory authorizations make no allowance for individuals, such as this particular patient, who epidemiologists tell us are at elevated risk of developing HPV-induced diseases. The medical team demanded that the man sign an informed consent form before receiving the HPV vaccine “off-label.” The case is almost perfectly upside-down from Salgo: the 47-year-old man was informing his doctors of a potential side-benefit of a treatment that poses no known medical risks - aside from the well-known minor risk that, in rare instances, some people faint after receiving an injection. In the end, the man’s bet paid off. After signing the consent form and receiving the HPV vaccine, his nasal papillomas quickly disappeared.
The informed consent process in this case obviously didn’t offer the patient any meaningful protection. Instead, it just looks like a bureaucratic obstruction intended to shield the institution from lawsuits in the event that the treatment didn’t turn out to provide the side-benefit the patient was hoping for. That objective is explicitly prohibited by informed consent laws.
What’s been gradually lost in the evolution of informed consent processes is the original focus on known risk. The word “known” is of paramount importance. As a reductio ad absurdum, you can’t have an informed consent form warning that the HPV vaccine might cause you to be abducted by aliens. In a less absurd example, imagine I set out to conduct an epidemiological survey of blood samples from a community of Christian fundamentalists. I would be aware that some study volunteers might not approve of my same-sex marriage, but that doesn’t mean the consent form should include a checkbox reading “I agree that a homosexual can process my blood samples on a Sunday.” Such language wouldn’t simply create a needless technical obstacle for sample processing; it would inappropriately encourage study volunteers to start wondering about the hidden dangers of sabbath-defiling homosexuals. Because why else would such a thing be on a consent form?
Which brings us to Rebecca Skloot’s insidious book, The Immortal Life of Henrietta Lacks. George Otto Gey’s supposed “abuse” of Henrietta Lacks essentially consisted of putting surgical discard material into tissue culture instead of into the garbage. There wasn’t any known risk of harm for Lacks to consent to. The fake controversy Skloot whipped up to sell books has gradually obscured the primacy of known risks in consent processes. And it isn’t just a general public misapprehension of the issue - Skloot’s sensationalized thinking now suffuses the bloodstream of the modern academic ethical-regulatory industrial complex.
In a recent institutional training module, scientists are invited to consider a case where a research team harmlessly shares anonymized tissue cultures with their scientific colleagues. The research team later realizes that some study volunteers checked a “no” box on a consent question about scientific sample sharing. The flaw in the training exercise is that such a question should never have been on a consent form in the first place. The ethics professionals who wrote the training exercise appear to have lost track of the primacy of known risks in the consent process. The training module asks whether the researchers should destroy their scientific results because “consent was violated.” A contrarian take-away from the training module is that sacrificing hard-won scientific data on the altar of non-harms would be deeply unethical.
And yet that’s exactly what we’ve been doing with hard-won DNA sequencing data. When the pandemic forced me to work remotely, I shifted into a bioinformatic project where I combed through public sequence databases looking for viral sequences hidden behind the animal genomes people originally set out to sequence. So-called “privacy” regulations make it somewhere between extremely difficult and absolutely impossible for credentialed scientists to access datasets containing human sequences, so my survey pretty much only looked at animal datasets. Boy, did I see some doozies. Monkey datasets have an emerging family of potentially cancer-causing viruses called adintoviruses. A monkey brain dataset had an exotic virus called an iridovirus. A Cincinnati sewer rat had a distant relative of smallpox in its lungs. It would be nice to know whether any of these viruses are lurking in any human datasets.
The justification for locking human sequence data behind bureaucratic privacy curtains is that sabbath-defiling homosexuals like me might theoretically be able to deduce the personal identity of the sequence donor. Any fan of true-crime podcasts will know that serial killers have been caught and victims have been identified using sequence-based methods. But those same fans will also know that the process requires specialized expertise and a great deal of time and effort. Tellingly, there has never been a documented case of a research subject being harmed by the release of sequence data. The risk is almost as absurd as alien abduction.
Moreover, it’s easy to computationally strip datasets of all human sequences, which would render a dataset permanently unidentifiable. The most frustrating aspect of the ethics puzzle is that removing the human sequences would actually be a useful first step for virus-hunting. I’ve been debating with colleagues the idea of releasing de-identified versions of clinical sequence datasets, and every conversation disappointingly seems to end with something like, “yes, but there’s no way we can do anything like that because it wasn’t on the consent form.” That conclusion is false. Things that pose no known risk of harm don’t belong on consent forms. Putting harmless things on consent forms doesn’t offer research volunteers any protection - it just whips up public hysteria that needlessly obstructs life-saving medical research. We desperately need to revisit this lose-lose-lose proposition.
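To make the de-identification step concrete, here is a minimal Python sketch of the principle. It is a toy illustration, not a production pipeline - real workflows deplete host reads by aligning against the GRCh38 human reference with a tool like Bowtie2, or by k-mer screening with Kraken2. The function names and sequences below are hypothetical, and the tiny k-mer size is just for demonstration (real screens use k around 31).

```python
def kmers(seq, k):
    """Return the set of all k-mers (substrings of length k) in a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def strip_human_reads(reads, human_kmers, k):
    """Keep only reads that share no k-mer with the human reference set.

    Once every read matching the human k-mer index is discarded, the
    remaining (viral, bacterial, host-animal) reads cannot be used to
    reconstruct the donor's genome - which is the entire de-identification
    argument sketched in the text above.
    """
    return [r for r in reads if not (kmers(r, k) & human_kmers)]

# Hypothetical mini-example: a stand-in "human reference" and two reads.
human_index = kmers("AAAACCCCGGGGTTTT", 4)
kept = strip_human_reads(["AAAACC", "TGCATG"], human_index, 4)
# "AAAACC" overlaps the human index and is dropped; "TGCATG" survives.
```

The design choice worth noting is that the filter errs toward over-removal: any read sharing even one k-mer with the human index is discarded, so ambiguous reads are sacrificed in favor of making re-identification impossible rather than merely difficult.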