This is one of those medical revolutions I am dearly waiting for.
Facilities that are not hospitals (to avoid the risk of tying up medical devices that sick people need), built to _regularly_ check up on otherwise healthy people for preventive care.
Heck, I have so many alerts defined in my monitoring setup for servers, to watch for signals of failure before they get too big. But my own body is not observed until something bad already needs treatment. Why can’t we observe ourselves medically and analyze that record for early signs of trouble before it becomes serious?!
With all the advances in technology in recent years, this ought to happen sooner rather than later.
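To make the monitoring analogy concrete, here is a minimal sketch in Python of the kind of predictive alert I mean (the growth rate, sample data, and 48-hour threshold are all made up): rather than paging when the disk is already full, you fit the recent trend and alert while there is still time to act.

    # Sketch of an "alert before failure" rule for disk usage.
    # All numbers below are invented for illustration.
    import numpy as np

    def hours_until_full(hours, usage_pct):
        """Fit a linear trend to usage samples and extrapolate to 100%."""
        slope, intercept = np.polyfit(hours, usage_pct, 1)
        if slope <= 0:
            return float("inf")  # usage flat or shrinking: nothing to flag
        return (100.0 - (slope * hours[-1] + intercept)) / slope

    # Hypothetical samples: one reading per hour for the last 12 hours,
    # growing roughly 1.5% per hour with a little noise.
    hours = np.arange(12.0)
    usage = 60 + 1.5 * hours + np.random.normal(0, 0.3, 12)

    eta = hours_until_full(hours, usage)
    if eta < 48:  # warn two days before the disk fills, not after it fails
        print(f"ALERT: disk projected to fill in {eta:.0f} hours")

The medical parallel would be tracking a biomarker over yearly visits and flagging the trend, not just the latest value.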
Actually you don't want this, and you are describing a nightmare scenario that everyone who studies health policy understands all too well. Mass screening of healthy people will result in extreme iatrogenics and unnecessary psychological damage and stress (which leads to physical effects as well), not to mention overwhelming the medical system.
The fact is that many things that could be detected will never result in symptoms or other noticeable problems. Further, many things that can be detected are things we can't really do much about, so by detecting one early you are just reducing the amount of life the patient has left without worrying about their disease, or causing unnecessary treatment (which brings unnecessary damage, cost, stress, etc.)
The argument you’re making has always bothered me because it’s hiding the ball.
If finding something that otherwise carries no symptoms is best left untreated, then the fact that you found it should make no difference to the decision. The doctor should say that the best course of action is to do nothing. More information can never be harmful. If you know information is best not acted on in the abstract, then you also know you should not act on it in particular.
So what you really mean but have left unspoken is one of two things.
First, that doctors are untrustworthy people who make recommendations and decisions based on concerns other than their patient’s wellbeing, such as covering their asses from lawsuits or making more money.
But rather than fix that, you would rather keep patients more ignorant and away from the doctor in the first place. Which actively harms people who do have honest doctors. That is, your approach of not testing hurts people with honest doctors to protect people with dishonest doctors.
Or second, if you take dumb patients as the problem, you are willing to hurt people with good decision making ability (who would heed their good doctor’s advice to leave the possible ailment untreated) so you can protect people with bad decision making.
Because many times, these tests will uncover things that obviously need treatment. They will save many lives. But because stupid people will hurt themselves, no one should have access to them.
The political and moral assumptions built into these positions are immense and yet the medical field tucks those away and pretends that this is just a purely scientific truth, that someone running more harmless tests is actually inherently harmful.
>If finding something that otherwise carries no symptoms is best left untreated, then the fact that you found it should make no difference to the decision. The doctor should say that the best course of action is to do nothing. More information can never be harmful.
Absolutely and demonstrably false. There is an entire field of health policy that destroys this harmful idea. First of all, just because something doesn't carry symptoms NOW doesn't mean it won't carry symptoms LATER, but doctors can't always predict this and aren't perfect decision making machines. Many times the patient will push them for further tests and treatment (or the doctors will advocate for it to ease the mind of the patient), which leads to potential harm from unnecessary treatments. And this is just ONE of the ways patients are harmed by unnecessary screening. There is also the psychological damage of having a condition you wouldn't have otherwise known about, and living with that knowledge (take aneurysms, for example). Psychological stress has a real physical toll on the body. Then there's the COST associated with unnecessary screening and treatments, which (especially in the US) can run into the thousands quite easily for even simple interventions that result from unnecessary screening.
Even after that, a screening process can also find something that could be bad, but in some patients doesn't actually decrease their lifespan or quality of life. If we have no way of adjudicating between these cases or predicting which cases will end up bad if left untreated at the present moment, what do you think will happen? People will ask to be treated anyway, and iatrogenics will rear its ugly head.
You cannot ASSUME that screening is automatically good. It must PROVE itself as such in a randomized clinical trial. This trial must show that people live longer and/or better as a result of this intervention. In many well documented cases, this turns out NOT to be the case, which entirely destroys your original rebuttal.
> but doctors can't always predict this and aren't perfect decision making machines
So mass screen healthy people, collect the data, run models, and get better at it. People like you would rather not try, and this is the same reaction I get from doctors. Applying the same tech that we use to improve ad targeting to disease prediction is a no brainer to anyone whose cushy job doesn't depend on the current medieval state of medical technology.
What do you think the entire medical, pharmaceutical, bioinformatic, etc. industries have been doing for decades? Are you really so arrogant to think that other people are too lazy and stupid to think of your brilliant idea and that you are simply more intelligent than everybody else? Do you really think people haven't been trying?
If they aren't mass collecting this data in the first place (for the reasons outlined above), then how can they be doing this?
Or, if they are doing this from quality samples of the population, then we have data to show that, for example, "Yes, Mr. Function Seven, your scan shows elevated levels of Widget-5a enzymes. This is often a precursor to Gadget Cancer, but 30% of the population shows this elevation, while only 1% develop the cancer. It's best to do nothing at this time unless we see further elevation. Have a good day, see you next year"
So either we have this data and we can give accurate advice, or we don't have this data because we're afraid of over-diagnosing non-problems.
I understand the fear, but it's still burying one's head in the sand to not even look.
They are mass collecting the data. They have more data than they know what to do with. Bioinformatics abuses data science at levels comparable to Google and Facebook.
They are getting quality samples from the population. But it's not as simple as "high enzyme = maybe good chance of cancer". How are you going to get those enzymes from their blood in a way that can be applied to the general population? Before that, how do you know there are no confounding factors (of which there are a LOT)?
> So either we have this data and we can give accurate advice, or we don't have this data because we're afraid of over-diagnosing non-problems.
This is what people think the activation of genes is. What they THINK the activation of genes is.
Note the following:
- They don't know all the genes. They are constantly identifying new ones. The number of gene pathways people are pretty sure are complete is small. I don't know if this one is one of the "pretty sure" ones.
- Gene pathways are not just complex in terms of size, they are also non-linear. This is not a computer program, this is a horrible biological mess where biological components constantly and probabilistically emit chemicals.
I don't even think that the pathway I gave you is super representative - it doesn't feature the ridiculous non-linearity and uncertainty that many pathways at the bleeding edge have. They can get much worse.
People are not burying their heads in the sand. They are trying with all their might to dig up from bedrock and reach the sky.
> Bioinformatics abuses data science at levels comparable to Google and Facebook.
To give people a sense of this, it is not unheard of for large scale bioinformatics platforms to set off alarms and/or zone-level capacity issues with the large cloud providers.
> Are you really so arrogant to think that other people are too lazy and stupid to think of your brilliant idea and that you are simply more intelligent than everybody else? Do you really think people haven't been trying?
I find it hard to believe I'm reading this sentiment on HN. Do you realize nearly every disruptive idea has to do the "arrogant" thing you are speaking of?
Big companies go out of business all the time, industries disappear or wane all the time, and big companies/industries with lots of smart people do stupid stuff on the regular. Characterizing the desire to do something other than the status quo as "arrogance" is just the bottom of the barrel. I'm glad Semmelweis didn't think the way you do.
I didn't understand their comment to mean that at all. It's not arrogant to think there might be a better way. It probably is arrogant to think that the better way is actually really simple and easy to implement.
Arrogance definitely can play a big role in the success of many startup entrepreneurs. Because they think the answer is simple, their arrogance shields them from doubt and keeps them trying. So they go down the road and find out the solution is actually quite complicated, but a small percentage succeed in accomplishing the goal.
> Are you really so arrogant to think that other people are too lazy and stupid to think of your brilliant idea and that you are simply more intelligent than everybody else?
Not OP, but yes, I do.
I've had some run-ins with the health system. At least the parts that I've seen are worse than the dark ages. Endocrinologists especially have absolutely no clue what they are doing.
The ones I met don't even have superficial knowledge about their full-time job which they've performed for ~20 years.
And yes, this sounds arrogant. I've triple-checked whether I'm just tripping. Their knowledge is not simply outdated, but was never correct to begin with.
I think it's a cultural problem which shows in a lot of areas. Medicine doesn't value human life as much as, for example, air transport does.
You're talking about praxis, we're talking about research. I agree the praxis can be pretty bad, but in regards to the above commenter's remarks on why people don't just simply collect data and apply it, medical professionals simply cannot try out new treatments the way they are suggesting.
To go further, a large issue/distraction in biomedical research has been Big Tech types coming in assuming that the roadblock all along has been a lack of smart computational people in the process. I will not name names, but I have seen so many instances of personal tech heroes coming in & claiming the underlying problem was that some of the brightest computational people I've ever encountered simply didn't know how to computer.
Nothing could be further from the truth. People coming in to "disrupt" only add noise. Eventually those people either understand this and put the effort in to understand the domain or they wander off.
While it’s absolutely true that naive non-experts can end up adding a lot more heat than light, I’ve also seen non-expert people come into stagnant domains and absolutely completely transform and improve upon the state of the art.
I am skeptical of the “more data is bad” meme of screening hesitancy. It cannot be scientifically true in the strictest sense, and to the degree it’s an accurate assertion, it really seems to reveal an unscientificness to how screening data is used today in practice rather than that in principle more data is bad.
In a perfect world you're right. What it's getting at is that screening itself isn't very good in the grand scheme of things, and thus the negatives of extra screening can be argued to outweigh its benefits. Whether or not that's true, well, that's another matter.
The issue in this subthread was the notion that the only thing between the current state of affairs and high quality screening is a bit of disruption. The problem is hard, smart people are working on it, more smart people are always welcome, it'll still take a while.
It's that 1) sometimes collecting the data itself is harmful at scale, e.g. mammography can cause breast cancer, or cause it to spread, and 2) the patient's actions as a result of the data can, and at sufficient scale does, cause further harm.
> What do you think the entire medical, pharmaceutical, bioinformatic, etc. industries have been doing for decades?
Well, let's remember that the AlphaFold team at Google solved the protein folding problem with a relatively small team in a relatively small number of years after extremely large, well funded companies whose primary business was drug development failed to do so for decades.
So yeah, it's been demonstrated to be possible that the current leaders in a field might be significantly less capable than a different team.
There's confounding here that you're ignoring. For reference, I'm a machine learning research scientist who started in bioinformatics, initially lured by the possibility of a machine learning solution to the protein folding problem.
Google's research arm has made leaps and bounds in a particular field (deep learning) and then managed to apply it successfully to a very, very hard problem (protein folding). That other companies failed to adapt Google's successes in deep learning faster than Google is not surprising at all to me.
One might argue that the impact of academic-big pharma collaboration (in the form of funding for research projects related to CASP) is what enabled a company like Google, with no independent desire to collect the massive amounts of wetlab data required to evaluate or develop a tool like AlphaFold, to even participate.
More importantly, AlphaFold hasn't really solved the protein folding problem well enough for drug development. So, the entire debate might be moot.
> let's remember that the AlphaFold team at Google solved the protein folding problem with a relatively small team in a relatively small number of years after extremely large, well funded companies whose primary business was drug development failed to do so for decades
Did they though? They did extremely well at CASP14, and much better than competing groups. But does this solve protein folding? DeepMind's marketing department would have you think so, but those of us who work in the field know that this is not the case.
Furthermore, does protein folding solve the relevant problems of drug design? It solves the forward problem: given an amino acid sequence, predict its 3D structure. But for drug design we need the inverse problem: given a specified structure, predict an amino acid sequence that produces that structure.
AlphaFold was developed at DeepMind, not Google; they didn't solve the protein folding problem; and it's not surprising that drug discovery companies didn't reach a similar level of accuracy first.
Note that DeepMind drafted off the work of a protein folding guy from academia who had been doing this for decades.
> Are you really so arrogant to think that other people are too lazy and stupid to think of your brilliant idea and that you are simply more intelligent than everybody else?
Here's a report of a person whose wife had seasonal affective disorder. He
- deduced that a powerful enough lightbox should be able to cure SAD
- didn't find any examples of such treatments in literature
- spent $600 to build "LUMINATOR"
- "And as of early 2017, with two winters come and gone, Brienne seems to no longer have crippling SAD—though it took a lot of light bulbs, including light bulbs in her bedroom that had to be timed to go on at 7:30am before she woke up, to sustain the apparent cure."
We have done this for e.g. breast cancer, and that is exactly why people are cautious now, because we have real data on the harm overtesting can cause. That doesn't mean it should never be done, but that it needs to be approached with care.
You are assigning beliefs to me that I do not hold. As I said, we would need a solid randomized controlled clinical trial to determine whether any particular intervention actually helps patients live longer and/or better. That's the only way we know. So go ahead and collect data, do an RCT, and let us know how it goes.
The problem with this and the NNT gatekeeping is that personalized medicine will always require stepping away from massive double-blind, randomized, placebo-controlled, multiple-meta-study levels of evidence. From a patient perspective, it feels a lot like economics largely determines medical outcomes.
>Many times the patient will push them for further tests and treatment (or the doctors will advocate for it to ease the mind of the patient) which leads to potential harm from unnecessary treatments.
>People will ask to be treated anyway, and iatrogenics will rear its ugly head.
I just want to say that you are in agreement with my position here.
Your point about psychological problems is interesting, but as long as people know what they’re signing up for, it’s okay. Some people aren’t prone to anxiety. Why should they pay the price because other people are? Part of my original point was that implicit in your argument is that people capable of handling it should be denied something because more people incapable of handling it can’t be stopped from hurting themselves. I think many people do not agree with that moral reasoning, so medical people hide it behind objective looking statistics.
And as for cost, I don’t take any arguments from medical people about cost seriously. The reason why costs are so high is because doctors and the medical field as a whole run a massive protectionist racket to keep the supply of medical professionals low.
How fresh of them to say that because they’ve limited the supply of medical resources to enrich themselves, patients must pay by having less access to care.
I think once you become a doctor, you quickly realize that a large share of the population does not have the temperament to deal with 'maybe' bad news. Many are not in the 135-IQ 1% of the population with a low-anxiety personality who accept that they will eventually grow old and die, which happens to coincide with many engineering types on HN.
Lol, I would argue that engineers, many of whose primary job function is to imagine worst-case scenarios and engineer around them to prevent accidents, data loss, etc., are the exact type of people who are prone to hypochondria. It's not about growing old and dying, it's about ignoring a stomach ache for a couple of months, then being told you have terminal cancer, and then living with the regret of "if only I got it checked out earlier".
In the engineering world, you are almost always rewarded for being extra safe and testing, fixing, and investigating anything that might seem a little off. If you do blood tests, MRIs, and cancer screens for every mole, cough, and stomach pain, you will go insane and develop hypochondria.
I guess it depends on the mental counter-response those engineers develop. Because you have to think about every possibility, you build a counter-temperament that doesn't implode thinking about every possibility.
Some might go the other way, and their anxious personality might help them think about everything, but also make them a stress case.
I think the first type tends to last longer in the industry, at least for my coworkers.
you're conflating "pushing for further tests" with "pushing for further treatment.' IMO, everyone should be entitled to as much data accumulation as they want. Insurance companies can set reasonable thresholds, but if I want to pay out of pocket to get bloodwork done, that should be easy to get.
Whether or not a doctor prescribes/advises a certain treatment is still firmly in their domain. The amount of biomarkers/biological evidence a patient has shouldn't sway a doctor's decision to alter a treatment plan. As a reasonably smart non-medical professional, I would rather have more data than less data, and it's paternalistic and a little condescending to say "no, you shouldn't actually take diagnostic tests because 'having that information might freak you out'"
I wonder if the psychological stress you describe is a result of societal expectations, which are themselves a result of what has been done historically. I would think that to most people you are either healthy or you have a problem, because that is often how healthcare is framed. In the same way your doctor could say "you are a little overweight, let's keep an eye on it to make sure it doesn't cause problems down the road", a similar perspective could be taken toward what people commonly associate with more serious issues. You can look at what the correlations say for health policy, or you can consider the actual reasons people view healthcare this way in the first place.

It's normalized for people not to go to a doctor unless they have a problem; it's normalized not to take regular preventative measures and screenings. In addition, there are financial barriers. While insurance companies may benefit in the long run from more regular, personalized, preventative care, they are likely very resistant to paying for it. Given the costs of healthcare, only well-off people would consider this approach, as most of their costs may be out of pocket because they don't have a problem to point to for these kinds of screenings. Think of getting blood work done every 6 months just to make sure everything is doing okay, hormones, etc.

If it were normalized, and people were less afraid of the doctor, of needles, and of the costs attached to being unhealthy, these sources of psychological stress would likely be non-existent.
I have no expertise here but I worry that these trials are studying the decision-making processes around how people decide to use the information from these tests, rather than the tests themselves.
How people use information varies, so the data might not have external validity - it’s culture specific, and cultures differ. Cultures can also change through accumulated experience.
So do the people studying these things try again with different and possibly better decision-making, or do they conclude that the test itself is no good?
That is a very important point. This is why multiple studies are needed across time, location, culture, etc. The more data a doctor has, the better decisions they can make for their specific patients.
That statement alone shows blatant ignorance of basic properties of the human psyche which makes reading the rest of your long reply rather pointless, as thought-through as it might have been from your perspective.
Humans are not the perfectly rational machines you seem to make them out to be. You need to deal with people in the real world, not some dream utopia that does not exist.
In my particular field of biomedical research, there has recently been a push for "Diagnostic Stewardship" because more information has demonstrably been harmful to patient well-being.
I believe that's what the data says, but I have a really hard time reconciling it with common sense. More information is strictly better than less information, because you can always choose to ignore the extra information.
I would say most medical decisions are made either due to statistics, or due to experience. What treatment has the highest chance of making the patient better, extending their life, or giving the best quality of life? You'd "just" have to adjust the tables for the new test.
I mean, in a contrived example, you could have the lab technician themself look up the numerical result (xyz > 100, abc < 10, whatever) in some table, and then there would be the rule to throw the result in the bin and report "don't treat" because this produces the best outcomes. I don't need to see all that extra diagnostic information myself, but I want my doctors to use it conditionally to improve my treatment if possible.
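To spell out that contrived example, here is a toy sketch (all thresholds and outcome numbers invented) of a decision rule that records the raw marker value but reports only the action with the better expected outcome:

    # Toy decision rule: the raw marker value is kept in the record, but
    # the report depends on which action has the better expected outcome.
    def recommendation(marker_value, outcome_stats):
        band = "high" if marker_value > 100 else "low"
        treat = outcome_stats[(band, "treat")]
        watch = outcome_stats[(band, "dont_treat")]
        # If watchful waiting wins statistically, the report is simply
        # "don't treat", even though the elevated value stays on file.
        return "treat" if treat > watch else "don't treat"

    # Hypothetical expected quality-adjusted life years per (band, action):
    stats = {
        ("high", "treat"): 9.1, ("high", "dont_treat"): 9.6,
        ("low", "treat"): 8.0, ("low", "dont_treat"): 9.8,
    }
    print(recommendation(120, stats))  # -> "don't treat"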
"I believe that's what the data says, but I have a really hard time reconciling it with common sense."
Can you see why some people might be hesitant about basing medical treatment on "That's what the data says, but it doesn't conform to my priors, so it's probably wrong"?
People are terrible at ignoring information. Clinicians are people. We know this.
> you can always choose to ignore the extra information.
Can you?
That is to say: you can try, but I don't think most people are very good at consciously choosing not to think or worry about something that they know, if that's even possible.
It’s not enough to say it’s harmful. How is it harmful? A blood test, for example, cannot be inherently harmful. (Okay, no more harmful than drawing blood). So it must be about what people then decide to do based on that test. That’s what my post is about.
Do enough of them, and even basic blood tests cause harm. Infections happen. Rupturing veins happens.
If your tests are for conditions that are rare enough, and where early detection does little enough to improve outcomes, even a tiny risk like that becomes a problem.
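A rough expected-value sketch of that point (every number below is invented for illustration): with a rare enough condition and a small per-test risk, the arithmetic can genuinely flip against screening.

    # Hypothetical numbers only: harm vs. benefit of mass testing.
    tested = 100_000_000
    per_test_harm = 1e-5          # serious complication per blood draw
    prevalence = 1e-4             # how rare the screened condition is
    early_benefit = 0.3           # fraction of cases where early detection helps

    harmed = tested * per_test_harm               # 1,000 people harmed
    helped = tested * prevalence * early_benefit  # 3,000 people helped
    print(harmed, helped)  # shrink prevalence or benefit and the balance flips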
I should say, if you know ex ante that certain results are not worth acting on, then when you actually get those results ex post, you should ignore them just the same. So actually running a test cannot formally be the problem.
That is not how medicine works, especially with cancer. Very often we simply don't know whether or not to "ignore" something. But the harms of knowing are not zero, and therefore screening can itself be a net negative. That's why a well powered RCT is required to say whether it benefits patients or not.
We have a more detailed exchange going on elsewhere in this thread, with my latest comment, addressing your more detailed points, here: https://news.ycombinator.com/item?id=27630947
Isn't the only way to learn what things we can ignore and what things we can't ignore to do a lot more testing? Is there a better way to learn that? It seems like having a lot more data from tests is the kind of thing that would have some short term harm but massive long term benefits.
> if you take dumb patients as the problem, you are willing to hurt people with good decision making ability (who would heed their good doctor’s advice to leave the possible ailment untreated) so you can protect people with bad decision making.
It seems like we've made this decision with COVID-19. Dr. Fauci has been canonized, but we know that he's been intentionally misleading the public through misinformation (starting with telling us that masks aren't effective in order to preserve the supply for medical providers, and later mis-stating herd immunity numbers to manipulate people into the behavior he wanted).
“ If finding something that otherwise carries no symptoms is best left untreated, then the fact that you found it should make no difference to the decision.”
This is easy to say but hard to do; you just need to imagine being in that situation yourself.
If I can step in with a personal anecdote: I was diagnosed in my late 20s with papillary thyroid cancer.
PTC is a very survivable cancer, with a near-100% survival rate (death usually only occurs in rare cases where the disease is diagnosed very late, the progression is atypical, or there are comorbidities at play). It is very easy to screen for and diagnose: a neck ultrasound identifies thyroid nodules, and if the nodules look suspicious they are biopsied in a 20 minute procedure performed under local anaesthetic.
Treating papillary thyroid cancer is also relatively straightforward, as far as cancer goes: depending on the size of the lesion and the features, either half or all of the thyroid gland is removed surgically. In cases where the whole gland is removed (which is the majority), the patient is given a course or two of radioiodine therapy to nuke anything left over, and in many/most cases, it's a done deal.
The vast majority of thyroid cancer survivors have to take thyroid replacement hormones (all patients who had the whole gland removed have to do this, and about half of patients who only had half the gland removed still need a small dose to keep up). I'm relatively lucky: the oral hormone seems to work just fine for me. I take a pill every morning and then go about my day. I will need to do this for the rest of my life, but hey, that's life.
However, there's a substantial minority of patients who aren't so lucky: even with oral hormone replacement, they suffer from long-term sequelae including weight gain, low energy, brain fog, hair loss, and other hypothyroid symptoms.
And there lies the crux of the issue: it turns out that even with increased diagnostic capability (thanks to the ubiquity of relatively cheap ultrasound exams in clinical practice), the number of people dying from thyroid cancer has stayed pretty much flat for decades (mostly due to more aggressive types than papillary, such as medullary or anaplastic). Yet, we take out a lot more thyroids now.
The reason this happens is pretty simple: if you see something, you have to do something about it. So you're removing thyroid glands from people where the cancer might never have actually grown big enough to be a problem, and then subjecting those people to a lifetime of hormone replacement therapy. Something like 10% of all cadavers at autopsy have thyroid cancer: it's a cancer that very commonly develops, but only becomes a concern in a few patients. As of now, we don't have a good way to differentiate between "thyroid cancer that's a problem" and "thyroid cancer that'll be fine."
The clinical guidelines have changed a bit in recent years: if the cancerous nodule is really small, they'll now do "watchful waiting" and monitor the nodule to see if it grows. But you're still subjecting a patient to potentially many years of worry and regular testing. And good luck getting life insurance if you have a microcarcinoma! Yeah, it's highly unlikely to kill you (especially when monitored), but try telling an insurer that.
The medical profession is well aware of these concerns. That's why they avoid testing for thyroid cancer unless there are symptoms, such as thyroid hormone disturbances or a lump in the neck. If you were to make a thyroid ultrasound a regular test, you'd quickly overwhelm the system with cancer patients who probably never needed to be treated in the first place, and who may now have to get their thyroids removed and be dependent on pills for the rest of their lives.
But this really just illustrates a lack of knowledge on our part. If we made thyroid screening a regular thing and invested $10B/year into diagnostics, analysis, and study, I guarantee you that in 5 years or less we would have the most efficient, effective system for treatment, for determining which microcarcinomas are bad, which ones to keep an eye on, etc.
There simply isn't a profit motive right now, and there aren't enough resources, so unprofitable, minor things like thyroid cancers and other small, mostly non-fatal things fall by the wayside. If we could massively increase the resources and time spent on solving health issues, we'd have a lot better solutions. Only a certain number of cancer researchers, oncologists, and clinical pharmacologists can profitably exist. Lots of diseases will never be cured because too few people are affected by them. Until we decouple medical progress from profit, there's only a certain amount of progress we can make. Unfortunately, it seems like tying profit to medicine is the most efficient system we have, so it may be centuries till we get there.
My thinking is that the tool was not the problem; it was the decision to do something. Given what you say about how common and insignificant it often is, the right choice in a lot of cases is probably to monitor, seeing how much it grows over time. This isn't a cost-effective solution if the ultrasounds are expensive or not high-resolution enough, or if insurance won't allow you to take this strategy. If there are no benchmarks or thresholds for how big the tumor must be, or how fast it must grow, before cutting it out should be considered, then you are right, it would likely overwhelm the system with unnecessary treatments. The monitoring doesn't feel like the problem to me; it's the decisions/recommendations of the doctor, often made on the basis of professional wisdom, what research was financed, and the financial system that encompasses healthcare.
If we optimistically assume the claimed 0.5% false positive rate is accurate, and all of the US got tested annually, that's 1.64M false positives per year. Cancer.gov is telling me that approximately 1.8M USians were expected to get cancer last year. That's a positive predictive value of 52%. That still seems highly informative, to me; much higher than the PPV for mammography according to this (admittedly old) study: [1].
Assuming the 0.5% FP rate holds (again, I know that's optimistic), would you still regard universal testing with this method to be harmful?
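For concreteness, here is that arithmetic as a sketch, using the assumptions stated above. One caveat: the 52% figure implicitly assumes every cancer is caught; plugging in the article's 51.5% sensitivity drops the PPV to roughly 36%, which is still enormously higher than the ~0.5% base rate.

    # Back-of-the-envelope PPV under the assumptions stated above.
    population = 328_000_000
    annual_cancers = 1_800_000
    fp_rate = 0.005        # claimed false positive rate
    sensitivity = 0.515    # the article's reported detection rate

    true_pos = annual_cancers * sensitivity              # ~0.93M
    false_pos = (population - annual_cancers) * fp_rate  # ~1.63M

    print(f"PPV = {true_pos / (true_pos + false_pos):.0%}")  # ~36%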
IIRC, the actual article says they're going to do a large trial, with 150k people, and someone in the article expresses skepticism that the estimated FP rate of 0.5% is accurate, because so far it was tested in people where there's some evidence of cancer.
Is that actually good, though? It feels to me that there's a big difference between explaining the implications of a positive result to someone who understands Bayes' theorem and someone who doesn't.
Another aspect of cancer screening is that detecting cancer earlier can improve key statistics like 5 year survival without affecting the actual disease in any way. Which can make screening sound more effective than it is.
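A tiny worked example of that lead-time effect (the ages are invented): moving the diagnosis earlier changes the 5-year survival statistic even when the date of death is identical.

    # Lead-time bias: earlier detection moves the diagnosis date,
    # not the death date, yet "5-year survival" improves.
    diagnosis_late, diagnosis_early, death = 70, 66, 73

    survival_late = death - diagnosis_late    # 3 years -> dies within 5 years
    survival_early = death - diagnosis_early  # 7 years -> "survives 5 years"

    print(survival_late >= 5, survival_early >= 5)  # False True, same lifespan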
Good question. The proof is in the pudding. Run a well powered RCT and if the intervention helps patients live longer and/or better, then we should consider using it! It will be hard to design a study that includes all possible detected cancers, but there are ways around this - perhaps we begin by studying those cancers that are most common and lethal, and go from there.
Utterly wrong. You are conflating current technology like MRIs and CT scans, which only detect lumps, with blood tests that detect CANCER.
Yes, if every lump were treated and excised then it would be problematic. But CANCER is different. The only cancer that you might be able to leave alone is prostate because it grows so slowly. Everything else is a risk.
And if we can treat cancer at early stage 1, then maybe people won’t be as afraid of it because it has such a high rate of cure. We don’t know that until we do it.
You say "utterly wrong" and yet you clearly have no idea what you're talking about and have never even approached the field of health policy. Screening has to PROVE that it makes people live longer or better, just like any other intervention. It is not the case that screening is automatically good, otherwise we would be giving 20 year olds colonoscopies. Just because you find something, doesn't mean you've helped the patient, as anyone with a modicum of understanding in this field knows.
Your replies to folks in this thread contain the same hubris as the content of your message, which in turn is consistent with the hubris I've encountered in the medical community over the past five years.
My wife went to urgent care twice in three months because of a pain in her side. She was turned away with cough medicine. She finally went to the ER, where a CT scan revealed massive tumors in her abdomen that had gone undetected for likely years. Genetic and semi-annual CA125 screening could yield quite a few false positives, but combined with her physical symptoms she may have had more cause to push back on an incomplete conclusion, and possibly could have had a different outcome. She underwent nearly $2M in medical procedures over the course of 26 months and died at 45 years old last year.
A few years prior to that I was spring cleaning one day and found a glucose test kit at the house. I had the whole family test their blood glucose. My youngest was 240mg/dL. We waited a day and tested again, same thing. We took her into the hospital and they actually admitted her for two days because they had no idea what to do with a child presenting with Type 1 diabetes that hadn't gone into full DKA. That's harmful, gross, and embarrassing. And this is a major children's hospital that was recently in the top ten in the nation for endocrinology.
In both of these situations, preventative screening could have had or did have positive outcomes. I don't disagree with the effect that has been observed, but the conclusion being drawn from it is revolting to me. People should be permitted to make their own decisions about their level of knowledge of the state of their body. We only get one trip as far as we know, and I just find it unacceptable that people are willing to categorically deny diagnostic technology or bemoan its development because they don't know how to support people trying to navigate the information it brings.
Exactly. Also my sincerest condolences to you and your family.
The level of arrogance in these responses, parroting what they’ve read in a paragraph online, approaches what we saw early in the pandemic with masks.
“Masks don’t work! Fauci said so!” And the level of confidence with which they said it was shocking, along with the self-righteousness.
Yes, maybe people can’t put on N95 masks properly... but maybe we can teach them? Is that so mind blowing? We teach people how to wash hands for 20 seconds; is it so hard to believe we can teach people to wear masks properly? Maybe make masks that are easier to put on?
It’s sad how people with no vision are Karen’ing people to not think ahead or think of the future. It’s scary that it’s happening on hacker news.
We don’t give colonoscopies to 20 year olds because it’s an expensive, invasive test with a low rate of cancer. We don’t give mammograms to under-40 year olds these days for the same reasons, and because it’s notoriously inaccurate.
We give blood glucose tests and cholesterol tests to everyone every year at the physical because they are cheap and easy to administer, even though both may take decades to have any effect and have no correlation to ultimate death. It’s about the trade-off of cost and convenience.
If you detect pancreatic or colon cancer in a 20 year old patient via a $1 blood test, that is immediately actionable. Cancer is ALWAYS actionable, unless you’re talking about prostate cancer, which grows slowly. And even then, you would want to treat it in a 20 year old but maybe not in a 70 year old.
And we’ve never had the ability to detect stage 1 cancer of the deadliest types of cancer. Imagine the new treatments that might save lives if we could.
Honestly, the number of people here who think they know better and citing unrelated research is bordering on anti-maskers parroting the surgeon general saying masks don’t work when they do.
> It correctly identified when cancer was present in 51.5% of cases, across all stages of the disease, and wrongly detected cancer in only 0.5% of cases.
If all 300 million Americans get blood tested each year, that is 1 and a half MILLION people who are falsely told they have CANCER. One and a half MILLION people whose life gets turned upside down, WRONGLY, due to a blood test -- at least, until they take a second one. Or a third one. Or maybe they get unlucky, never learn they don't have cancer, and mess up their life treating a disease they don't have.
Now, OBVIOUSLY not all 300 million Americans are going to get tested each year. But you are completely ignoring the ethical concerns surrounding telling literally 0.5% of those who don't have cancer that they have cancer, and the non-insignificant proportion of people who will never learn they don't have cancer (we NEED to consider even the "minuscule" probabilities at this scale), and that makes you seem like somebody who "think[s] they know better".
> One and a half MILLION people whose life gets turned upside down, WRONGLY, due to a blood test
Culture adapts. Getting a cancer diagnosis today is a huge deal because testing is infrequent, usually in response to a problem, and the diagnosis tends to be late and accurate. If 1.5 million healthy people get a false positive every year, the general response will be stress until a second test confirms a false positive.
The lack of public trust in health policy doctrine, and the support for start-ups that can operate outside those channels, comes in large part from this type of thinking.
> Getting a cancer diagnosis today is a huge deal because testing is infrequent, usually in response to a problem, and the diagnosis tends to be late and accurate.
Getting a cancer diagnosis is a "huge deal" because getting cancer is one of the worst experiences a human can go through.
Getting cancer carries the weight of imminent death, obviously. But some cancers can be cured with high survival rate, so what's the big deal? The deal is that these cures are often brutal chemotherapy or radiotherapy or whatever treatments that destroy your body, your sleep, your appetite, your ability to go to the bathroom yourself, your ability to do chores by yourself, your ability to do anything a normal human being can do. They DESTROY you.
THAT'S the "huge deal". NOT the frequency. Cancer RUINS YOUR LIFE for the period you get treated, IF you survive the treatment at all, let ALONE the cancer. And you are proposing we tell sub-1.5 million people EACH YEAR that they might have to go through that, while STILL ignoring the proportion of the population that will never learn they have a false positive because they got unlucky (which, AGAIN, we need to consider, since even miniscule probabilities matter at this scale.)
So until the day we figure out a way to make getting rid of cancer as easy as popping a pill (we're getting closer thanks to gene therapy, but patients may still receive the destructive treatments in the meantime), culture will not "adapt", it will remain a "huge deal", and we will keep "this type of thinking".
>Getting a cancer diagnosis is a "huge deal" because getting cancer is one of the worst experiences a human can go through.
I’ve never been given a cancer diagnosis, but I’d imagine it’s much, much worse to be told you have stage 4 cancer versus stage 1. As the other person stated, the culture would adapt and people would learn to not immediately sell their house and go on a Vegas bender just because a yearly preventative test said they might, possibly, perhaps have stage 1 cancer. If you are one of the people with a false positive, you schedule a follow-up and move on accordingly.
Your position is anti-progress just for the sake of being against accidentally scaring a few people. Indeed, we should let more people develop cancer and discover it at a later stage due to a small amount of false positives. Not scaring a small amount of people is more important than revolutionizing cancer screenings.
My impression is that yours is the consensus view in the medical community. But I think that what all these people are telling you is that they'd much rather have a much higher chance of a false positive stage 1 diagnosis than even a much lower chance of false negative diagnosis until they're at stage 4. And any positive result from this would obviously be couched in a doctor consult saying "if you have cancer, which this test doesn't definitively show, it's still very early, we're just going to keep an eye on it, we didn't get any confirmation from a chest x-ray or CT", etc
And having a memento mori can be a positive thing.
Heart disease is a huge deal. It ruins your life. (Maybe not in all caps.) Nevertheless, we regularly test for it, including with techniques with high rates of false positives. Few panic because almost everyone knows someone who got a false positive and didn't promptly keel over and die.
If healthy adults are screened for cancer, there will be cultural memory of people who got a false positive. The response to a positive test result won't be chemo. It will be further testing, if non-invasive, or suggestions of lifestyle changes and increased monitoring of the suspected system.
People aren't too stupid to understand an early test with a high false positive rate. People with family histories of cancer don't wake up every morning screaming. Some may. But barring everyone because some people will overreact is why orthodox health policy is losing public trust.
I don't know enough about heart disease treatment to comment on the degree to which it ruins your life.
People already get further testing. People already get second opinions. In the time you are changing your lifestyle (a good option for helping treatment, I agree) and increasing monitoring you could already be progressing beyond treatability.
You can argue about whether it's alright to tell sub-1.5 million people each year who don't have cancer that they have cancer (I don't think it is, but whatever). But again, you can't disregard the non-insignificant number of people who never learn their result was a false positive.
edit: Actually, not "whatever". You probably shouldn't tell sub-1.5 million people each year who don't have cancer that they have cancer.
Hey, I can see this is emotional for you, but have you considered a world in which it is not telling this vast body of people "you definitely have cancer"? Something more like "we detected XXppm of [marker] which indicates we might need to do some more testing, can we talk about [risk factors]?".
It just doesn't need to be as dire as what you are projecting.
> In the time you are changing your lifestyle (a good option for helping treatment, I agree) and increasing monitoring you could already be progressing beyond treatability.
That's actually a very good argument for early testing.
And needlessly scaring some people into eating better and exercising more is about as bad as not screening them at all to avoid needless anxiety.
Most stage 1 cancers can be treated successfully with surgery alone; it's getting a late diagnosis that forces you to go through a brutal treatment. If all cancers were detected at stage 1, cancer would not be seen as a "huge deal" anymore.
They will not be told they have cancer. They will be told they probably should check something out. Everybody already knows they might have cancer, having a negative blood test would be a huge relief.
Maybe I'm misunderstanding this, but a 0.5% false positive rate and a 51% detection rate means that you still catch about half of true cases, compared with the current screening rate for these cancers, which is 0%. And then you can follow up with a test that is more accurate. Everything has an error rate, even pregnancy tests, and nobody argues that we should stop using pregnancy tests.
Are you comparing that against the MILLIONS who get a correct diagnosis? I expect that the quality-adjusted life year total over all tested is higher with the test than without. Wouldn't that be worth it?
I think this leans more towards the "the needs of the many outweigh the needs of the few" and less towards "tyranny of the majority".
Do you think that technology doesn’t improve over time? You are suffering from a fixed mindset, where you think people or technology is as good as it gets now and never improve.
You should read up more on growth mindsets. People and technology aren’t fixed. They change and improve over time.
If people had your attitude about AIDS we would have let all AIDS patients die because “oh well nothing we can do.”
Luckily those with growth mindsets and optimism didn’t listen to people like you and now AIDS patients take a single pill a day.
The same could happen with cancer too, but not if people like you are in charge.
> If people had your attitude about AIDS we would have let all AIDS patients die because “oh well nothing we can do.”
No, if people had my attitude about AIDS they would say "Jesus Christ, don't tell 1 and half million people each year who don't have AIDS that they might have AIDS." I am talking about diagnosis, not treatment.
You are clearly confusing treatment with diagnosis/screening and should consider walking away from the conversation until you get a better understanding of the ideas being discussed here. Right now you're adding a lot of unnecessary noise with bad arguments.
Early diagnosis means new ways of treating cancers like liver and pancreatic cancer. You don’t seem to understand this. Right now there are practically no treatments for either, because they’re detected so late. Maybe after 20 years of early diagnosis we could treat pancreatic cancer the way we do others.
Let me guess. You were telling people in early 2020 that N95 masks don’t work.
But this isn't the argument. The argument is about all the false positives you get when you mass-screen everyone. All those people then need to be further examined, overwhelming the medical system and causing distress for the individuals (who are, remember, perfectly well).
I agree if we had a test that was 100% accurate (literally never gave a false result either way), only detected dangerous cancers, and cost $1 then that would be a game-changer, but the screening and tests so far are not that.
For every genuinely smart person on HN, there’s a handful of Google warriors who hardly know what they are talking about. I guess that’s not unique to HN, but it can be aggravating just how many commenters can be so “know-it-all” with their comments.
That this test is approved implies that, to some extent at least, it makes people live longer. There are a number of cancers that are easily treatable at stage one, but they have no symptoms then, and so they often kill, because by the time they are discovered it is too late.
False positives are a concern, but one that is manageable. People should get regular checkups, if something comes up we just do more testing as needed.
Though that makes this discussion completely worthless - without trial results we have no idea if it is a useful test. Once we have those results, we will know something about how useful or useless it might be.
Cells turn cancerous all the time and usually the body eliminates them without issue. This blood test is still going to hit all sorts of "false" positives, where it detects some cell that has gone cancerous but would have been eliminated anyway.
I'm far from an expert, but I don't think this would be a large concern. My understanding of CTCs (circulating tumor cells) is that they are shed by a growing tumor. A random weird cell doesn't seem likely to be turned up in a test like this. You need some quantity to set off the proverbial alarm bells.
> like MRIs and CT scans that only detect lumps, to blood tests that detect CANCER.
You mean like the PSA test? It's pretty bad, in the scale of these things. It's used because there isn't a better screen, not because it is more specific.
I weigh myself every day. Some people tell me not to do that, but it's honestly pretty good at keeping me in check, and because I weigh myself every day I know that sometimes I just get super bloated, put on four pounds of water, and it will be gone in a couple of days. When you get more data, you learn to adjust how you interpret it and make better decisions.
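A concrete version of "adjusting how you interpret it" (just a sketch; the weights below are made up): smooth the daily readings so a water-weight spike barely moves the trend.

    # Exponential moving average over daily weigh-ins. Data invented.
    def ema(values, alpha=0.1):
        smoothed, current = [], values[0]
        for v in values:
            current = alpha * v + (1 - alpha) * current
            smoothed.append(current)
        return smoothed

    daily = [180.0, 180.4, 184.1, 183.6, 180.2, 179.9, 179.5]  # mid-week bloat
    print([round(x, 1) for x in ema(daily)])  # the trend barely notices the spike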
Likewise, if getting a false result 50% of the time becomes common (this assumes we never improve the tests) then people will know to adjust their priors. After all - I've had to go to specialists a few times for extra tests. I wouldn't say we ban those.
I meticulously tracked every calorie I ate for several months. Now I know how much of different types of food is how many calories, how many calories I eat per day, and most importantly what my different hunger levels mean in terms of the number of calories ingested. Combined with my weight tracking, which tells me my set weight / weight variance over a day or several days / food intake to gain or lose a certain amount of weight, I can easily control portions to hit my weight goals without thinking about it.
I would imagine more health data would allow me to optimize in this fashion as well. If I can correlate health markers to my lifestyle often and directly, I can make better and more informed choices. Advocating against easy access to health data because some people can misuse it is the same kind of nanny state thinking that says encryption shouldn't be available to the masses because criminal enterprises can use it to hide their activities.
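For instance, here is the kind of estimate that this tracking enables, sketched with the common (and rough) 3,500 kcal-per-pound heuristic; all inputs are invented:

    # Estimate maintenance calories from logged intake and weight change,
    # using the rough 3500 kcal/lb rule of thumb. Numbers are invented.
    def maintenance_estimate(avg_daily_intake_kcal, weight_change_lb, days):
        surplus_per_day = weight_change_lb * 3500 / days
        return avg_daily_intake_kcal - surplus_per_day

    # Ate ~2600 kcal/day for 30 days and gained 1 lb:
    print(round(maintenance_estimate(2600, 1.0, 30)))  # ~2483 kcal/day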
While I agree with the point that you're making around employing caution against iatrogenics (with colonoscopies being a very good case study that one should make sure to understand before forming an opinion here, as you referenced in another comment), I think you're overconfident in your prediction that this hypothetical scenario would be bad.
I think your claim (based on sibling posts) is that in the current medical system, if we just added more screening, we'd not necessarily get net benefits. But I think that ignores the fact that, if we had cheap and high-resolution screening, we could fundamentally restructure many aspects of medical care. The BMJ article you linked in a sibling[1] notes that cancer screening may reduce cancer mortality but increase all-cause mortality. That's an unexpected and problematic result of getting referred to a cancer specialist that might not have (or be incentivized to care about) a holistic picture of health when you screen positively for cancer. But if we had higher-resolution data, and conceptualized medicine as primarily preventive instead of curative, then it seems likely to me that overall mortality would be your target, we'd have richer data to be able to track that endpoint, and so we'd be more likely to catch the cases where an intervention caused unexpected harms (because we'd be tracking more indicators).
In other words, the problem you're observing is that adding a bit more data to the current system can produce negative outcomes. But that problem would be fixed by adding even more data. (With the remaining question being, how much data would we need to add to reach the "net positive" regime?)
I think you're arguing against a change that looks like a harm from the perspective of a local optimum that we're currently working towards, without considering the dramatic paradigm shift into another higher-utility region that would have been brought along by this sort of technology.
In summary, I am much more optimistic that if we had orders of magnitude more data, we'd make better decisions, not worse. But I agree with your caution that it's not as easy as it seems.
I think you're casting too wide of a net there. Just because you've detected something early doesn't mean you automatically need aggressive interventions like chemotherapy. The impact might just end up being a line in your medical record for the family physician to keep an eye on during yearly routine exams. Breast cancer awareness programs are a good example of early screening programs not being an exercise in mass hysteria. Some early symptoms for eye conditions only require periodic observation up until it actually becomes a problem. Etc.
Perhaps it is true - in the US, anyways - that there's a tendency to overly prescribe more aggressive interventions (read: more expensive ones), but my understanding is that the US model is the exception, not the rule, when looking at the rest of the world.
>Breast cancer awareness programs are a good example of early screening programs not being an exercise in mass hysteria.
This example also shows how screening can be recommended against because that's better for public health. The US Preventive Services Task Force (which indirectly decides what Medicaid/Medicare covers) has different recommendations than the American Cancer Society for how often women of certain ages should have mammograms. USPSTF recommends against routine mammograms for women aged 40 to 49 if they don't have other risk factors. ACS recommends annual mammograms starting at age 45.
The reason they differ is because of how they weigh the reduction in deaths against the harm of false positives. Routine mammograms will prevent breast cancer death, no doubt about it. But notice neither recommends routine mammograms for all women below 45, even though they accounted for over 10% of breast cancer diagnoses in 2014-2018[0].
The math is tricky when comparing a risk of death against quality of life and economic costs. Public health is a matter of public policy as much as health.
I had a mystery illness. I spent hundreds of thousands on ER visits, only to find it was a pair of fairly common conditions acting together. Blood tests exist for both.
A deep round of blood testing would have saved everyone a lot of time and money and suffering.
Or I could have waited many years until the organ damage was so extensive that diagnosing was easier. Oh wait, that’s what ended up happening.
> Mass screening of healthy people will result in ...
But you don't know they're healthy. They might be sick but (so far) asymptomatic. That's why you screen, if you have a sufficiently accurate test available and you can make a useful intervention if the test gives a positive result.
If we applied your argument consistently, we would abolish all cancer screening programmes, resulting in many extra deaths because early detection and treatment didn't happen. We'd stop checking up on heart health as people age, resulting in many extra deaths because people continued to live unhealthy lifestyles without realising what it was doing to them. We wouldn't be using rapid testing for COVID-19 to detect and isolate probable asymptomatic carriers who might spread the virus to others who wouldn't be so lucky. The list goes on.
Good screening programmes save lives. It's as simple as that.
If you leave the treatment criteria the same, then yes, you are absolutely right. But you have to adjust those, too.
Just a made-up example: let's say someone has an advanced test for cancer run, and it comes back positive. You know from studies that treating people the same way as before with the advanced test leads to worse outcomes because of unnecessary treatment etc. So what you could do is run the simpler test afterwards. If it comes back negative, you don't treat -> you are in the same situation as before, only you can be more vigilant in future and see if the cancer grows. If the simpler test also comes back positive, you do the treatment.
It's all about getting data (statistical and otherwise) on what the best treatment is, and acting on that.
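As a toy sketch of that two-stage idea (every number below is invented, and it assumes the two tests' errors are independent, which is a big assumption in practice):

```python
# Toy two-stage screening sketch. All numbers are invented for illustration.
# Assumes the two tests fail independently: if both react to the same benign
# anomaly, chaining them buys you nothing.

def posterior(prior, sensitivity, false_positive_rate):
    """P(disease | positive result) via Bayes' rule."""
    p_positive = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_positive

base_rate = 0.005  # invented: 0.5% of the screened group actually has the disease
after_advanced = posterior(base_rate, sensitivity=0.90, false_positive_rate=0.05)
after_simple = posterior(after_advanced, sensitivity=0.80, false_positive_rate=0.10)

print(f"after the advanced test: {after_advanced:.1%}")    # ~8.3%
print(f"after the confirmatory test: {after_simple:.1%}")  # ~42.0%
```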
As it stands, wealthy people can and do find more thorough medical treatment, including more frequent and exhaustive imaging, labs, etc. The conventional wisdom you cite only applies to those who can't seek endless second opinions from specialists. There are many conditions that are treatable in wealthier societies; the poor suffer those same afflictions but go untreated or misdiagnosed.
I wonder if the equivalent of extreme iatrogenics and unnecessary psychological damage and stress also occurs when monitoring servers too much. I have a feeling it does. This is why I typically set higher alert thresholds and provision more disk space and memory than I really need (the equivalent of exercise and a healthy diet?), so that I can stop monitoring my servers religiously.
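If it does, the monitoring-side fix looks like alert hysteresis: page only when a signal stays over a threshold for a while, trading detection latency for fewer false alarms. A minimal Python sketch, with every name and number made up:

```python
# Toy alerting sketch: raise the threshold and require persistence before
# paging, trading detection latency for fewer false alarms.
# All names and numbers are made up for illustration.

def should_page(disk_usage_samples, threshold=0.95, sustained_samples=6):
    """Page only if usage exceeds the threshold for N consecutive samples."""
    streak = 0
    for usage in disk_usage_samples:
        streak = streak + 1 if usage > threshold else 0
        if streak >= sustained_samples:
            return True
    return False

# A brief spike does not page; a sustained climb does.
print(should_page([0.96, 0.97, 0.80, 0.96, 0.97, 0.96]))  # False
print(should_page([0.96] * 6))                            # True
```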
I understand this argument. But if we separate out the mass screening/data-collection approach from its practical constraints (undue stress, overwhelming the medical system), I think we can both at least agree that it is a correct direction to head towards? Or is the status quo already as well optimized as it could be?
This is a paternalistic point of view. People should be entitled to accumulate data about their own body -- any "negative" ramifications of this are personal problems.
I think it has more to do with how the kind of data collected from those mass screenings could be used to build predictive disease models that would put a lot of the guys making >= $300K while doing a very poor job of keeping people healthy out of business.
> Actually you don't want this, and you are describing a nightmare scenario that everyone who studies health policy understands all too well.
Would a very large representative sample of volunteers doing this kind of hitherto unprecedented level of medical monitoring help improve this technology while limiting the unintended consequences? If we could get to the point where the technology was very highly predictive of specific outcomes and even had the ability to test early interventions on those outcomes, that would both allay the psychological cost of a population-wide roll-out and drastically improve outcomes across the board, would it not?
> so by detecting it early you are just reducing the amount of life they have left without worrying about their disease
I wonder if people would rather continue to live carefree, spending most of their waking hours commuting and at work, or if they'd rather learn the truth, face the harsh reality that their time is about up, and adjust their priorities accordingly?
Regularly checking people that are otherwise healthy for cancer will turn up a lot of cancer through false positives or slowly growing ones, will lead to a lot of unnecessary intervention and will in fact lead to a reduction in quality of life and lifespan. This is one of the reasons why exhaustive cancer screening (which was more costly in the past but could have been done) was not promoted: it had nothing to do with ability, but everything to do with outcomes.
Those cancers where checkups are useful we already do regular screenings for.
For aggressive cancers - the ones that are really problematic - you would have to do such a test too frequently to make any real difference, for instance, if you were to test annually you'd be on average 6 months away from your next test, plenty of time for such a cancer to develop and kill you.
So this is not the kind of breakthrough that you may think it is.
I have heard your argument about false positives and unnecessary interventions many times.
It is not a good argument. In fact, it makes no sense. If you get a positive with an uncertainty in its accuracy, at the very least, the test is repeated. But even more, you can use the information from the investigation of the reason for the false positive to improve the tests in the first place. If we weren't humans and the uncertainty of the test is known, then at the very, very least you could throw a die to decide whether or not to discard the test result.
If more information leads to worse decisions, it just means that the noise level introduced by the test is too high. A way to reduce the noise is to amplify the signal, and a way to amplify the signal is to look for more information (other tests, other indirect measurements: i.e., look for B if A was positive, etc.).
>If you get a positive with an uncertainty in its accuracy, at the very least, the test is repeated. But even more, you can use the information from the investigation of the reason for the false positive to improve the tests in the first place.
This assumes that the false positive is caused randomly. That's not the case. False positive tests are usually followed by false positive tests. Then it will take years to find out if it was a false positive or not.
That is very interesting. I assumed that a false positive is generally a testing error (testing with another method or from another company would not lead to the same result). If the false positive is a result of a non-dangerous anomaly of the person being tested, then I see how testing without symptoms can be worse.
Herein lies the real issue. Biology is a very, very messy science. So yes it could just be a testing error. But it might not be. It might be that something in your body behaves in a way that's unexpected. It might be some other non-dangerous anomaly as you cite.
We understand far more than we did, say, 20 years ago. But the problems are non-trivial on a scale most people don't appreciate.
This is a very well known problem for many kinds of screenings, and the solution is not as simple as you claim.
You can't just repeat the test; that is generally not where the problem lies. You'll get a second positive result and still won't know if it's a false positive or not.
And in many cases there aren't other non-invasive tests you can perform. If you can't reliably determine whether some anomaly will cause trouble before removing it, whole-population screening will cause unnecessary operations.
Your belief has no representation in medical science, which is mostly evidence-based. A lot of data has been collected on this, studies (many) have been conducted and the general consensus is that more testing absent symptoms does not lead to improved patient outcomes.
That you want to have some kind of theoretical argument in the face of this evidence might be interesting to you but it isn't to me.
That would be a bit of a paradox in science. I suppose we might take 2 steps forward and 1 step backward in the short term. As medical science advances, hopefully we can address any shortcomings from the additional early knowledge.
I noticed that pancreatic cancer is on the list. This cancer is almost always fatal because we can't detect it early.
This really depends. Information that does not help make a good decision is just noise. It might seem like diagnostic information should always help make a good decision, but that isn’t always the case. If the false positive rate is higher than the base rate, a positive test would be more likely to be wrong than right, even with a very high accuracy.
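A quick count makes the point concrete. All numbers here are invented: screen 100,000 people with a 0.2% base rate, 99% sensitivity and a 0.5% false positive rate:

```python
# Counting version of the base-rate point. All numbers invented.
population = 100_000
sick = int(population * 0.002)          # 200 people actually have the disease
healthy = population - sick             # 99,800 do not

true_positives = int(sick * 0.99)       # 198 of the sick are correctly flagged
false_positives = int(healthy * 0.005)  # 499 healthy people are flagged anyway

ppv = true_positives / (true_positives + false_positives)
print(f"positives: {true_positives} true vs {false_positives} false")
print(f"P(disease | positive) = {ppv:.1%}")  # ~28%: most positives are wrong
```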
People are idiots that don't understand statistics. Telling them they tested positive is all they hear, and it leads to them making subpar decisions that often involve invasive surgery. Yes, it's paternalistic, but the fact is most people aren't informed enough to make medical decisions for themselves.
No, that's not the point. On an individual basis, knowing for sure that a person who has symptoms has cancer, and especially what kind of cancer, is a positive. On a population scale, knowing that people without symptoms may have cancer, with a 0.5% false positive rate and an 18-50% true positive rate, is quite possibly a negative.
Less information can be better. Or more accurately, conditional probabilities can be better than unconditional probabilities.
For example, a large number of unnecessary antibiotic treatments, which fuel resistance, are triggered by doing diagnostic testing on patients with no symptoms.
"Diagnostic stewardship" is a concept that exists for a reason.
You’re making an unwarranted assumption: that the choice is between early detection with current responses to detected cancers and no early detection at all. Optimizing the response to an early detection will give a result no worse than either of those, since the choices of what to do include doing whatever doctors did in the studies that have poor results as well as doing nothing at all.
No, that is not the assumption. The assumption is that the only way in which such a test will be useful is by applying it absent symptoms to the population en masse, aka screening.
And that - no matter how good the test, and no matter how early - leads to a decrease in positive patient outcomes.
This is established medical science, and it pains me to have to continue to point out the same thing over and over again, but since I started with this response I feel obliged to continue to do so.
The outcome of a cancer treatment is not pre-determined, there are a lot of individual factors at play here that will have a huge effect on the outcome, possibly much larger than the effect of that particular cancer itself.
So en masse screening leading to an increased number of treatments of pre-symptomatic cancers with those current responses is not a choice; we know that this will lead to a worse outcome across a population.
Early detection does not add anything to that. If you could pick out those individuals for which early detection would make a difference then that would be a game changer, and here the ball is currently in the genetics court.
The other part where major change can be made is by finding ways to treat cancers in a way that is non-invasive and does not put the patient further at risk (so no surgery, chemotherapy or radiation therapy).
> And that - no matter how good the test, and no matter how early - leads to a decrease in positive patient outcomes.
You keep writing comments in absolute terms and talking about evidence, but how do you reconcile your position with the results of successful screening programmes like cervical smear testing? Detecting and treating high risk HPV before it causes changes that can turn into cervical cancer has dramatically reduced the harm caused by cervical cancer itself at a population level. Routine screening of this type isn't normally recommended for young women, but it becomes increasingly effective with age and screening programmes operate accordingly.
Yes, but that's one specific cancer. There are a few others for which this is the case and absent genetic data a few that are borderline cases (notably: breast cancer, where the presence or absence of a mutation is a very relevant bit of data).
So you do agree that evidence-based screening programmes can be effective then? In that case, I'm sorry but I don't quite understand the point you're trying to make here.
Yes, they can, but not in a blanket fashion where a test with a relatively high false positive rate and a relatively low sensitivity (between 18 and 50% for this particular test) is released without patient outcomes as the main driver of whether or not to apply the test absent further symptoms.
Whether this test is one that can serve in that particular role is definitely not something that has already been determined and those advocating for such are ignoring a mountain of established science and are simply jumping the gun.
Policy is set by the overall effect of application, which can be quite different from the effect of applying that same tool in an individual setting. This whole thread started with
"This is one of those medical revolutions that I am waiting dearly for.
Facilities that are not hospitals(to avoid the risk of occupying medical devices that sick people need) built to _regulary_ check up otherwise healthy people for preventive care."
And that is not something you do without taking into account the downsides. Whether this is revolutionary or not remains to be seen, it definitely is a useful test based on what I've read about it so far.
Well. With a bad test, what is necessary is to improve the test. The argument "outcomes are worse with more absent-symptoms testing" is really saying that symptoms are more predictive than the test itself: the test counts as positive if [symptoms & positive result]; as positive or neutral if [no symptoms & should have been screened & positive result]; and as negative if [no symptoms & should not have been screened & positive result]. (Notice that the last case does not change the outcome relative to not testing people without symptoms and without another reason to do so.) Until the tests and decision protocols are improved to avoid false positives, this third case would have to be communicated to the person as a negative result (if we are convinced that the overall outcome is better when the test should not have been administered at all). But I would find that unethical and dehumanizing, and I understand it touches on lots of ethical issues: the stress of waiting for a result, the liability if it turns out not to be a false positive, etc. So I understand that, given the current state of the testing, the decision is to restrict when to do it.
You are visibly making progress in your understanding of the problem, for which you are to be commended, especially if you are a complete layperson in this field. Thank you.
This test isn't a 'bad test', though applying it in the wrong way can lead to bad outcomes. This is why tests used absent symptoms have to have a false positive rate that is much lower than the base rate at which the disease occurs. These tests - unlike software tests - should not be thought about in absolute terms but in terms of probability, so a positive test indicates a probability that you have a specific disease, but it is very well possible that you do not have it, and a negative test indicates a probability that you do not have it - but it is very well possible that you in fact do have the disease. And the reasons for a false positive or false negative may have nothing to do with the test itself, but could easily be an environmental factor or some benign aspect of the test subject that was not accounted for when designing the test.
Whether or not a test is suitable for mass screening hinges on the factors above, the base rate for the disease, the age of the group being tested, in some cases gender, and so on. In order to have a positive outcome across the population all these factors have to be taken into account, and by the time that you have done so there are - unfortunately - at the moment no miracles to be had. But combinations of knowledge, for instance a genetic predisposition to a certain disease plus a positive test, can have much higher signal-to-noise ratios than either by themselves.
But make no mistake: this is an important development in medical diagnostics, and it may very well be that once more evidence has been collected and some of the kinks have been worked out, this particular test or an improved version of it can be applied in a screening setting for one or more of the cancers that we currently have no reliable detection method for, and that could be cured if detected early enough, given a high enough base rate and a low enough false positive rate.
Note that the scientists behind the project are very careful with their statements and that the reporting on this was actually quite neutral and trends toward cautious optimism, which I think is warranted, but until the results of the new studies are known it is way too early to shout 'revolution'.
Well, yes, it's only worth running a screening programme if you have a usefully accurate test and if you can then make some useful intervention after a positive result. Is anyone here disagreeing with that, though?
Maybe we interpreted OP's comment that you quoted differently? I read it as being in favour of preventative medicine as a whole, not necessarily endorsing this specific test at this specific point in time.
Maybe I'm also interpreting some of the other comments in this discussion, including yours, differently to how their authors intended them. My concern is that as written they appear to be criticising all use of screening, regardless of its efficacy, which is extremely dangerous.
That certainly isn't the goal, it is strictly meant within the context as established by the root comment. To present this at the present time as a revolutionary breakthrough and to suggest using this test in particular for mass screening is not a path that will lead to a good outcome unless a lot more data is gathered to support that position.
The people that have built this are at the forefront of this field; I've been following them for quite a while, since the announcement in March last year, as it has direct bearing on some other things that I'm involved with. I'm hopeful that it will at first be a useful diagnostic tool and that in a later stage - after the kinks have been worked out and there is sufficient data - it might help with more than that.
Preventative medicine obviously has its place, and for selected cancers we are now in a phase where early detection leads to improved outcomes. But we should continue to be wary of overselling this - the same has already happened with other cancer tests.
Absent symptoms mass testing has serious risks and these will obviously be taken into account when setting policy, the article is actually reasonably neutral in this respect so I wonder why it leads to an immediate response that is equating this with a medical revolution. It may well be, but there is no evidence right now that this is the case.
You're correct in that this is a prevailing view in epidemiological/public health circles. But medical science is not only the macro but also the micro level perspective. Individual practitioners of medicine might well appreciate more and earlier data. A first-principles-based argument is a complementary approach that might uncover things that an empirical view might hide. A good example of this is the Australian Nobel Laureate Barry Marshall (https://www.discovermagazine.com/health/the-doctor-who-drank...). As long as the hypothesis can consequently be validated in controlled clinical studies, a theoretical argument even without existing foundation in empirical literature can still make for good science.
Absolutely, on an individual basis is where the difference can be made and this is exactly why it is important if you are tested positive for some cancer to work together with your oncologist to ensure the best possible outcome for you. The interesting thing here is that laypeople tend to be in favor of massive testing and almost always want to be operated on/have chemotherapy/have radiation therapy even if that is not necessarily the best path for them.
> A lot of data has been collected on this, studies (many) have been conducted and the general consensus is that more testing absent symptoms does not lead to improved patient outcomes.
Those are popular and accessible; the actual studies you can find through Google Scholar, SciHub or various medical publications.
This is not something where the general public - or software developers, who seem to treat cancer as a bug that needs to be fixed - are going to be very helpful, I am more than happy to trust the medical establishment with this.
What would be a game changer would be, rather than improved testing, something that would destroy tumors in situ in a non-invasive manner that does not involve radiation or attempts to poison the body just this side of death.
Not just that, but neither GP nor anyone else in this thread seems to be mentioning the mental health implications of constantly screening for life threatening diseases. For me, and many others I am sure, this would be bad. My quality of life would suffer tremendously, to the point of likely substantial loss of mental function. And I am definitely not one of these “we all die someday” types. Far from it. It’s just that if you’re prone to anxiety, especially the hypochondriasis variety, the picture is much more complex than thinking of yourself as a server hooked up to monitoring.
> mental health implications of constantly screening for life threatening diseases
...to be contrasted with the mental health implication of living with the knowledge that you are *not* being tested and that cancer can grow undetected for years.
That, if anything, is a very good reason for anxiety.
Yes, that's a very important point. Even regular checks for cervical cancer or breast cancer can be a huge stress factor, especially in the period just prior to the test and in the waiting period until the results are in, and even more so if the test yields a false positive.
Is the problem not so much the testing, but that most current treatments for cancers are so crude? If you have a relatively asymptomatic cancer, the treatments (chemo, surgery, proton beams) and psychological stress of discovery could be worse (statistically) than letting the body naturally take care of it. This probably depends very much on the cancer (e.g., some thyroid cancers might be better left alone).
If our best treatment for fixing a hard drive was to hit it with a hammer, then maybe we'd conclude that monitoring for minor bit errors in data centers is unwise too.
Fair point: if we had a completely non-invasive cancer treatment then that would be a game changer (I made that same point in an earlier comment far down in the thread). And in that case it would lead to improved patient outcomes. But the current regime of tests plus treatment options does not in the general case - and there are quite a few exceptions that mostly have to do with genetics - lead to improved outcomes.
Look up the term "iatrogenics" or check out oncologist Vinay Prasad's podcast called Plenary Session where he often talks about the dangers of over screening.
BTW... years ago I was rather supportive (albeit not having voiced it) of the argument that false positives lead to unnecessary interventions and thus screening should itself be gated.
Your understanding as evinced by the comment above leaves me to wonder if you actually get that this is not about individual outcomes but about the overall statistics. Even if more accurate tests turned up more true positives, that still would not necessarily result in improved patient outcomes. That is why absent-symptoms testing makes sense for only a very low number of cancers where early detection does improve patient outcomes significantly; typically this is vastly improved if there is knowledge about the genetic make-up of the individuals.
A positive blood test is a symptom. One which you'll miss if you don't do the blood test regularly.
And looking for symptoms (other than a blood test) is a form of testing... one with much worse accuracy (both positive and negative) than the blood test, especially in the early stages.
"A physical or mental problem that a person experiences that may indicate a disease or condition. Symptoms cannot be seen and do not show up on medical tests. Some examples of symptoms are headache, fatigue, nausea, and pain."
That still sounds like a subset of medical testing to me, where you're using the patient as your test equipment and ignoring anything the patient cannot sense directly (or fails to report). Perhaps you don't call the results of medical tests "symptoms", but to me this is an insignificant distinction. Either way we're talking about an observable result of some underlying condition; only the means of observation varies. Jargon aside, this approach still means waiting until after the condition has already affected the patient's quality of life, and perhaps beyond the point where it can be treated effectively, when it could have been treated and perhaps even cured before any symptoms manifested.
It seems like you and GP are agreeing on the fact that right now we don’t do too many preventive checks because the cost of testing is higher than the benefits (mainly because false positives could lead to risky and invasive follow-up testing to confirm that it’s indeed a /false/ positive; and knowing that you have something sooner rather than later may not meaningfully affect the outcome of the disease; and costs of tests and trained personnel are huge).
But GP seems to be saying that they hope to see better tests in the future, that are not risky or invasive, don't have as many false positives, and are less costly to do, so that the equation would change and we could actually meaningfully improve outcomes by doing large-scale preventive testing.
You would still likely have cases where you cannot /improve/ the outcome by knowing sooner that you have a disease, but as long as you are not making matters worse and are improving chances for a significant subset of people, all while keeping costs the same or even decreasing them, this seems like a great evolution.
Even a 100% true positive rate would not guarantee improved patient outcomes with massive testing. This is grounded in a poor understanding of what improved patient outcomes are all about, which is fine with me, but to see this so misunderstood is a bit disappointing.
Better tests do not automatically lead to better outcomes. They will lead to many more cancers detected, and they will lead to more interventions.
Just one example (there are many more): for many tumors the risk of the operation to excise it already outweighs the risk of the tumor itself leading to damage to the body.
The factors that govern whether an intervention is necessary are determined by the rate of growth, the risk of metastasis, the organ(s) affected, the stage the cancer is currently in (and here early detection would at least help to get a grip on that) and so on.
But once detected treatment is going to be the norm, and that's where the problem lies: treatments are not necessarily an improvement over having a mostly dormant cancer.
If you were to autopsy all of the cadavers from any given country for a period of time you would find a correlation between age and the presence of one or more tumors, even if the person never had symptoms and died of a completely unrelated cause. Treating all of these would have resulted in some of those people ending up in the morgue a lot earlier and having a reduced quality of life, both from a medical and a mental health perspective.
Deciding to treat - or not - is not a simple matter.
Your point across this thread is weird because it's not the testing itself which is an issue (if we assume the testing has a negligible cost economically and is comfortable/easy for the subject).
We just need to improve the decision making after getting test results (one of these decisions is to decide to not do anything), and more data make improving it easier.
But tests do not have a negligible cost, have other costs besides the pure monetary value (such as occupying valuable lab time that could be spent on symptomatic patients instead), are typically not at all comfortable and easy because you'll be looking at a biopsy at a minimum (which again takes away valuable resources from symptomatic patients) and so on.
My argument is not about particular individuals, but about populations as a whole and wholesale screening of those populations. The consensus is that this does not lead to improved patient outcomes across that population, though in individual cases it may very well be the result.
My lay understanding of the current standard of care is very roughly speaking something like:
Patient exhibits symptom => perform a not-especially-invasive test
Positive test result => invasive test like a biopsy
Positive biopsy result => heavy-duty intervention (although I'm not focusing on this part of the chain in what follows)
Both testing and (certain) symptoms have predictive value, and don't completely overlap. So there's something like this going on:
P(actual problem | no additional information) = really really low, which is why they don't scoop out chunks of every organ to test "just in case" every time you go to the doctor
P(actual problem | [symptom AND positive test result]) = generally high enough in at least some cases to justify the risk of the biopsy, which is why it's the standard of care
P(actual problem | just symptom) = probably not super high, which is why the tests are developed
P(actual problem | just a positive test result) = substantially lower than P(actual problem | [symptom AND positive test result]), so in the general case the balance of risk no longer favors the biopsy
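A toy numerical version of that chain (every number invented), just to show how conditioning on the symptom raises the pre-test probability before the biopsy decision:

```python
# Toy numbers, all invented: the symptom acts as a first filter that raises
# the base rate in the tested population, which is what makes the downstream
# biopsy risk worthwhile.

def p_problem_given_positive(base_rate, sensitivity=0.9, false_positive_rate=0.05):
    """P(actual problem | positive test) for a given pre-test probability."""
    tp = sensitivity * base_rate
    fp = false_positive_rate * (1 - base_rate)
    return tp / (tp + fp)

p_no_info = 0.001  # invented: P(actual problem | no additional information)
p_symptom = 0.05   # invented: P(actual problem | just symptom)

print(f"just a positive test:      {p_problem_given_positive(p_no_info):.1%}")  # ~1.8%
print(f"symptom AND positive test: {p_problem_given_positive(p_symptom):.1%}")  # ~48.6%
```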
In the broadest of strokes, is there anything I've just said that you substantially disagree with?
No, there isn't, though it is probably important to point out that age, genetic disposition and gender are big factors in selecting what kind of test and, if positive, what kind of treatment - if any - will be administered. And this applies, as you correctly identify, to symptomatic patients only, which raises the base rate in that population (the population of symptomatic patients) tremendously.
And that's exactly where the issue with indiscriminate asymptomatic testing lies: it requires much higher quality tests than the ones that can be used in a diagnostic setting once a patient is symptomatic.
To add one more unpopular bit of data to all this: there is some evidence that the indiscriminate testing for certain cancers has gone too far and that it no longer is a net positive. But in the presence of certain mutations those tests are extremely valuable.
Biology is messy, and it is quite hard to state up front whether or not a test or a treatment - even if in an experimental setting it is working - will still be a gain if rolled out in a different setting or application. Hence all the trials and studies, that's the only way to really get a grip on this.
I'm quite curious what the outcome of the large scale test the article refers to will be.
>> treatments are not necessarily an improvement over having a mostly dormant cancer.
This is true today.
But if we could detect cancer at a really early stage, relatively reliably, maybe this means we could develop new and effective treatments that are low risk and low on side effects?
And if we had that, early cancer detection will also have a totally different meaning, so that could help with the mental health aspect too.
We could develop those treatments irrespective of early detection, there are plenty of examples of early detection of cancers today to make that feasible, this does not depend on a new test regime.
You may know more than me about medicine, but when it comes to allocating resources, I know just as much if not more than you.
If we start to test more, and understand the magnitude of the problem better (despite false positives/negatives) we can better allocate capital to solving this problem.
Sure, "cancer is horrible, we should already allocate as much capital as possible" but this just isn't reality. As soon as the addressable market for early-detected cancer treatment goes from X per year to 100X per year (and 1,000X or 10,000X is "even better"), big pharma has more motivation to actually R&D safe treatments for early-detected cancers.
Not testing more to detect cancer early is silly, if only from the perspective of capital allocation.
I'd need to know more about the specific types of cancer this screening covers to say anything for sure. However, the errors that cause cancerous growth are more common than most think and many are not life-threatening. I think that's what the other commenter is talking about. The intervention for these types of cancer may be more damaging than the cancer itself. If these are detected by this test, the patient may not understand that intervention is not necessarily in their best interest and may have increased anxiety or demand treatment when it is not needed.
> Regularly checking people that are otherwise healthy for cancer will turn up a lot of cancer through false positives or slowly growing ones, will lead to a lot of unnecessary intervention and will in fact lead to a reduction in quality of life and lifespan
This is a completely irrational argument. Catching cancers early is crucial.
Furthermore, if better information leads to unnecessary intervention the blame lands squarely on the hospitals being overzealous and greedy.
> For aggressive cancers - the ones that are really problematic - you would have to do such a test too frequently to make any real difference
Because aggressive cancers go from 0 to dead in a week? Please.
I detest this mindset. It is incredibly counter productive.
"Our tests are bad, so lets not test" is not a thought worthy of respect. This states that you know about a problem but want to continue to ignore that problem. Reprehensible.
No, we should take the exact opposite approach. Test everyone constantly until such methods become both cheap and powerful. So yeah we all have some cancer load but if the diagnostic or treatment systems can't deal with that reality then those systems need to change.
The problem is not my mindset, but your understanding of the subject material. Patient outcomes have very little to do with the quality of the tests, even a 100% accurate test would not necessarily lead to improved patient outcomes because these are defined independent of the tests.
You are treating this like a software problem, but it isn't, it's a medical problem, and medical problems tend to be complex because they have a ton of confounding variables that make it hard to have a one-size-fits-all method for dealing with medical issues.
What needs to change is that people need to realize that they have a field of expertise, and that the medical domain has its own experts who typically dedicate a lifetime to their profession. Their general consensus is that improved tests are welcome but in and of themselves are not enough to guarantee improved patient outcomes. Yes, this is unfortunate, but it is also a simple reality; you can either accept that or not. That's up to you, but if you want to make a change there then you probably should join the medical profession. Most likely by the time that you have completed the requirements you will have shifted your viewpoint away from the software domain's mindset that all bugs can be found and squashed by the next sprint. Which, by the way, judging by the general quality of software out there, is also something that doesn't work out in practice as we believe it should in theory.
Before my software career I spent a few years working in a diagnostic medical field, specifically osteoporosis testing. I worked both in a research capacity at Stanford looking at osteoporosis in older men (not pretending to be the PI here) and in a day-to-day testing clinic. So I've seen exactly what happens when you test a cross-sectional asymptomatic sample of the populace and what happens during the normal course of referred testing.
Low bone mass at the spine, hip, heel, and forearm as measured by DXA are correlated with increased risk of fracture, but it's only a correlation. Some people have resilient architecture which looks porous on an x-ray but only leads to serious fracture much later in life.
Because the current diagnostic tests are set up with levels like 'osteopenia' and 'osteoporosis' the reaction to clinical referrals was most often treatment. Some of those treatments have serious side effects like necrotizing impacts on the jaw. However the reaction to testing in a large asymptomatic population was much more likely to be an increase in preventative behavior (exercise) or no treatment except in extreme cases. While our study was exploratory and didn't have a treatment cohort (we cared about the impact of sleep quality on bone mass) we got to see a lot of older men who if they were referred to a clinic might have received treatment because that's what clinics do. Instead we had sufficient data to discuss what's normal and what isn't. For a time we were the leading experts in the world on what "normal" meant.
Because I've conducted these tests myself and seen data from hundreds of experimental and clinical patients I feel comfortable contrasting the two. The problem is the clinical medical field reacting to a lack of data with over-prescription of treatments.
Sure, but that's a completely different setting than the one the OP described: mass testing for presence of cancers by non-specialist labs. And that's the thing that triggered my response. The outcome of such an approach could easily be a negative one.
FWIW I too have some experience with medical diagnostic systems (specifically: cancer testing), and one of the main reasons why I'm still skeptical about this test is that for many cancer types tested for the base rate is much lower than the false positive rate.
“We’ve never had blood tests that detect pancreatic cancer early, therefore we should never use them because when we used other means to try to detect early, it didn’t lead to better outcomes.”
If you can detect colon, kidney, pancreatic or liver cancer early, you might be able to do surgery or develop treatments at stage 1. Right now we don't have anything except MRIs and CT scans, which are too hard and expensive to do frequently. And if you are diagnosed with pancreatic cancer it's usually so late that you will basically die in weeks.
You’re basically saying “give up. Even if we detect early you all die anyway” which is frankly stupid. You’re discounting the possibility that early detection by means of a blood test adds whole new layers of possibilities to fight those particularly dangerous cancers.
But: that surgery is not without risk, and if there are no symptoms there is a fair chance that there never will be symptoms. Depending on your age and your genetic make-up you might be more or less at risk. There is such a thing as spontaneous remission and so on.
So no, I'm not 'basically giving up', and no you won't die anyway (well, unless you take that in the most abstract way), in particular likely not from cancer.
Even for those cancers where we do screen (such as for instance breast cancer) it is not a given that the increased frequency of detection has led to better patient outcomes.
But once you know someone's genetic disposition increased frequency of testing might be advantageous.
Again, absolutely absurd. Many cancers have no symptoms until it’s too late. Do you even understand this? At that point the choices for treatment are extremely limited.
Even breast cancer needs x-rays to detect lumps, not to detect cancer. You need a subsequent biopsy to differentiate. That's our technology right now. If we had a blood test to detect cancer, not lumps, it's game changing.
Maybe after a generation of early detection, new outcomes will emerge if we can detect the most deadly cancers early. You’re taking old studies and applying them to new technologies and saying “it won’t work.” It’s ridiculous that you are doing this.
Even if you detect cancer you will still need a biopsy to figure out which bits are the cancer and which bits are just 'lumps'.
Yes, it is a game changer, but it is not the kind of game changer that this is being made out to be here and mass screening using this method is not going to lead to improved outcomes.
As an extra tool in the toolbox of the diagnostician it is very useful.
It’s pretty easy. If blood test says “liver cancer positive. Colon cancer negative.” you check the liver for cancer. No need to check the colon for cancer. No need to scan the entire body for lumps.
Not that easy. “Check the liver for cancer” has lots of risks: surgery and the resulting tissue injury.
I’m generally inclined towards the position of more testing....but there are very real tradeoffs to any of the interventions that a blood test could prompt.
What happens when it turns out that most of the people having surgery don't have cancer? Because that's the reality of tests with a 0.5% false positive rate if less than 1% of the population is positive.
I don't see why early detection means early treatment. What if, after detection, the illness is monitored instead? If it then gets to a point where the treatment is no longer considered risky for the stage of the illness, the treatment can proceed.
It would also give the person the opportunity to change their lifestyle to perhaps prolong the time before the illness becomes a problem, or perhaps halt its progress altogether.
Additionally, a person who is aware of such an illness can keep an eye out for related symptoms that might otherwise be written off as something else. At which point normal cancer treatment can proceed.
This is a tricky bit. There is a lot of interplay here between medical professionals and the general public, and not all of that is either rational or ethical. But absent symptoms, non-treatment is better than treatment.
Data about unnecessary procedures is relatively plentiful, which is of course sad, whether that's driven by commercial incentives or the need for 'something to be done' is not something that I have any grip on but it certainly is problematic.
I think you should stop trying to berate people who you think know less than you. It's very unattractive. Be aware there are folks with extensive public health experience (that would be me) watching you insist you're right. While I totally appreciate what you're trying to do (help software engineers understand why medicine is complex), please do understand that GRAIL is tightly integrated with the PH community, did their work from a good-faith perspective, and came up with a product that does "do" something. Now it's up to the medical community to evaluate whether it truly provides something that doesn't just end up costing us more money and not helping people.
That's perfectly accurate: the problem is that people tend to run with things like this as though they are the miracle that everybody has been waiting for, and that is not the case. It is a very important development in the right hands, not a tool that will lead to - yet another - round of disappointment in the 'war on cancer'. The whole thread has devolved into people arguing from hope rather than from facts. This set of tests is a very useful thing to have. But the 50% true positive rate, 18% true positive rate for stage I cancers and the false positive rate combined make it at present - as far as I understand this - a tool that could, when used as a mass screening tool, easily do more harm than good.
I'm more than willing to be convinced otherwise but with relevant data. FWIW I've been following this particular development closely because it has direct implications for a start-up that I have had a lot of contact with that is also in the early detection space and they were adamant that the combination of factors is such - and so complex - that test accuracy is trumped significantly by absence of symptoms in otherwise healthy patients.
My input is that I've been involved with GRAIL since the beginning and I think they've actually done something. We don't have the data to say one way or the other.
I am not arguing with most of your points; it's just that you're not being epistemically humble. I find it's far better to just act humble even when you are schooling people who persist in believing in magic.
I sincerely hope that this test will drive down the cost of testing and that they will get the kinks ironed out, I think it is a major development and I'm actually afraid that those that oversell it will cause it to end up being tainted, as has happened with many other cancer related diagnostics and potential treatments. Cancer seems to bring out an emotional response rather than a rational one, typically because almost all of us have direct knowledge of one or more people in our environment that succumbed to it.
A lot of work has been done since this development was first announced and there are still quite a few kinks that will need to be ironed out. The whole idea - such as promoted in this thread by some - that this makes cancer a matter of 'simply looking at the right organ' is such a laughable oversimplification of reality that some balance is required, and I think at some point my irritation got the better of me.
So thank you for pointing that out.
And to add to that: It definitely is not my intention to suggest that this test is without value.
The fact that you link to that and apparently do not understand it combined with a personal attack is enough to disregard your comment for me.
But let me stress this in case it wasn't clear to you: screening and early diagnosis are not the same thing.
If you screen a large population for cancer you will turn up a lot of cancers that may never become a problem, or that may even end up being re-absorbed without the need for intervention, as well as a large number of false positives.
Early diagnosis means that there are already symptoms.
Disagreement is not "flaming." Jacquesm has been generally respectful and factual. The closest they have come to rudeness is when people refer to "facts" not in evidence or link to things that they haven't read or understood well, and even that impatience was explanatory.
I am sorry if you cannot understand what I am talking about, that you are clearly misunderstanding what I am saying.
I will write in this deeply arrogant fashion and refuse to link to anything I'm talking about, and when questioned I will imply that the people I'm replying to are utterly moronic (without saying quite such, I will instead say things like 'I'm sorry you don't understand / let me make this clear / you shouldn't be posting things you don't understand').
I will continue to write as if my stance is absolutely correct even though I am not an expert in this field, I will write as if I am.
And then when people call me out for my arrogance I will insist I've done nothing wrong and will be confused.
> The problem is not my mindset, but your understanding of the subject material.
IMO, that is inflammatory and arrogant.
More broadly speaking, many of the arguments in this post seem almost political. They are dressed up in fancy language, but essentially boil down to "We can't give these peons information that they aren't smart enough to deal with"
That is an extremely arrogant position to take, even though I do believe there is some truth that knowledge can be counter-productive.
I think I've spent more than enough time in this thread going out of my way to explain things in as clear and simple a manner as I know how, and if I have offended anybody then I apologize for that; it certainly wasn't my intention.
this just reads to me as "our tests don't test for the right things, since when we act on those tests (even if they were 100% accurate), it doesn't lead to better patient outcomes".
I feel like a better conclusion is that we need better tests, that detect things that when acted upon improve patient outcome. Of course, we're nowhere near that yet, but do you really think in 1000 years we will still wait for patients to be responsible for correctly noticing symptoms and going to the doctor? Of course not.
It's good that we have studies that show we should move with caution in this territory, but completely ignoring it forever seems absurd.
> "Our tests are bad, so lets[sic] not test" is not a thought worthy of respect.
Correct. But that's not the issue here.
Current tests can detect benign tumors, and do. People go in for their regular tests, hear "cancer" and of course want to get treatment. But that treatment itself is not free of side effects. You want to accept the negative side effects if the alternative is an aggressive cancer that will end your life soon. But you don't want that if there is no cancer.
As in many fields, it's a matter of trade-offs and risk assessment.
Here's one article detailing some of the issues surrounding overdiagnosis of colorectal cancer:
But the tests are not binary sick-or-healthy. If your test gets more sensitive, then you have to shift the, say, PSA value that you use for a diagnosis.
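A small sketch of that threshold shift on a continuous marker (all values invented; real cutoffs such as PSA's are set from clinical data). Lowering the cutoff buys sensitivity at the cost of false positives, so a more sensitive assay needs a recalibrated threshold:

```python
# Invented illustration of shifting a diagnostic cutoff on a continuous
# marker: each cutoff trades sensitivity against the false positive rate.

healthy_values = [1.2, 2.0, 2.8, 3.5, 4.2, 5.0]  # marker levels, no disease
disease_values = [3.0, 4.5, 5.5, 6.8, 8.0, 9.5]  # marker levels, disease

def rates(cutoff):
    sens = sum(v > cutoff for v in disease_values) / len(disease_values)
    fpr = sum(v > cutoff for v in healthy_values) / len(healthy_values)
    return sens, fpr

for cutoff in (2.5, 4.0, 6.0):
    sens, fpr = rates(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.0%}, false positive rate {fpr:.0%}")
# cutoff 2.5: sensitivity 100%, false positive rate 67%
# cutoff 6.0: sensitivity 50%, false positive rate 0%
```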
I really hope that false positives become a regular thing. For one, it shows that the tests are behaving as the statistics predict.
Second, getting comfortable with false positives means that you can more easily hold off treatment if, in your specific scenario, the treatment is not beneficial - think an old test that doesn't find the cancer plus a new test that does, where the new test is shown to lead to overtreatment. We need to learn when to hold off and not treat.
And third, if testing for 10 things will find something that you don't want to know and harm your psyche, then when you test for 100 things you will be almost certain that there is a false positive, and this can give you back some of the "blissful ignorance", I believe (while the doctor can still give you the statistically best treatment).
The only problematic mindset is yours, and you clearly have not engaged with the subject of iatrogenics in any meaningful way. The policy you advocate for would cause real harm to real people, likely to no benefit. Please understand how wrong you are.
The public health literature has been over this again and again. Overzealous non-symptomatic testing consistently leads to worse outcomes. You can get as philosophical as you want; the fact remains that it lowers total utility.
Current screening methods tend to be costly and/or uncomfortable for the person being screened. A simple blood test could lead to higher uptake and more regular screening.
That's not an appropriate analogy for more reasons than I care to relate here. Suffice it to say that if you believe the check engine light is the equivalent of a positive cancer test, you probably should stay out of medicine ;)
Why are you responding to random commenters on HN as if they are the ones getting ready to go out and start writing the official policy of a country?
Discussion is good. It's ok for people to be wrong and disagree. I'm not even saying they are wrong, but if they were, do you really need to go around telling everyone that some opinion that crossed their mind over a cup of morning coffee would be a disaster if implemented as national policy?
> Those cancers where checkups are useful we already do regular screenings for.
More like “are <practical given available tests> and useful”.
Regarding your last paragraph: cancer screening seems very rare except for one or two types (at least where I live), so statistically, yes, an annual test will miss the worst case you presented, but for the vast majority it will be a huge improvement.
For those cancers where it makes sense, yes. But typically screening is voluntary (as it should be), and there are for some cancers strong genetic indicators that a person is susceptible to a particular kind of cancer, which immediately changes the equation for that particular person tremendously in favor of screening.
Slowly growing cancer still sounds bad enough to get some of that "unnecessary" intervention. I'm no doctor, but my understanding is that the aggressive cancers typically get discovered due to a symptom of sorts and thereabout is your 6 month countdown. Again, from a layman's perspective and understanding, those cancers need to start somewhere. What's the harm in catching them as early as possible? If a blood test leads to a scan and the scan turns up negative, what's the issue?
Depends on your age, genetic disposition and many other factors besides. Once you have symptoms your oncologist would be the only person qualified to determine what for you in particular is the best course forward.
In some cases that 6 months might be very generous, in other cases you are better off to do nothing (especially if you are advanced in age and the cancer is growing slowly). It all depends.
The utter confidence with which you answer all these comments while being completely wrong is sickening. It really shows a lack of vision; you wield your confidence like an expert, but you're not one, and you're only displaying Dunning-Kruger to a T.
Early detection these days consists of things like CT scans which can’t tell the difference between cancerous tumors and benign tumors. I have a friend with a mass near her liver but they didn’t detect it until quite large. She asked what she should do and their answer was “well if it was liver cancer you would be dead by now, so it must be benign.”
This is the state of detection that you think is such a utopia, that we shouldn’t bother trying to improve, because you are so confident with your answers but you literally have no idea how wrong you are.
Having an accurate blood test that can differentiate a cancerous tumor from a benign one is groundbreaking. Early detection of cancers like pancreatic or liver cancer, which is virtually impossible today until it's too late, would be groundbreaking. It could lead to new treatments that work when the cancer is small vs when it's too big to operate.
You're taking studies done using very obtuse, inaccurate and costly detection like MRIs and CT scans and conflating them with new technologies. It's backwards, old thinking, trying to pooh-pooh new ideas and technologies because of poor understanding on your part. It has no place here among people with vision and hope for the future.
The test in the article actually has very few false positives. And who knows, maybe even those false positives were just extremely early cases of cancer.
Very few false positives: a 0.5% false positive rate times 1,000,000 people screened translates into 5,000 false positives, which in absolute terms is not a small number. Those will take away resources from symptomatic patients.
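Putting those false positives next to the expected true positives, using the sensitivity figures quoted earlier in this thread (18% for stage I, 50% overall) and a base rate assumed purely for illustration:

```python
# Back-of-envelope using figures quoted in this thread (0.5% false positive
# rate, 18-50% sensitivity). The 0.3% base rate is an assumption made purely
# for illustration.

screened = 1_000_000
base_rate = 0.003
false_positive_rate = 0.005

sick = screened * base_rate                                 # 3,000 have cancer
false_positives = (screened - sick) * false_positive_rate  # ~4,985 flagged healthy

for sensitivity in (0.18, 0.50):
    true_positives = sick * sensitivity
    print(f"sensitivity {sensitivity:.0%}: "
          f"{true_positives:,.0f} true vs {false_positives:,.0f} false positives")
# Even at 50% sensitivity, false positives outnumber true positives.
```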
I see it more in the vein of vaccinations. A preventive care activity, not as a reaction to a symptom. Speaking only for myself, I am regularly screened, asymptomatically, for skin, prostate and colon cancers. That screening is a non-event for me because I grew up knowing that we will get screened on a regular basis for these cancers.
With additional asymptomatic testing, there will be false positives. Re-test using a specific test and if still positive, a biopsy. How frequently do false positives lead to unnecessary procedures with the current asymptomatic screening?
What would the rate of unnecessary intervention be, vs the rate of lives saved through early detection?
With the present invasive methods of excising cancer the answer is that on an individual basis you might be better off with early detection but on a statistical basis across a larger population you'll be worse off. The important variables are patient age, how aggressive the cancer is (growth rate), what the risks of that particular type of cancer metastasizing are, what your genetic disposition is etc.
Again, personally, I lost both parents to cancer. One thyroid (very slow progression), one pancreatic (very fast). In both cases, I believe that pre-symptomatic detection would have had a high probability of eradicating their cancers. That experience skews my view, and if I get the opportunity, I'll have these tests done 3x annually.
But I understand your point. How about making it elective? That way, people who prefer to not endure the risk of false positives or the anxiety of awaiting the test results can opt out.
I'm really sorry for your loss, but please try to keep a healthy balance between personal tragedy versus medical policy set for a whole population, which are two entirely different things.
For any particular individual, especially those who end up dying from cancer, early detection would likely have mattered. Which is exactly where the problem lies: that is a large number of people, but still (much) smaller than the number of people who will end up with a positive cancer test. And policy is set by the outcome for the population as a whole, not for any particular individual.
In most places in the developed world cancer screening is already elective, but not for all types of cancer. Even so, how often are you going to do it? Once a year could easily be too slow to make a difference, and these tests aren't free, so a twice-yearly test of the whole population would wreck the ability of the medical world to do much else. This is a tough problem to solve, especially because wetware tends to be finicky to work on and tiny little details have a huge effect on outcomes.
Because things break down all the time. And everything can look like an early sign of trouble.
There are a lot of useless docs and surgeons waiting to perform unnecessary, expensive procedures, much like building contractors: "oh, that pipe is leaking, let's just replace this entire load-bearing wall, because we have this new cool machine that can." Second opinions are overrated because the majority don't care. There is endless demand for their trade.
My dad lost his hearing at 35 after they performed surgery to remove a tumor they detected. They detected the same tumor in my brother when he was in his 20s and wanted to operate, with no guarantee of hearing preservation. He declined. He is going to turn 40 soon and would most likely have been deaf for 20 years if he had gone through with it. He has minor issues with his balance, but other than that he is fine, even though the tumor gets tracked every couple of years and is still growing.
The reason this is not happening isn't a technical one. It's that people with a medical background will tell you this isn't a healthy utopia, it's a nightmare.
Medical tests are complicated. They often have significant false positive and false negative rates. Testing people at scale increases the number of people with wrong test results, and can cause harm if you start treating people based on those wrong results. The more healthy people you test, the more false positives you get.
The goal of evidence-based medicine is to use tests when they can help. It's not to test as many people as possible. This is reasonable: you want to improve patients' lives. Whether or not a test is improving patients' lives is often not easy to answer and has to consider many things.
The point of this is not that you get a definitive diagnosis; it's just a somewhat reliable marker on whether you may need to be concerned and get more accurate (but more harmful) imaging or invasive tests to rule out something.
You only need one false positive test result that ends up with a colonoscopy to understand how costly and uncomfortable these errors can be.
(I'm using an uncomfortable and illustrative example, but a colonoscopy is honestly pretty safe and boring as invasive clinical diagnostic procedures go)
Perhaps a better example than you intended -- recently colonoscopies are being considered more risky as a regular screening tool, because the procedure itself can result in harms, including (rarely) death. Not an expert here, but it seems they are still used in a fairly wide age group, but the recent trend is to be more cautious about their usage, as particularly for older patients they are more risky vs. the potential benefits. [1]
However I think this concern would be reduced if we had better first-line screening like the tests in TFA; the harms come from using a relatively-risky assay like colonoscopy for regular screening in healthy individuals, whereas if we had better noninvasive screening, the colonoscopy could be reserved for patients where there's a higher probability of a positive diagnosis.
Something around 1.8 million people are diagnosed with cancer per year. At a 50% false negative rate, all else being equal it would detect 900k of those.
And all else being equal, at a 0.5% false positive rate, if it were used as suggested it would incorrectly diagnose cancer in around 1.6 million of the population.
So around 60% of the people it says have cancer wouldn't have cancer. I guess it depends on what you mean by "somewhat reliable".
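To make that arithmetic explicit, here's a minimal sketch. The diagnosis count and test rates are from the comments above; the ~330 million screened population is my own assumption for "the population":

    # All inputs are the parent comments' figures, except the screened
    # population, which is an assumed ~330 million.
    diagnosed_per_year = 1_800_000
    population = 330_000_000
    sensitivity = 0.50       # i.e. a 50% false negative rate
    fp_rate = 0.005          # 0.5% false positive rate

    true_pos = diagnosed_per_year * sensitivity                # ~900k
    false_pos = (population - diagnosed_per_year) * fp_rate    # ~1.64M
    print(false_pos / (true_pos + false_pos))                  # ~0.65, the "around 60%" above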
Note this test is about early detection, not prevention.
If your pancreatic cancer is detected early, you will still need to have your insides rearranged to get it out.
There's a big difference (in success rate, in damage to the body, in costs) between treating a tumor at 1 cubic mm or less, versus a big tumor, versus a malignant tumor.
That's a critically better health outcome though. It's like fixing a corruption bug when you detect it on one server, versus finding out there's corruption in every shard and all of your backups.
> That's a critically better health outcome though.
Not necessarily. It depends on how old you are, what your genetic make-up is, whether the cancer is growing rapidly, whether it has easy access to other organs to spread to (or has already begun to do so) and so on. It's not a binary thing.
I'm not saying it isn't a great development. It certainly adds an option for early detection of some cancers where there previously were none. But it isn't cancer prevention. It is a highly sophisticated technical undertaking which, for educated and motivated patients in well-resourced health systems, will reduce cancer mortality and morbidity by some increment. Depending on many factors this could be a big or small increment. In the real world the efficacy will be less than what the trial shows. It is also likely that Grail (with north of $1 billion in VC funding) will charge as much as they can for this test. The size of the trials necessary to prove efficacy means that there will be highly constrained competition in this space.
The distinction with prevention is important, because we ultimately must aim for preventing almost all cancers, and not be content with anything less than that.
"Preventative care" refers to the field of medicine called preventative medicine/care/health, which specifically aims at making regular checkups and other monitoring facilities free and accessible in order to improve health via early-detection.
Oftentimes, doing these sorts of things lowers your insurance premiums or lowers costs if something does happen (not always, but sometimes). Such checkups are also generally free, as it costs the insurance company less in the long run if you're doing them frequently enough.
Perhaps a bit of a misnomer, but "preventative care" doesn't necessarily mean medicinal prevention of any sort.
A big part of this is the fact that learning information (about a disease or condition) early is often NOT the same thing as learning information that will change a patient’s clinical outcome.
Yup. Let’s say the test shows “very likely” for pancreatic cancer.
Now you do imaging. Ok, nothing there? Now what? Biopsy? That’s general anesthesia now and costly (for the patient or govt). Biopsy is negative. Now what? Start chemo? Watch and wait? For how long? Do a biopsy every 6 months?
None of these tests is 100% accurate. If broadly used, a false positive rate of just 0.1% will result in tens of thousands of people getting unnecessary testing.
I think for pancreatic cancer early detection just means it gives you more time to enjoy a slightly longer bucket list. For bowel cancers however, this could potentially buy many extra years since it's currently difficult to detect early.
One reason pancreatic cancer is so fatal is that it doesn't tend to get diagnosed until quite late, so this test could help here. But really early cancer (no identified mass) is more of a watch-and-wait thing.
I never understood this. I mean I totally get the concept, but I don't get how it came to influence policy. Put it like this: how many people are you willing to kill to make sure people don't find out they're sick? Because that's what it boils down to. And it sounds... sociopathic.
The idea is that treatment also has risks and false positives are an issue.
If you gave a mammogram to every 20-year-old woman, you would end up doing a large number of unnecessary biopsies and you'd find almost no real cancer. In the end, you'd lose more people than you would save.
Or that’s the idea. I’m no expert in this but it makes sense to me conceptually.
> If you gave a mammogram to every 20-year-old woman, you would end up doing a large number of unnecessary biopsies and you'd find almost no real cancer
If you gave a mammogram to every 20-year-old, you wouldn't do a biopsy when you got a positive. You'd increase monitoring and maybe suggest lifestyle changes. The same way we don't immediately catheterize everyone who comes back with high cholesterol.
How many people are you willing to kill with overly aggressive testing programs? Because that is part of the equation too. It's easy to ignore, because the chance of harm from non-invasive-ish testing like a blood draw is low.
The problem is that it compounds quickly.
Consider:
* For a rare condition you need a lot of tests to find the cases. Let's say you look for something that 1 in a million people can be expected to have.
* When you find those 1 in a million, the testing needs to save them, which means a proportion of them need to otherwise be significantly affected. Let's say 1 in 10 of them die.
* When you find those 1 in 10 million who have the condition and would have died without intervention, early intervention needs to actually make a difference. Let's say 1 in 10 of those actually survive because of early detection.
Now you have to do ~100 million tests to save one life.
Suddenly, 1-in-100-million odds of dying as a result of a visit for a blood test, be it from infection or an accident, are enough to neutralise the benefit. And that includes secondary effects, such as a delayed diagnosis because a false negative leads a patient to delay seeking a second opinion once symptoms present.
Additionally there's the opportunity costs in terms of saving lives by spending the money elsewhere, such as e.g. awareness of symptoms and the like, or addressing entirely different issues.
Of course, if you have a more common condition, and/or a condition that is much more lethal, and/or a condition where early intervention makes a difference, this all changes.
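As a sketch of how those ratios compound, here's the calculation as a small function. The inputs are just the hypothetical numbers from the list above, not real epidemiology:

    def tests_per_life_saved(prevalence, fatality, saved_by_early_detection):
        # One life saved per (prevalence * fatality * benefit) tests.
        return 1 / (prevalence * fatality * saved_by_early_detection)

    # The hypothetical rare condition above: ~100 million tests per life saved.
    print(tests_per_life_saved(1e-6, 0.1, 0.1))
    # A more common, more lethal condition where early detection helps more:
    print(tests_per_life_saved(1e-3, 0.5, 0.5))   # 4,000 tests per life saved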
But it's worth noting how little mass screening we do - as it turns out, it's hard to find conditions where the benefits are substantial enough to be worthwhile. In some cases, such as mass screening for breast cancer, there have been calls in some places to scale it back because it was unclear whether there was a net benefit.
You're missing a step. There are several responses saying basically the same thing (testing kills), and more or less randomly I'm going to answer yours. All have the same step missing.
Testing doesn't kill by itself, not in numbers worth mentioning. Treatments - sure, that's a whole different ball game; they're positively dangerous. But between testing and treatment there should be a specialist who crunches the numbers and comes to a decision. Which, as in the mammogram example above, is not always going to be more aggressive testing.
What testing does is give you more information, which in a remotely sane system should lead to better decision making. Of course, I can imagine insane systems where, for example, the patient decides, the insurance pays and the doctor can be sued for discouraging treatment. In this particular combo you probably want to avoid doing mammograms on 20-year-olds, because the chance of a false positive is 1%, the chance of a true positive is 0.01%, and you end up with perverse chains leading to healthy people doing chemo. Like I said in the original comment, I GET the phenomenon. What I don't get is how it can get even close to conventional wisdom that you want to avoid testing, as a rule. That's a particular fix to a particularly insane incentive combo, and common sense should make everybody rail against the insane incentive combo, not act like the niche fix is actually a goal.
I can't really explain how this came to be. Maybe people stumbled on an explanation of how extra testing _may_ be harmful, and the idea was so cool that it got stripped of context and became a meme in itself.
> Testing doesn't kill by itself, not in numbers worth mentioning.
Mammography involves radiation and pressure. The radiation alone is a significant risk [1], and kills by causing cancer. It can also cause tumors to rupture and spread malignant cells. That is enough to significantly raise the hurdle at which mammography is justified. It does not mean it never is - absolutely not. But it means screening programs need to be targeted.
> What I don't get however is how it can get even close to conventional wisdom that you want to avoid testing, as a rule.
It's not conventional wisdom that you want to avoid testing. If anything, the conventional wisdom used to be the opposite - that more testing is inherently good - and that is, for example, what led to a lot of really aggressive campaigns for extensive breast cancer screening.
What we saw was a small improvement in outcomes on a small number of positive test results, for a level of testing that suggested it was necessary to take into account other factors.
Breast cancer screening was rolled back in many places, or the widening of the age bracket was halted, as a result of looking at outcomes and realising that this "conventional wisdom" - that more testing was inherently good - was flawed.
There are clear, quantifiable harms from it, including the actual risk of causing cancer or spreading an existing cancer with mammography. These risks are low enough to be worth it for certain patient groups, but high enough to add up to problems if screening is too widespread.
People didn't start worrying about this because it was "conventional wisdom", but because the data shows people actually dying.
The study reads like a success story to me. They started from a hypothesis that testing reduces deaths overall, which was confirmed. Once they had more data they were able to find ways to optimize the process, like delaying testing for large-breasted women, who have a worse risk/benefit profile.
What I'm advocating for, btw, just to make clear, is regularly showing up to a doctor who will recommend all the cost-effective non-intrusive tests (like blood work), plus intrusive testing depending on your particular risk profile. This doesn't seem like a controversial opinion to me - more like a common-sense default.
As for conventional wisdom, look at this thread. Count the pro/contra comments if you want. Other than a tautological "some medical procedures can be dangerous if misused/overused", I still don't get how people can be against testing in general.
The way to optimise the process was to reduce the amount of screening.
It points out exactly why broad, indiscriminate screening cannot be assumed to be safe, and why "optimizing the process" means moving away from thinking that more screening is automatically better, and towards identifying the groups for which the benefits outweigh the dangers.
But the main reason for posting that link was to point out that the idea that testing does not kill is false for some types of testing, and as such you need to understand the risk vs. reward, or you may end up doing harm.
> What I'm advocating for btw, just to make clear, is regularly showing up to a doctor who will recommend all the cost effective non-intrusive tests (like blood work) plus the intrusive testing depending in your particular risk profile. This doesn't seem like a controversial opinion to me - more like a common sense default.
I haven't seen anyone here arguing against that. That's not what's been discussed by those of us here pointing out the dangers of mass-screening.
You assume that the treatment works and has no side effects; I think that is the main source of misunderstanding.
A great example is prostate cancer. It often gets detected now; people are informed they have cancer, but most often the correct answer is "watchful waiting", i.e. no treatment, probably forever. But people now know they have cancer, and are frightened, because cancer, and thus press for treatment. And this comes with a 10-90% (!) chance of incontinence and a 50-90% chance of erectile dysfunction - for a cancer that most probably wouldn't have caused them bigger problems for their whole life.
I’m not sure I understand what you mean. It seems like you’re framing my motivation as not wanting people to know that they are sick and being willing to trade peoples lives to achieve that outcome, which I don’t think is at all what my above comment says.
One issue is that your body may have all sorts of tissues that might be precancerous, or slow-growing cancerous growths, that the test will identify. Getting to them and removing them would do far more damage than leaving them alone. Even with things like prostate cancer, which is fairly easy to get to, leaving it alone is often the right choice, depending on the age of the patient and the speed of growth of the cancer.
That's already a thing, at least everywhere I've ever lived. You have a primary care doctor, you see them at least once a year, they do a physical exam, maybe some blood work. This test maybe becomes standard of care for everyone annually.
Maybe I have been living in a bubble. I remember doing an "annual check-up" in the US about 5 years ago, and it was super basic - blood pressure, sugar, cholesterol levels etc. They didn't even do detailed enough bloodwork to test for allergies. Definitely not looking for any disease vectors.
Now I live in DK, and my visits to the GP always involve hard attempts to convince my GP that something is really wrong and that I am not a crazy lunatic who is simply looking for attention. Although I have never directly asked for a check-up (while being healthy) to look for disease vectors. I'll ask and see what they say.
I'd prefer if this were outside of the general healthcare system, though. I don't want to take doctors and medical labs away from people who are actually sick _now_ and need those tests and that attention.
This approach actually works really well for conserving healthcare dollars. If a doctor feels like nothing is wrong, 99% of the time they'll be right. Sucks to be in the 1% who die of a cancer that would have been caught, but that's the trade-off.
The marginal return of each dollars spent on screening goes down quickly. At a population level you can’t justify it, but at a personal level you can.
Same for me in Germany. You are eligible for one checkup when you turn 35, that's it, lol.
Otherwise you have to convince the doctor a certain check is necessary. Usually doctors don't like that very much. I would even happily pay for it myself, but I still need to convince the doctor I'm not a hypochondriac.
That's not true. The costs of the checkup are covered once before you turn 35 and then every three years for anybody above the age of 35 [1]. Still, people have to be willing to do that checkup, make an appointment by themselves, and so on.
Still, what is and what isn't covered by public health insurance just seems so stupid. The focus is on cure instead of prevention, and not even particularly good cure at that. It's sad to see how much money goes to waste on useless treatments like homeopathy while people who really need proper treatment are stuck with the cheapest option that is paid for.
I don't think blood tests for allergies are a regular thing anywhere. They're generally done only if you can't have skin testing, and skin testing is something you go to a specialist for. It's not something that's screened for regularly.
Same in Germany (Hausärzte). Even if you have a specific complaint, you'll just be told to go home and drink a lot of water. They just don't have the capacity to test for causes.
I'm not sure if I can even pay someone to make extensive tests here. Never tried. Perhaps one needs to go to the US for that.
I am leaving my current physician because I had a very strange, black patch on my back, surrounded by a 'bleached', growing circle.
Due to COVID, I provided videos to her assistant, which the physician reviewed. The physician claimed she needed better pictures (obviously I had sent the best material).
The response to that was : the doctor needs to see the patch herself.
Then when I arrived at the appointment she basically treated me like shit. I have filed a complaint with her practice.
Did I reply to the wrong comment? I thought I was replying to one about how the Netherlands doesn't cover cancer screening under its national healthcare.
That's not good enough for cancer, because we don't even know how fast or slow it grows. Even if cancer cells multiplied only at the rate of a fetus, that would be too fast for an annual physical to catch. Congratulations, death.
Ideally something like a wearable computer or nanomachines in your blood stream automatically report anomalies for individualized treatment.
Given the amount of "I don't want Bill Gates injecting me with a tracker!", I have some idea of how a conversation with the same people about nanomachines in their body would go.
I would also bet on how short the time frame would be between this and the availability of your data on your health insurer's servers.
These are all in animals, and I specified human cancers for that specific reason. Thanks for the article though; I only knew about the Tasmanian devil one.
Yes, correct. But there are also some cancer-causing viruses that can spread from human to human (HPV, for instance, the cause of cervical cancer), and the case of that doctor is some proof that, at least in theory, cancer can spread by blood or tissue contact.
I've never lived anywhere where this was common. The public health system can't afford to fund it and most of the population doesn't want to (or can't afford to) pay for it themselves.
My experience is people don’t really care much about their bodies beyond the aesthetics. Most treat it like a dumpster and ingest whatever crap feels good and hope for the best. Caring too much makes you some sort of health freak in their eyes, with way too much time on your hands.
How do you think we get the rates of diabetes and obesity we have in the US, without that being true for most people in most places and most social circles? Rich social networks tend to live much healthier lifestyles than average. Normal is fried starch and tens to hundreds of ounces of soda more days than not, and exercise is maybe golfing sometimes or playing in a seasonal company softball league (while drinking way more calories than you're burning, almost certainly).
0.5% false positive rate is actually quite high. I don't have the numbers, but I'd be surprised if more than 1% of the people who would receive such a screening actually already have cancer, which would mean the test produces about as many false positives as true positives.
Picture a population of a million people, all receiving the test, where 1% of them have cancer (unknown to them). Half of those people get a positive screening result and half a negative one (51% successful identification), while 0.5% of the cancer-free population also get a positive result. Those two groups of positives are roughly equal in size.
That doesn't make it useless by any means, but it's not nearly as 'specific' as it sounds on paper.
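A quick sketch of those counts, using the assumed 1% prevalence and the article's 51.5%/0.5% rates:

    population = 1_000_000
    prevalence = 0.01        # assumption: 1% have (undetected) cancer
    sensitivity = 0.515      # article's true positive rate
    fp_rate = 0.005          # article's false positive rate

    sick = population * prevalence
    true_pos = sick * sensitivity                # ~5,150
    false_neg = sick - true_pos                  # ~4,850
    false_pos = (population - sick) * fp_rate    # ~4,950
    print(true_pos, false_pos)                   # roughly equal, as described above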
I would very much like to know the confusion matrix for all these component tests: the false positive rate, false negative rate, i.e., the sensitivity and specificity of each.
For example, the Prostate Specific Antigen (PSA) test is famous for a high false positive rate, inviting many men to worry needlessly or get unwarranted biopsies. The current state of any test and its history are important to know for both patient and doctor, so both can have an informed dialog about the options after a positive result.
Also, each of these 50 tests is going to evolve over time, as will the accuracy of the error rates for patient subpopulations. Race, age, ethnicity, co-morbidities and co-maladies will each shift the accuracy and precision of each test in ways that will make them much more useful if everyone stays well informed about their merits and demerits.
There's an interesting discussion raging on elsewhere in the thread about the possibility for cultural adaptation that would enable us to cope with statements like "the cancer test was positive, but there's still a decent chance you don't have cancer".
There's also an interesting question, taking your roughly 1:1 true to false positive ratio as an example, of whether the marginal true positive does more good for the world than the marginal false positive.
Side-stepping those questions though, they probably won't give this test to literally every person in the population. How much does the picture improve when you give this only to people over 50? Or only to smokers? You could still massively increase effective screening for cancer with a cheap and easy test if you combine it with enough population filtering to raise the true-positive share of positives to something like 70-90%, rather than the ~50% above.
That side-step was my actual assumption - I can't imagine that 1% of the _whole population_ currently has actionable cancer.
But yes, the higher you drive the prior, the less the false positives cost. That's a fairly normal situation - it's why they have to evaluate the cost-benefit of further investigation against so many different population groups to determine the actual optimal usage.
And yes, the question I was trying to prompt really is "what information would we need to have in order to know when this test is a net positive, from the patient's perspective" (and separately, "from the insurance company's perspective", since that's likely to have a very different answer).
I feel it's a net win anyway. If the procedure is non-invasive and cheap, you could take it every so often (yearly?) and, if something shows up, just double-check with a better method or w/e. Better than nothing, which is the current alternative.
The problem is people are idiots. They get a false positive, their doctor tells them "well, there are a lot of false positives with this test", but all people hear is "you have cancer." Then they get an invasive biopsy, because the healthcare system is glad to take your money.
I appreciate you stating it frankly, I think a lot of people in this thread are arguing a variation of this point without saying it aloud.
A couple thoughts -
1) If this is the only/main objection, then the test is good news for those that are analytically capable, even if it's potentially bad for Joe Median.
2) What you describe is probably how it plays out in general in the USA, but I wonder if it goes differently in, say, the UK or Canada under the NHS/Medicare. In those systems you don't get to pick your treatments; your doctor will only prescribe a follow-up if it meets ROI standards (then the health system picks up the bill). Of course you can go private, but there's a pretty big barrier there, and perhaps having your GP push back more strongly (as they would if you were not actually at high risk) might prevent many people from inferring too much from a potentially false positive result.
Right. If the objection is that it has a high false positive rate and doctors don’t understand Bayesian statistics enough to take that into account, then the solution isn’t to ban the test but instead require clinicians who interpret the test to learn Bayesian statistics better.
What if the followup procedure is invasive and expensive? It's possible that this procedure could be net negative in aggregate depending on the % of false positives and # of resulting needless followup procedures. Not to mention the emotional stress.
Exactly! This reminds me of the recent case of the USPSTF increasing the suggested starting age for regular mammograms. More testing means earlier detection of breast cancer, but a false positive means an invasive biopsy. These are the two sides that need to be balanced. The USPSTF decided that since the base rate of cancer is so low in women under 50, the posterior probability that you have cancer given a positive mammogram in the under-50 age group is considerably low. They therefore decided that the increased costs (monetary and emotional) do not outweigh the benefits.
"Screening mammography reduces mortality from breast cancer, including in women younger than age 50 years. However, screening mammography carries harms such as false positive results that can lead to additional imaging and invasive biopsy procedures, and overdiagnosis that could lead to treatment in patients who may not benefit from it. The USPSTF considered the balance of benefits and harms using a commissioned targeted systematic evidence review of randomized clinical trials and a decision analysis that compared the expected health outcomes of starting and ending mammography at different ages and using annual and biennial screening strategies; it concluded (in part) that routine screening should begin at age 50 years and continue biennially until age 74 years."
I've only skimmed it, but I've seen quite a few limitations, notably that those with "non-malignant conditions at enrolment" (it would be nice to know more about what that means), previous cancer, or recent corticosteroid use were excluded. Additionally, it's a case-control study, which doesn't always translate to screening tools (as mentioned in their own discussion).
The problem with screening is that you are doing something to healthy patients, so particularly for rare cancers even a small false positive rate is significant. In this case the specificity is 99.5%, so if you test 1,000,000 people, 5,000 (correction from an earlier 500) will come up falsely positive. This is great for common diseases, but if the disease is rare and only 1 in a million has it, then you have 5,000 false positives for every 1 true positive.
I think this will likely be a useful test (if it translates well to a wider screening population), but it’s not as good as it first seems.
>“non-malignant conditions at enrolment” (it would be nice to know more what that means)
My guess is they're referring to the neoplasm behavior codes defined by the WHO in the International Classification of Diseases for Oncology (see article's references for info). Which means "malignant" is a neoplasm which has begun spreading beyond the tissue it appeared in. Non-malignant neoplasms are either benign (not likely to ever spread), borderline (could go either way), or in situ (still in the original tissue, but it will spread).
North American cancer registries typically don't even bother collecting records of benign or borderline tumors. The only exceptions are brain tumors, which can be deadly without metastasizing. Registries also don't collect cervical cancer in situ records, because there are so many and a lot of physicians never bothered reporting them. And finally, oncologists and epidemiologists classify urinary bladder cancer in situ as malignant, because it's really hard for physicians to differentiate - better safe than sorry.
If you test positive, what's the probability that you actually have cancer?
(The article states a 0.5% false positive rate and about a 50% true positive rate, but I would need to know the prevalence of cancer in the population to compute what I am asking for.)
One of the cancers they mention is pancreatic cancer.
A quick Google search suggests the prevalence of pancreatic cancer in the population is 13 per 100,000.
So if you gave this test, with a 0.005 false positive rate and a 0.5 true positive rate, to 100,000 people, it would falsely flag about 500 healthy people and correctly detect only about 7 cancers.
So given a positive test result, there would be a 1 - (7/507) ≈ 98.6% chance you did _not_ have pancreatic cancer.
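Spelled out as a Bayes calculation (this keeps the comment's assumption that the 0.5% false positive rate applies to the pancreatic segment alone, which the next reply disputes):

    prevalence = 13 / 100_000   # quoted prevalence of pancreatic cancer
    sensitivity = 0.5
    fp_rate = 0.005

    p_positive = prevalence * sensitivity + (1 - prevalence) * fp_rate
    ppv = prevalence * sensitivity / p_positive
    print(1 - ppv)   # ~0.987: a positive almost certainly isn't pancreatic cancer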
The false positive rate of 0.5% refers to the chance that there is ANY false positive in the whole screening, not just a false positive on the pancreatic cancer segment.
"The test, which is also being piloted by NHS England in the autumn, is aimed at people at higher risk of the disease including patients aged 50 or older"
I believe the name for what you describe is "positive predictive value" or PPV - defined in the paper as the "proportion of true positives among those with a positive test result". According to the paper, their PPV for cancer detection is 44.4% (28.6%-79.9%, presumably the 95% CI).
The paper notes that PPV can be a more useful metric than sensitivity. Their multi-cancer approach includes some hard to detect cancers that decrease the overall sensitivity, but increase PPV.
Edit: the paper also states: "The extrapolated PPV reported here based on SEER cancer incidence and clinical stage distribution was 44.4% in the screening-eligible 50-79-year age group, which is higher than that of currently recommended screening tests, as PPV is driven by specificity and population incidence."
They also add the caveat that "studies in intended-use populations that will provide more accurate PPV estimates are ongoing".
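Since the paper says PPV is driven by specificity and population incidence, here's a small sketch of that relationship. The 51.5%/99.5% rates are the article's headline numbers, while the incidence values are purely illustrative assumptions:

    def ppv(sensitivity, specificity, incidence):
        true_pos = sensitivity * incidence
        false_pos = (1 - specificity) * (1 - incidence)
        return true_pos / (true_pos + false_pos)

    # Illustrative incidence values only; the test rates are the article's.
    for incidence in (0.001, 0.005, 0.01, 0.02):
        print(incidence, ppv(0.515, 0.995, incidence))
    # Under these rates, the paper's 44.4% PPV corresponds to an
    # incidence of roughly 0.8% in the screened group.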
"It correctly identified when cancer was present in 51.5% of cases, across all stages of the disease, and wrongly detected cancer in only 0.5% of cases."
Doesn't this seem kind of low? Just a bit better than a coin flip. Of course, it rises to 65% and 87% for certain circumstances and the false positive rate is low, but it seems like a lot of cancers could fail to be detected and give a false assurance patients are cancer free. When symptoms emerge later they may be less concerned with getting it checked out. Is this in line with standard performance of tests?
This is much better than a coin flip. A coin flip would give about 50% true positive rate (which is about the same as the test), but it would also give about 50% false positive rate.
To put it in perspective, imagine 10 people have cancer in a population of 1,000 (a rate of 1%, which is still too high compared to what I think the real number should be). The test would fail to identify 5 of the people with cancer, and it would say that 5 of the people without cancer have cancer. So overall it would misidentify 10 people. The coin flip would misidentify 500 people.
Amusingly in this example if you have a "test" that just says nobody has cancer it would also misidentify only 10 people :) I think this is why they are reporting true positive rate and false positive rate.
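The same example as code, comparing the three "tests" by total misidentifications:

    population, sick = 1000, 10   # the example above: 1% prevalence

    def misidentified(tpr, fpr):
        # false negatives among the sick + false positives among the healthy
        return sick * (1 - tpr) + (population - sick) * fpr

    print(misidentified(0.515, 0.005))  # the blood test: ~10 people
    print(misidentified(0.5, 0.5))      # coin flip: ~500 people
    print(misidentified(0.0, 0.0))      # "nobody has cancer": 10 people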
Is it worth taking blood samples now and keeping them in a freezer, then waiting for the testing tech to arrive so I can have them tested later, to see the progression of whatever disease I get when I'm older?
yes, having mysterious blood in your freezer is a good conversation starter for guests too. Bonus points if you get creative with the labels for each sample.
"Worth" doing is situational but, assuming you have the personal resources to spare and the interest in doing so, grabbing some snapshots in time might be useful to you in the future. That said, I'm fairly sure it would not have good ROI to do that as a broad public health measure. I'm more assuming we're talking about a 40-something affluent tech worker who has all the basic needs of life already well-covered and is up for spending time and a few thousand dollars which may well be wasted.
It's not so much that "new" tests are incoming, it's that last time I checked, a really large, thorough lab had a menu of over 900 different blood tests they could run. Just for grins, I did Select All and it would be several thousand dollars to run them all (plus who knows how many gallons of blood draws over time). To me the point is if, in the future, you have a new reason (symptom or indicator) to look at one specific test, if your immediate test shows positive you could go back and run that specific test on earlier samples to establish a baseline or perhaps even a progression rate (assuming it is a chronic condition which develops gradually over time).
That’s what scientific studies will produce. They won’t need your particular blood for that. You as an individual only care if you have cancer at the time of the blood test.
There is a big discussion in this post about iatrogenics being introduced by these tests. It sounds like the fear here is that it'll tax an already burdened system with more patients (some of which just got a false positive).
I'm failing to understand how that's any different from other blood tests: they signal a problem, then more has to be established to validate the signal. Nobody is saying "this test proves someone has cancer" and I'm pretty sure doctors already have to discuss how tests can be inaccurate with patients. I believe it's the case that symptoms of cancer (just like other illnesses) may be ignored exactly because there aren't any other signals to indicate cancer. Plus zebras and horses and all of that too. Perhaps someone does have symptoms but nobody connects the dots, and these tests might connect those dots. Is there some reason that this argument is invalid in medical science?
Chalk it up to the techno-rationalist community saying doctors can't handle bayesian statistics, on the basis of results of brain teaser surveys, and then extending that to anything related to testing or medicine.
> The test, which is also being piloted by NHS England in the autumn, is aimed at people at higher risk of the disease including patients aged 50 or older.
Does anyone know if the study was performed with a population that matches this description? Curious if the rates are in a general population or this higher risk group.
The whole area of tests that give definitive answers is one where healthcare can't get enough. The biggest health issue many people have is the time from the first sign of a problem to getting a correct diagnosis, whatever the ailment.
No, not according to the statistics. You are orders of magnitude more likely to be harmed from unnecessary diagnosis/treatment (including cost and psychological damage) than to be undiagnosed and have a worse outcome as a result. We find most things that we are capable of finding.
If it's detecting 50 types of cancer and the patient doesn't have symptoms, do they just do a full body MRI to find the source? If so, why not just do full body scans which can find other issues, like aneurysms or the other almost 50% of positives that it missed?
I get that cost is a big issue, but it seems like the test is missing a lot and you might get more bang for the buck with a periodic MRI from the perspective of the number of potential issues it can find. Either way can result in false results.
So, the question of "what next" after a positive result on one of these tests is still... open. The Grail test provides indication of likely tissue of origin, so a likely first step may be a targeted study (e.g. colonoscopy if it said a colonic source, MRI if it said pancreatic). There may be role for PET/CT as well to further stage and assess for metastases, perhaps after finding a lesion.
What to do if your blood test is positive but the workup is negative? Lots of discussion but nobody is quite sure.
As for a periodic full-body MRI, I will say that currently, uh, most of those are garbage. Unfortunately, for a full-body MRI to be practical (that is, to not take hours and hours), you have to run very few sequences. (For example, a dedicated MRI of, say, your brain or your liver alone could take about an hour each.) As a result, you greatly reduce your sensitivity for most pathologies, which is kind of counter to the point of the MRI to begin with.
Oh, so the test does give the source. I must have missed that part of the article thinking it was only identifying that there was a source in the cfDNA that was abnormal, but not realizing they could determine where it came from.
I mostly agree. I had a CT scan of 1/3 of my body for a specific issue, but as a consequence I got very high confidence that I was cancer-free in that portion of my body. It is such a good feeling that part of me wants to pay out-of-pocket to get the rest checked.
I think though that this test has massive value in earlier detection, cost, and remote lab work.
Dude, CTs can be risky. They use ionizing radiation, which increases your risk of cancer. It's really only an issue with repeated use, but I think there's a study out there claiming 50k early deaths each year can be attributed to repeated use of CTs. I don't have any medical background, but personally I would prefer imaging that does not use ionizing radiation when possible, just to save up some "credits" for when I may need to use the other type (like CT for possible stroke).
Took me quite some time to find the paper that seemed to describe the ML algo[1]. Pretty disappointingly domain-specific and barebones ("logistic regression").
And indeed, model form was so unimportant that it was relegated to the supplement. [2]
Yet again evidence that access to data and domain knowledge trump fancy ML algos 10/10 times.
[2]
> Custom software was built to classify samples using source models that recognized methylation patterns per region as similar to those derived from a particular cancer type, followed by a pair of ensemble logistic regressions: one to determine cancer/non-cancer status and the other to resolve the TOO to one of the listed sites (supplementary information)
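For intuition, here is a toy sketch of that two-stage setup: a first logistic regression for cancer/non-cancer, and a second for tissue of origin (TOO). Everything here (feature counts, random labels, single models standing in for the paper's ensembles) is made up for illustration; this is not Grail's actual pipeline:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 50))        # stand-in per-region methylation scores
    has_cancer = rng.integers(0, 2, 1000)  # stand-in cancer/non-cancer labels
    too = rng.integers(0, 5, 1000)         # stand-in tissue-of-origin labels

    # Stage 1: cancer vs. non-cancer.
    stage1 = LogisticRegression(max_iter=1000).fit(X, has_cancer)
    # Stage 2: resolve TOO, trained only on the cancer samples.
    mask = has_cancer == 1
    stage2 = LogisticRegression(max_iter=1000).fit(X[mask], too[mask])

    x_new = X[:1]
    if stage1.predict(x_new)[0] == 1:
        print("predicted TOO:", stage2.predict(x_new)[0])
    else:
        print("no cancer signal detected")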
I don’t have much to add to the discussion here. Sensitivity is poor, specificity is pretty good and could be better if used as a confirmatory test(?). What I found interesting in the comments is how many people involved in healthcare lurk on a forum very much not dedicated to healthcare. Kind of cool. You all should chime in more, I learned a lot.
> It correctly identified when cancer was present in 51.5% of cases, across all stages of the disease, and wrongly detected cancer in only 0.5% of cases.
While I think this is a great step forward, how can this be described as highly accurate when it missed cancer in 48.5% of the people who had it?
I don't know about "highly accurate", but certainly pretty fucking good. Because for most of those cancers for most populations, the asymptomatic detection rate is much, much lower. Like close to 0%.
There is no general population screening test for, say, pancreatic cancer or ovarian cancer etc.
One day you'll put your finger on a sensor on your phone that pricks you to get a drop of blood and analyze it right there on the spot. Kinda like square does for mobile credit card payments.
Maybe that's how you'll unlock it too, which might help with phone addiction /s
What makes this even more revolutionary to me (retired anesthesiologist) is that the methodology allows for use of Theranos-style finger-prick-size blood samples rather than IV blood draw, since fragment identity is the indicator rather than level/concentration.
The concentration of cell free DNA in blood plasma is generally in the nanograms/mL range, meaning most cfDNA assays will require at least a ~mL of plasma input for sufficient sensitivity / reproducibility. We aren’t quite at the capillary blood level of sensitivity yet.
Why wait until 2023 for a larger study? This is a game changing technology that should be pursued like the COVID vaccine. We should be dumping billions into this across the globe to improve its accuracy and drive costs down.
There are multiple academic startup companies now trying to do cfDNA tests. They have all acquired patents from a paper or two. I would be wary of many of the early-stage ones, as they are not truly proven and have not demonstrated scale.
So many tech people here thinking they know how the entire medical industry works... medical problems and the research that arises from them are staggeringly complex and messy. Please... if you've never had experience with this kind of thing, don't pretend that you do. Some uninformed person might see your comment and think you actually know something.
One of my great fears in life is that employers/insurers will impose this en masse on people, causing immeasurable harm to the mental health of people who can’t handle the anxiety that this type of approach to medicine would cause. I’m definitely a big proponent of evidence-based medicine and using it to help living quality life as long as possible, but this sort of thing would just wreck my day to day existence and destroy my quality of life. I am not a server in a data center. I am a human with consciousness and emotions, the desire to live, and a fear of suffering and death.
One of your greatest fears in life is being regularly checked for diseases that would end your life unless they are detected in time?
...What?
Edit: you mean the emotional distress caused by the process? But that's just in your head, and it's beyond irrational. Getting checked is inherently a good thing. Even if you are extremely anxious, the people who love you want you to live - this is a psychological problem that you should work through, not a problem with the testing.
Yes that is right. I make no claims to being a rational being. I am at best pseudo rational. As much as I love computers, I make no claim to being one. I am a human.
And yes, I do also greatly appreciate evidence-based medicine. There are many terminal illnesses where there is no evidence that constant screening improves outcomes. My impression is that people successfully market products like this to give people a false sense of control through micromanagement where, given the state of the art, control is lacking.
That still escapes the main argument: your problem isn't the medicine, your problem is your anxiety.
If "this sort of thing would just wreck your day to day existence and destroy your quality of life" (let your own wording sink in), you should work through that exact problem, probably rather sooner than later.
Detecting diseases before they actually destroy your life isn't bad, it's good - and so you should perceive it as good. Not because you're a rational actor, but because it's literally, by definition, in your personal best interest. Take a step back from the panic, just purely observe, and try to see it for what it is: it prevents real suffering - your own very real suffering - and the price to pay is working through imaginary suffering.
Getting your blood checked shouldn't feed your fear of death; the exact opposite should be the case. Try to see it for what it actually is, instead of intuitively letting panic mode take over. If your intuition doesn't serve your well-being, proactively go ahead and fix your intuition - you can.
> My impression is that people successfully market products like this to give people a sense of control where, given the state of the art, control is lacking.
There can be no such evidence for a screening method like this, because it is brand new. We may find that even detection as early as Grail provides is still insufficient, but we don't and can't know yet. Certainly with many cancers earlier detection does lead to better outcomes. You also seem to be focusing on cases where screening has not improved outcomes while ignoring cases where it has.
No, you are flat out wrong and anyone with an understanding of good health policy will understand this. Screening can lead to iatrogenics and unnecessary costs. It is not automatically a net gain, it has to PROVE that it makes patients live longer or better, just like any other intervention. And in many cases, screening policies actually make things worse for all kinds of reasons.
Do you mean the psychological impact of the screenings, constantly wondering if you have cancer, if the test missed anything etc, rather than fear of the procedure itself?