Hacker News

No, not necessarily. Check out these numbers:

Sensitivity is what fraction of the affected people are actually found. Here: 90%, so 10% are missed.

Specificity is what fraction of the unaffected people are detected as such. Here: 95%, so 5% wrongly detected ("false positives").

In Europe, there are 60 cases of lung cancer per 100 000 people.

That makes 54 correctly detected per 100 000, missing 6 cases. That also means 5000 people incorrectly suspected of lung cancer (5% of 100 000).

Update: using the accuracy from the article itself, we would still get a total of 1000 false negatives (affected but not detected) and false positives (unaffected but wrongly suspected) combined. Incidence is still 60/100 000.
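Walking through that arithmetic in a short sketch (sensitivity, specificity, and incidence as given above; note the "5000" rounds 5% of the full 100 000, while strictly the false positives come only from the 99 940 unaffected people):

```python
# Screening arithmetic with the numbers above:
# sensitivity 90%, specificity 95%, incidence 60 per 100 000.
population = 100_000
affected = 60                       # incidence per 100 000
unaffected = population - affected  # 99 940

true_positives = 0.90 * affected             # 54 correctly detected
false_negatives = affected - true_positives  # 6 missed
false_positives = (1 - 0.95) * unaffected    # ~4 997 wrongly flagged
                                             # (rounded to 5 000 above)

print(true_positives, false_negatives, false_positives)
```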




Yup. And some of those follow-on tests could be quite invasive and risky in their own right. With that kind of false positive rate, you'd be risking the lives of a not-insignificant number of people.

Either that or driving up healthcare costs significantly as those 5000 people are going to need an MRI or CAT scan or something else to rule out cancer.


What follow-on test for suspected lung cancer would risk the lives of a not-insignificant number of people?

An MRI without contrast has no impact. An MRI with contrast has relatively little impact. A biopsy would only be done if the MRI with contrast lit up areas of concern. At the point a PET is ordered, you have narrowed the false positive pool substantially and probably want the scan no matter what.


Just telling millions of people per year they might have cancer - say the MRI is booked two weeks out, and they spend the whole time majorly stressing about it regardless of how low you tell them the risk is - is going to have some crazy societal impact.


I typically have several MRIs per year. I'm not claustrophobic and don't stress about them, but sometimes MRIs cause me physical discomfort, even without contrast. With contrast, it's slightly worse, it seems.

I think it's related to a genetic disorder that causes me to retain high levels of iron, but I have nothing to back this up.

This discomfort in the MRI comes in the form of waves of warmth I feel throughout my abdomen. It's mostly annoying/distracting and not painful. The loudness of the MRI is far worse than the slight warming I feel. Less annoying than the pee-your-pants feel of a CT with contrast.


I believe the stress would come from the uncertainty of whether or not they have cancer, not from going into the MRI machine.

Just looking at myself, I feel a great level of distress over other, less serious, variations of uncertainty. Having been told I might have cancer, and then not getting to know for certain this very instant would be nothing short of nerve-wracking...


The warmth is caused by the RF pulses and has nothing to do with the magnet or iron levels. Basically the scanner behaves the same as a microwave oven.

That's why you have to enter the patient weight and size before scanning, in order to keep the SAR within acceptable values and limit body heating.


My father has hemochromatosis, which causes high iron uptake. He didn't discover this until his fifties, and it's like he has rust in his joints. I did 23andMe and I only have one allele, so I don't have it. If you suspect you do, a DNA test will tell you. Please get that checked out. You can deal with it if it's caught early in your life.


I've been tested, and I have hemochromatosis. That's what my next MRI is for - to check on my liver.

So far, for me, it's just monitoring, plus some dietary changes such as cutting back on red meat and high-iron vegetables (I mostly cut out spinach, which sucks because I've loved spinach my entire life, especially the Greek dish spanakopita).

My blood iron levels fluctuate a lot based upon diet. I've mostly got it under control, but normal for me is still on the high end of the normal range for most people.

It probably went undiagnosed for a long time because I used to give blood regularly, but I had to stop when my work hours changed to 7am-7pm and I could not make it to a facility to donate. That's when blood tests started showing a problem. I'm not yet on scheduled phlebotomy, but it's probably the best option if it becomes an issue.


Quite rare to jump straight to an MRI - a chest MRI is a half-hour deal. More common to do a chest CT, which has a 1:500 risk of causing cancer.


Does this mean that out of the 5000 false positives, 10 people will actually GET cancer by doing a follow-up CT exam?


We don't really know. Data that show X dose of radiation increases the probability of developing cancer by Y% is based on studies on survivors of atomic blasts. The number of people exposed to an amount of radiation equivalent to a handful of CT scans is pretty small relative to the effect size that's claimed, so there is reason to be skeptical.

See https://www.scientificamerican.com/article/how-much-ct-scans... for a jumping off point into this type of research.


"1:500 risk" - citation?


https://www.health.harvard.edu/staying-healthy/do-ct-scans-c...

Looks like I may have been misrepresenting the risk to some patients; also looks like I've probably caused around 4 cases of cancer.


It doesn't have to instantly and directly kill somebody to be bad in aggregate.

This kind of analysis has been done, most memorably for breast cancer screening. The conclusion I recall was that it did more harm than good a few years ago (opportunity cost of unnecessary spending, pain and complications of biopsy, unnecessary mastectomy, psychological damage, etc. etc.). The follow-up tests and analysis also have an error rate and no treatment is zero cost.


Not only the risk of contrast agents, but possible sedation or just plain medical errors.

It might only be 1 or 2 people out of 5,000, but those 5,000 were perfectly healthy and never had cancer to start with.


That's also why there was no screening for thyroid cancer after above-ground nuclear weapons testing. Thyroid cancer itself isn't that hard to treat. So the risk of follow-on testing outweighed cancer risk.


And that's assuming that the 99% accurate really means 99% sensitive, 99% specific.

To amplify your point,

99% sensitive from 100000 people with an incidence of 60 means 1 false negative, assuming you can't detect .4 of a person and floor to integer.

99% specific from the same pool means 999 false positives, same assumption.

You mentioned that re: 1000 total, but the kicker:

Total population, 59 true positives + 999 false positives.

So, if I test positive, absent any more knowledge that means it's a 59/(999 + 59) chance of being true, or around a 6% chance of being true.

Probably enough for followup testing, but an interesting demo of why the statistical accuracy is meaningless unless you also know the actual incidence. 99% becomes not many % right quick.
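That positive-predictive-value calculation, written out (same numbers as above: 99% sensitivity, 99% specificity, 60 cases per 100 000):

```python
# Chance that a positive result is a true positive (base-rate math),
# with 99% sensitivity, 99% specificity, incidence 60 per 100 000.
affected = 60
unaffected = 100_000 - affected

tp = 0.99 * affected    # 59.4 true positives
fp = 0.01 * unaffected  # 999.4 false positives

ppv = tp / (tp + fp)    # ~0.056, i.e. about a 6% chance
print(f"{ppv:.1%}")
```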


The article says 99% accuracy for 13 cancer types.

Some cancers like pancreatic are a death sentence because it's usually caught too late.

"Toshiba says its device tests for 13 cancer types with 99% accuracy from a single drop of blood"

"The test will be used to detect gastric, esophageal, lung, liver, biliary tract, pancreatic, bowel, ovarian, prostate, bladder and breast cancers as well as sarcoma and glioma."


The numbers were taken from a parent poster's comment to illustrate that things are not necessarily what they seem with probabilities.

Those particular types of cancer also lead the list of cancer deaths by type, by the way. See https://ourworldindata.org/grapher/total-cancer-deaths-by-ty...


Yes, but a 5% false positive rate raises the cost of a large-scale screening campaign a lot.


Cancer diagnostics do not rely on a single test. There's a number of (more expensive) metabolic and endocrinology panels, imaging, biopsies, etc. that follow to validate.

The idea is that you have something cheap and easy up front before or in parallel to further downstream diagnostic procedures.


Damn, I knew I sucked at probabilities, but apparently I didn't know how much.


I think it's pretty intuitive once your attention is called to the issue. The specific numbers aren't even that important. The general principle is that a screening test for any condition that is rare compared to the error rate isn't going to be very useful due to the large number of false positives. And most severe conditions we want to know about are rare.


We all suck at probabilities on an intuitive level. We better do the math and take the results seriously.


Even though those false positive rates are high, I don't think that in itself is enough reason to dismiss the idea.

You'll still be able to identify a pool of people that, as a group, will develop this cancer at a rate roughly 20x above the normal population. That still seems like a big deal: for instance, if I discovered I had a genetic factor that made me 20x more likely to get a particular cancer, I think I would want to be tested for it out of precaution. This seems like the same thing.

(Now if the only further test you can do is itself super invasive or risky, that obviously has to be weighed into the decision too).
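A rough back-of-the-envelope check of that "20x" figure, using the 90%/95% numbers from upthread (my own sketch, not from the article):

```python
# Risk of cancer given a positive test vs. baseline risk,
# with sensitivity 90%, specificity 95%, 60 cases per 100 000.
tp = 0.90 * 60              # 54 true positives
fp = 0.05 * (100_000 - 60)  # ~4 997 false positives

risk_if_positive = tp / (tp + fp)  # ~1.07%
baseline_risk = 60 / 100_000       # 0.06%

print(risk_if_positive / baseline_risk)  # roughly 18x
```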


It's kind of silly to argue from first principles when you can ask the people who study this stuff, epidemiologists. 90% detection and 95% specificity would be horrible numbers if you asked any of them.


Isn’t the gain of this kind of machine that you are testing many people that weren’t tested at all before? Some of those will get a scare for nothing, but more will find out earlier that they have cancer.

If all it takes is a drop of blood (as opposed to more invasive tests) to know with ~90% accuracy if I have cancer or not (and when the machine says I do, then do a more accurate follow up test) then it’s far more likely more people will get diagnosed sooner.


It's probably not as simple as I think it is, but if the test is cheap enough, couldn't they just run it again on everyone who tests positive, just in case?

If run twice, we'd have: 49 correctly detected, missing 11 cases, and 250 incorrectly suspected.

Run three times, keeping the two most similar results, we'd have: most people correctly identified?
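A sketch of the repeat-testing idea, under the (optimistic) assumption that each run's errors are independent, which may well not hold in practice:

```python
# Requiring every repeat of the test to come back positive,
# assuming errors are independent between runs.
# Sensitivity 90%, specificity 95%, 60 cases per 100 000.
sens, spec = 0.90, 0.95
affected, unaffected = 60, 100_000 - 60

def repeat_positive(runs):
    tp = affected * sens ** runs          # cases still detected
    fp = unaffected * (1 - spec) ** runs  # remaining false positives
    return tp, fp

for runs in (1, 2, 3):
    tp, fp = repeat_positive(runs)
    print(runs, round(tp), round(fp))
```

Two runs reproduce the 49 detected / 250 falsely suspected figures above; a third run cuts false positives to about 12, but every extra run also misses more real cases.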


That would decrease the number of people correctly identified. And you're also assuming that the misclassification of people is a random phenomenon. In the real world that wouldn't be the case, and some types of blood may get misclassified by the system every time.


Is the "error" for these tests independent when performed on the same person, spaced out over a long enough time period?

Say you run the test every day/week/month, can you look at the total results or do the failure cases for the tests themselves depend on the individual?


Many diagnostic blood tests for cancer markers are more useful as a data series over time than as single point in time tests, because individuals have variable "normal" or baseline levels. You establish an individual baseline for each marker (for each patient) and then re-test every 6 months. If a number moves from its baseline, then you investigate.



