
It seems to me the use case should be to have the radiologist look at a scan for tumors. Then, the algo should look. If they disagree, then the radiologist should look at the difference.

It'll be the best of both.

And for the scans where the algo is wrong, have the scan added to the algo's training data.
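For concreteness, a minimal Python sketch of that triage step, with made-up region IDs (nothing here refers to any real system):

    def triage(radiologist_findings: set, model_findings: set):
        """Compare two independent reads of the same scan."""
        agreed = radiologist_findings & model_findings
        disagreements = radiologist_findings ^ model_findings  # flagged by only one reader
        return agreed, disagreements

    # Example with made-up region IDs:
    agreed, disagreements = triage({"R1", "R3"}, {"R1", "R4"})
    print(agreed)         # {'R1'}  -> accepted
    print(disagreements)  # {'R3', 'R4'} -> radiologist re-reviews only these
    # If the final read differs from the model's, the scan (with its final
    # labels) goes back into the model's training set.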




Unfortunately, a lot of the time hospitals can only afford one or the other. These systems are very expensive, and radiologists and cytologists aren't exactly cheap either. But I agree, both would be good, especially considering the volume a cytologist is expected to screen in a single day.


> These systems are very expensive

Seems like a business opportunity for a cloud AI screening provider.


You point out another business opportunity: a developer who understands exactly what regulatory hurdles you need to jump in order to release medical software. I'm not sure exactly what's required in this case, but I'm doubtful there are many cloud providers who are HIPAA compliant.


I'm not sure it would need to be regulated, any more than a medical textbook needs to be regulated. The radiologist would still be making the decisions. As for privacy, only the x-ray is sent, with no personal information whatsoever.


If the radiologist has to look at and double-check every scan the algo looked at, then what is the point of the algo? Seems like a useless middleman that gets in the way.


Screening is hard, tedious work, so even trained professionals regularly miss things. The true-positive incidence rate is under 1% in the screening population.

There have been studies showing significant improvement from double-reading mammo, for example (i.e. two radiologists, independently). Using an ML approach is trying to give you some or all of this benefit without the cost of redundant reads.
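As a back-of-the-envelope illustration, assuming (unrealistically) that the two readers' misses are independent and each catches 85% of cancers:

    single_sensitivity = 0.85
    double_read_sensitivity = 1 - (1 - single_sensitivity) ** 2
    print(double_read_sensitivity)  # 0.9775 -- misses drop from 15% to ~2%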


>then what is the point of the algo?

The point is that the algorithm can improve results. This isn't ad placement, it's people's lives. Checking and double-checking should be the norm.


Better to implement a system with a high rate of false positives (more importantly, a low rate of false negatives) from the machine learning component, with all positive findings passed on to the radiologist. If the system can reliably (big if) filter out 98% of the chaff, then the radiologist can spend a lot more time separating the false positives from the true positives. This approach has worked well for me so far.
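A rough sketch of how one might pick such an operating point (scikit-learn here; the 99% sensitivity floor and the variable names are placeholders, not how any particular deployed system does it):

    import numpy as np
    from sklearn.metrics import roc_curve

    def pick_screening_threshold(y_true, y_score, min_sensitivity=0.99):
        """Highest score threshold that still meets the sensitivity floor,
        i.e. the operating point that discards the most negatives ("chaff")
        while keeping the false-negative rate low."""
        fpr, tpr, thresholds = roc_curve(y_true, y_score)
        ok = tpr >= min_sensitivity
        i = np.argmax(ok)  # first qualifying point = lowest FPR among them
        return thresholds[i], tpr[i], fpr[i]

    # Hypothetical usage with held-out validation labels and model scores:
    # thr, sens, fp_rate = pick_screening_threshold(y_val, model_scores)
    # print(f"sensitivity {sens:.1%}, {1 - fp_rate:.1%} of negatives filtered out")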


This approach is problematic in medical screening applications, mainly because you don't want to increase the work-up rate for false positives. If follow-up involves a biopsy and the screening population is large, you will eventually (indirectly) kill people this way, so there is pressure to control the FP rate.


Because the scan check by the radiologist becomes a _double_ check.



