I'm a radiology resident with a master's in computer science. It's great that healthcare organizations are finally releasing larger datasets. There's much progress to be made in medical ML/AI.
However, my attitude towards radiology AI startups has changed quite a bit since starting residency. There really isn't a sustainable business model that involves selling standalone AI products, or even software suites. I think AI is a feature for the scanner, the PACS or for existing dictation software. That is, I think PowerScribe or GE would help a lot of radiologists by integrating AI features into their existing tech stacks.
I also think the only feasible exit strategy for radiology AI/ML startups is to get bought out for their tech talent and algorithms. I don't know of any hospitals/clinics/imaging centers that are buying AI products directly from startups. There's a ton of hype, but I'm not aware of a single positive-cash-flow radiology AI company.
As a resident, you should know that if you think it's appropriate for treatment, you basically have free rein to do what you want to with the MR data.
If someone has made a free ML/AI tool, you can use it.
What you're worried about is valid: is there a business model?
And there doesn't need to be, for it to make sense.
Research hospitals could be funding people to spend time developing new ML/AI algorithms, and then releasing them for free for people like you to use.
More's the point, every country with socialized healthcare could be doing this, too.
The FDA may indeed be making it impossible to make a profit from a business developing ML/AI, but that doesn't mean we shouldn't find a way to do it, not if it actually improves patient outcomes.
> More's the point, every country with socialized healthcare could be doing this, too.
More to the point, every country with socialized healthcare should be collaborating on software, algorithms, and "AI" models that help everyone deliver better care more efficiently. Each dollar, euro, etc. goes so much further when everyone can build off of each other's work.
This makes some sense both on tech and sales sides.
Tech-wise, my guess is the benefits of AI/ML are best exploited at scan time with a scanner integration, rather than after the fact. Specifically, if the algorithm picks up something it can immediately dive in for a closer look while the patient is still there, shortening the turnaround time by weeks. There's also the opportunity to co-develop / tune AI/ML to specific types of scanners.
Sales-wise, selling scanners is already a thing, and you don't have to create a new sales channel or figure out how to get buy-in. If a scanner works faster or better than others, hospitals already have a way to understand that through their procurement processes. And you may gain a higher margin by bundling AI/ML reading with hardware.
I agree that AI has potential on the scanner. My academic research uses HeartVista's RTHawk software, which is essentially a driver to enable real-time MR imaging on magnets where the original manufacturer's pulse sequence API doesn't offer good real-time scanning options. I don't use it for its intended purpose of cardiac imaging, but apparently their latest version takes an AI approach to the problem of locating the heart within the field of view, determining its orientation, and prescribing imaging planes through the heart so all the important views that a radiologist would want can be acquired quickly.
GE might even be on the way to integrating some AI into their scanner tech stack. As of version DV26 you have to specify the anatomical location when you start scanning, and it inserts a SAR scout sequence at the start of the protocol. The SAR scout is apparently looking for shapes matching the indicated anatomy, and if it doesn't find them (say, you're scanning a water-filled phantom that's not shaped like the head/brain you specified) it complains that the SAR estimate won't be as good as it could be. I don't know how they've implemented it, GE might just be using some simpler computer vision for this, but it's the sort of problem that could be well served by AI.
> their latest version takes an AI approach to the problem of locating the heart within the field of view, determining its orientation, and prescribing imaging planes through the heart so all the important views that a radiologist would want can be acquired quickly.
It seems like reducing the cost/making MRIs more comfortable would be much more useful.
Presumably, if the machines didn't cost $3-5 million and have huge energy requirements, you could get scanned a lot more often, which would probably be better than just using AI/ML?
If you were able to scan cancer patients much more cheaply and quickly, I'd assume that would have a significant impact.
MRIs are typically (from my perspective working in the health insurance industry) profit centers for those who own them.
Which leads to perverse incentives, like MRI manufacturers assisting physicians with setting up imaging consortiums, promising recoupment and profits within 18 months. Perhaps unsurprisingly, such doctors order imaging notably more often than others.
Low field MRI machines are around $1M. $3M gets you a brand new, state of the art 3 tesla machine.
Many states limit supply of MRI machines through a "certificate of need" program where you have to convince a regulatory body that you should be allowed to own one.
Not only are the machines costly at the initial purchase, the helium cryogen for keeping the magnet in its superconducting temperature regime is also expensive, and has to be topped up periodically. I hope that "high-temperature" superconductors (where liquid nitrogen is sufficient) can eventually replace the conventional superconducting wire currently used in MRIs. Helium is getting rarer all the time, but liquid nitrogen is cheaper than milk.
It's also worth noting that costs are high in part due to bureaucracy and bad laws: you can't open up a dedicated imaging center if you can't get the required "certificate of need" in your area[1]. In such places, the existing hospitals can monopolize a region and prevent imaging centers from being built. More scanners could bring down costs and allow patients to be scanned sooner.
I read something, maybe here on HN, about a company trying a new preventive medicine policy for their C-level execs, where every one of them would get an annual full body MRI. The rationale was that the execs were so valuable to the company that a few thousand dollars of MRI costs per person per year would be nothing compared to the loss of having one of them die of cancer, get sick and quit, or whatever.
If I recall correctly, the program was discontinued after they determined that it led to a bunch of medical overtreatment. Everybody's a little different, plenty of odd things may show up on an MRI, but the vast majority of unusual findings were benign.
If I can find the specific case I'm thinking of, I'll edit this comment to add it in. Or maybe someone out there recognizes this and can refresh my memory? I think the program was in the '00s, not the current decade.
I don't know of this specific story, but one thing that House MD tried to drive home at some point, and which my acquaintances in med school confirmed, is that you don't want to do full-body scans without a very good reason, because a full-body scan on a typical adult will always find something. That something will most likely be harmless, but for various reasons, once you know it's there, you'll end up trying to do something about it, and the combined impact of stress and treatment on health may be much worse than if that thing had just been left unseen. This thinking, I've heard, applies to many other kinds of speculative imaging or blood tests. For some reason it does not apply to ultrasound: I hear many doctors are in favour of patients getting ultrasound twice a year or more often, but I don't know what the rationale is.
I worked at a startup that dealt with these scanners (sending patient info to them, so nothing very sexy) that had a lot of input from radiologists. They had the same belief: there's always something wrong in the body, so if there's not a complaint from the patient about a particular issue, they will either gloss over it when looking at the charts or flag it to look at some more.
One of the issues here is that a full body scan is typically done at a low resolution to save on time and disk space. These scans aren't good enough to make a diagnosis from and leave the radiologist in the position of wondering whether a couple of pixels could be cancer or something normal. When your practice is on the line for making a call, you will tend to err on the side of caution and recommend everything be looked at more closely unless you can absolutely rule out a problem.
I was a little shocked when finding this out. I always thought it was like getting your blood tested and it coming back with a couple of the dozens of items that they look at being high and that could be a concern. It is much more inexact than that.
That doesn't solve stress for patient, who now keeps wondering whether or not that Thing is a ticking time bomb. It also doesn't solve the legal risk created by the off chance the Thing actually does develop into a health problem later on.
Faster would be the key word. These things are expensive and in near constant use. There can be extensive waiting lists for non-critical scans. And as someone who's spent a couple of hours in an MRI machine, it's really just the duration that causes discomfort.
Not sure whether there is much room to speed things up though. You need a lot of slices, and selecting them takes time I guess.
Decent datasets are very hard to come by, and the political gymnastics required to create and release them is horrible.
I'm keen to create a dataset of paediatric physiological data during anaesthesia to help with event detection/preemption but my professor thinks getting permission to do so is all but impossible.
I empathize. Pediatric consenting presents challenges beyond what we see in adult patients. Generally we have seen (US academic med centers, in a hem-onc context) that shifting the focus from a baseline expectation of privacy to responsible, privacy-driven research under oversight makes many patients willing to share data for research. So your goal is worthy and you should keep pursuing it. The next-level ethical question becomes: if your dataset generates a successful, clinically validated model to detect adverse events, will you patent it and release it into the open domain, or commercialize it? If you commercialize it, will you share anything back with your research participants? What is the right thing to do? These are all grey areas that informatician ethicists need to talk about openly. My 2 cents!
It's kinda funny to see how relatively easy it is to strip people of their general privacy rights for surveillance, and how much pushback there is for medical data used for medical purposes.
Of course everything can be misused, though I would think that improved healthcare is a much better selling point than being physically hurt by some terrorist actor.
I'm just starting out in a medical career, but keen to pursue a career in Radiology.
I'm curious about your background with a computer science degree. Have you found opportunities to make use of your interests in relation to Radiology?
My knowledge with computing is just limited to messing around with Linux, but I'm keen to learn more (for fun as well as career development). Are there any pathways you would recommend as high-yield for combining with a career in Radiology? My primary motivation is just enjoying tech, but it would be nice to develop my skills in a direction that allows me to incorporate an element of computing into my future work (whether that be side projects, academic research, or just making me more productive).
I think you’re taking this viewpoint quite prematurely. It’s been about 3 years since we found architectures that surpass human level performance. The process in medicine is slow. FDA, billing codes etc. Give it 3 more years before drawing strong conclusions like this.
It might be a strong conclusion, but a reasonable and qualified one. The technical side of ML/AI does not change the main point, sales, one bit. Even in a perfect scenario with a perfect AI/ML product to sell, you still need a channel to customers, in this case doctors and hospitals. And from that perspective, considering the trend toward full-service solutions for expensive equipment, the manufacturers of scanners are in a much better position to serve the needs of their customers. Good news for ML in general, not so good for all the startups out there. But then I have the same doubt regarding the majority of logistics, supply chain and mobility startups right now.
I disagree that the level of performance will not impact the sales picture. The technical side matters a lot for sales. If an AI product is associated with a billing code, that will hugely change the sales dynamic. However this is a high bar to clear in terms of performance.
If the algorithms are worse than human level performance then I agree with the assessment. If they are above human level performance, I think the sales picture changes a lot. However this process will happen slowly since it will take time for regulatory and medical community to trust that a particular system truly and robustly outperforms humans.
If the price and size of an MRI machine could be brought down to that of a tanning bed, how might you see having one in every home improve health care?
This is a common misconception I see in my patients all the time. Tests aren't perfect and human bodies aren't perfect. Even if a test has 99% specificity for a condition, if less than 1% of people getting the test have the condition at the time of the test, there will be more false positives than true positives. False positives require further investigations which carry higher risks (incl. death) just like true positives. Likewise, tests may show something that is indeed there, a mass for example, but that would never cause a clinically significant impact (the patient would otherwise never know they had it and it wouldn't hurt their health). But that may still lead to further investigations, which may be more invasive (read: risk of death and morbidity) or at least cause anxiety in patients about what is going on with them.
This is a long way of saying, it would be bad to test everyone with an MRI machine, let alone repeatedly. (It would be great if MRI costs were cheaper however).
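The base-rate effect described above can be made concrete with Bayes' rule. The numbers here (1% prevalence, 90% sensitivity, 99% specificity) are illustrative assumptions, not figures from any particular test:

```python
# Positive predictive value: the fraction of positive test results that
# are true positives, given prevalence, sensitivity, and specificity.
def positive_predictive_value(prevalence, sensitivity, specificity):
    true_pos = prevalence * sensitivity            # sick and correctly flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged anyway
    return true_pos / (true_pos + false_pos)

# At 1% prevalence, even a 99%-specific test yields mostly noise:
ppv = positive_predictive_value(prevalence=0.01, sensitivity=0.90, specificity=0.99)
print(f"PPV at 1% prevalence: {ppv:.1%}")  # ~47.6%: over half of positives are false
```

The same function shows why the test behaves far better in a high-prevalence population, which is the argument against screening everyone indiscriminately.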
Well, assuming you were to take an MRI scan once a month, couldn't you start to read the data after a year and use AI to infer the velocity/acceleration of any change in the person?
There are practical issues there. Liquid helium is relatively expensive and scarce, magnetic fields from the machine by definition need to be high to get good results, and to throw more fun into the mix you need to stop external radio interference from getting at the machine. Could you get the cost down to "specialised doctor's office" levels? Maaaaybe. Does AI have a role to play in that? Not one bit.
Liquid nitrogen is one alternative. Some further work is being done on room-temperature superconductors, which would obviate the need for any kind of cooling.
This is for MRI reconstruction; it has no other labels or annotations, as far as I know, only the raw data in k-space and the reconstruction. It's also only for knees.
The data is sampled in the Fourier domain. A complete scan (frequency up to a desired Nyquist limit) takes a long time for the MRI machine to acquire all of these samples.
If you can get by with sampling only a subset of this space and approximate/reconstruct the rest with a mathematical model, yet yield reasonable accuracy (wrt diagnosis or other criterion) relative to full sampling, the MRI session will be a lot faster because you don't need to acquire all the data you did before.
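A minimal NumPy sketch of that idea: acquire only a random subset of k-space (Fourier) samples and reconstruct by zero-filling the rest. The 64x64 phantom and 50% sampling fraction are toy assumptions; a real pipeline would replace the naive zero-fill step with a sparsity-based or learned model:

```python
import numpy as np

rng = np.random.default_rng(0)
image = np.zeros((64, 64))
image[24:40, 24:40] = 1.0              # toy "anatomy": a bright square

kspace = np.fft.fft2(image)            # full k-space acquisition
mask = rng.random(kspace.shape) < 0.5  # keep ~50% of samples at random
undersampled = kspace * mask           # the samples we actually acquired

# Naive zero-filled reconstruction: attenuated and full of aliasing
# artifacts. This gap is exactly what a reconstruction model must close.
recon = np.fft.ifft2(undersampled).real
error = np.abs(recon - image).mean()
print(f"mean zero-fill reconstruction error: {error:.3f}")
```

Halving the acquired samples roughly halves scan time; the open question is how well the model-based fill-in preserves diagnostically relevant detail.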
I'm not clear which scenario you are alluding to. Is the expectation to figure out how to sparsely sample the same area, or how to quickly sample a larger area so that a detailed scan can be taken after that?
It is known that we can reconstruct MR images at full fidelity, with no loss of information, by randomly sampling "k-space" at something like 10% of the usual sampling rate. This leads to much faster acquisitions. I believe Siemens has a product based on this technology that is currently going to market: https://usa.healthcare.siemens.com/magnetic-resonance-imagin...
One issue, though, is that truly random sampling isn't great from a practical point of view. Sampling patterns are constrained by other equipment considerations. There is also the issue of noise.
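One common compromise between truly random and hardware-friendly sampling is a variable-density pattern that always keeps the center of k-space, where most image energy lives, and thins out toward the edges. The quadratic density function and 25% target rate below are assumptions for illustration, not any vendor's actual pattern:

```python
import numpy as np

def variable_density_mask(shape, rate=0.25, seed=0):
    """Random boolean mask whose keep-probability decays with distance
    from the center of k-space, rescaled to hit a target sampling rate."""
    rng = np.random.default_rng(seed)
    ky, kx = np.indices(shape)
    cy, cx = shape[0] / 2, shape[1] / 2
    dist = np.hypot(ky - cy, kx - cx) / np.hypot(cy, cx)  # 0 at center, ~1 at corners
    prob = (1 - dist) ** 2                                # denser near the center
    prob *= rate / prob.mean()                            # rescale to target rate
    return rng.random(shape) < np.clip(prob, 0, 1)

mask = variable_density_mask((128, 128))
print(f"sampled fraction: {mask.mean():.2f}")  # near the 25% target
```

In practice the pattern is further constrained by gradient hardware (samples are collected along continuous trajectories, not at arbitrary points), which is part of why pure random sampling isn't used directly.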
I think this is a dangerous research direction under-regulated by the FDA. In order to get this sort of thing approved you just have to prove it doesn’t affect the diagnosis of a small set of abnormalities. The power of these models is enormous. They could potentially recognize and “smooth” out only certain abnormalities and there is no real way to guarantee that they won’t do that without testing it on all abnormalities.
I just spent 20% of my time at RSNA arguing with people doing similar things and everyone seems to be happy to jump over the FDA’s existing bar for reconstruction algorithms. However previous reconstruction algorithms weren’t universal function approximators with the potential to exhibit abnormality-specific behavior.
We know very well that these models have the capacity to recognize certain abnormalities or learn to model the normal state of anatomy. There is also the danger that deep-learning-powered reconstruction will not work alongside a radiologist like other medical imaging AI applications such as nodule detection do. This means we won't find the problem with the FDA's low regulatory bar until patients start dying.
A few years ago, we at Harvard released a high-quality dataset of 1,500 people with MRI images and behavioral information. You can get the dataset [1] and/or read the Nature paper [2].