“AI promised to revolutionize radiology but so far it's failing” (columbia.edu)
405 points by macleginn on June 7, 2021 | 387 comments



There are many parallels with seismic interpretation here. Many companies keep promising to "revolutionize" interpretation and remove the need for the "tedious" work of a geologist/geophysicist. This is very appealing to management for a wide variety of reasons, so it gets a lot of funding.

What folks miss is that an interpreter _isn't_ just drawing lines / picking reflectors. That's less than 1% of the time spent, if you're doing it right.

Instead, the interpreter's role is to incorporate all of the information from _outside_ the image. E.g. "sure we see this here, but it can't be X because we see Y in this other area", or "there must be a fault in this unimaged area because we see a fold 10 km away".

By definition, you're in a data-poor environment. The features you're interested in are almost never what's clearly imaged -- instead, you're predicting what's in that unimaged, "mushy" area over there through fundamental laws of physics like conservation of mass and an understanding of the larger regional context. Those are deeply difficult to incorporate into machine learning in practice.

Put a different way, the role is not to come up with a reasonable realization from an image or detect feature X in an image. It's to outline the entire space of physically valid solutions and, most importantly, reject the non-physically valid solutions.
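
To make that last point concrete, here's a toy sketch of what "reject the non-physically-valid solutions" could look like: generate candidate structural interpretations and discard any that blow the regional shortening budget. Everything here (the candidate generator, the balance check, the numbers) is invented for illustration; it's nobody's actual workflow:

    import random

    def candidate_interpretation():
        """Draw one random structural interpretation (units km, values invented)."""
        return {"fold_amplitude": random.uniform(0.0, 2.0),
                "fault_slip": random.uniform(0.0, 1.0)}

    def is_physically_valid(interp, regional_shortening=0.8, tol=0.2):
        # Toy stand-in for conservation of mass: shortening taken up by the
        # fold plus the fault must roughly match the regional budget the
        # interpreter inferred from structures 10 km away.
        accounted = 0.5 * interp["fold_amplitude"] + interp["fault_slip"]
        return abs(accounted - regional_shortening) < tol

    candidates = [candidate_interpretation() for _ in range(10_000)]
    valid = [c for c in candidates if is_physically_valid(c)]
    print(f"{len(valid)} of {len(candidates)} candidates survive the physics check")

The hard part, of course, is that the real validity checks live in regional context and physics, not in the image, which is exactly what current ML pipelines struggle to encode.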


>> seismic interpretation here

Strong disagree here. Let's put aside the math and focus on the money.

I don't know much about seismic interpretation, but I know a lot about Radiology+CV/ML. For three years I was the full-time CTO and co-founder of a venture-backed Radiology+CV/ML startup.

From what I can see, there is a huge conflict of interest w/r/t Radiology (and presumably any medical field) in the US. Radiologists make a lot of money -- and given their jobs are not tied to high-CoL regions (as coders' jobs are), they make even more on a CoL-adjusted basis. Automating these jobs is the equivalent of killing the golden goose.

Further, Radiologists' standards of practice are driven partly by their board (The American Board of Radiology), and the supply of labor is also controlled by them (The American Board of Radiology) by way of limited residency spots to train new radiologists.

So Radiologists (or any medical specialist) can essentially control the supply of labor, and control the standards of best practice, essentially allowing continued high salaries by way of artificial scarcity. WHY ON EARTH WOULD THEY WANT THEIR WORK AUTOMATED AWAY?

My experience during my startup was lots of radiologists mildly interested in CV/ML/AI, interested in lots of discussions, interested in paid advisory roles, interested in paid CMO figurehead positions, but mostly dragging their feet and hindering real progress, presumably because of the threat it posed. Every action item was hindered by a variety of players in the ecosystem.

In fact, most of our R&D and testing was done overseas in a friendlier single-payer system. I don't see how the US's fee-for-service model for Radiology is ever compatible with real progress to drive down costs or drive up volume/value.

Not surprisingly, we made a decision to mostly move on. You can see Enlitic (a competitor) didn't do well either, despite the star-studded executive team. Another competitor (to be unnamed) appears to have shifted from models to just licensing data. Same for IBM/Merge.

Going back to seismic interpretation -- this can't be compared to Radiology from a follow-the-money perspective, because seismic interpretation isn't effectively a cartel.

Happy to speak offline if anyone is curious about specific experiences. DM me.


I hear this sort of argument a lot in different fields. Usually it's because the IT guy doesn't really understand the business they are trying to automate or where the true pinch points or time savings are.


Could you provide some examples of fields where practitioners control both supply and standards of practice, and where automation is also shunned, perpetuating high costs? Also, note that the largest source of bankruptcy in the US is medical costs: https://www.cnbc.com/2019/02/11/this-is-the-real-reason-most...

"They dont understand the business" is a great excuse for maintaining status quo. I'm an Engineer, a quant, and a computer scientist by training and I refuse to accept defeat w/o sound reason. I will if I'm given a good reason, but "go away you guys, you dont understand our business" is defeatist. If we all accepted such answers society would never progress. I'm sure horse carriages said the same thing when people tried to invent motor vehicles.


So first of all, you’re incorrect about medical costs being the number one reason for bankruptcies: https://www.washingtonpost.com/politics/2019/08/28/sanderss-...

I’ll give you a concrete example in the legal field. Big firms might have reasons to avoid labor-saving automation, because they bill by the hour. But a large fraction of legal work isn’t billed by the hour, it’s contingency work (where the firm gets a certain fraction of a recovery) or fixed fee work. If you’re getting paid 1/3 of the amount you recover (a typical contingency fee) you have enormous incentives to do as little work to get a good result as you can. But those firms don’t use a lot of legal technology either, because it’s just not very good and not very useful.

The bulk of legal practice is about dealing with case-specific facts and legal wrinkles. And machine learning tends not to be useful for that, at least in current forms.


So, machine learning does get used quite a bit in the legal industry, at least outside of small practice. But it tends to be much more successful when it's used as a force multiplier for humans rather than a replacement for humans.

For example, the idea of using document classification to reduce review costs has been around for a long time. But it took a long time to get any traction. Some of that was about familiarity, but a lot of it was about the original systems being designed to solve the wrong problem. The first products were designed to treat the job as a fairly straightforward binary classification problem. They generally accomplished that task very well. The problem was you had to have a serious case of techie tunnel vision to ever think that legal document classification was just a straightforward binary classification problem in the first place.
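
For the curious, that naive framing looks roughly like the five-line sketch below, with made-up data; this is a hypothetical illustration, not any vendor's actual pipeline. It "works", and that's the trap:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Made-up toy corpus: 1 = responsive to the document request, 0 = not.
    docs = [
        "re: merger timeline and diligence items",
        "updated draft of the supply agreement attached",
        "lunch on friday?",
        "fantasy football picks for this week",
    ]
    labels = [1, 1, 0, 0]

    model = make_pipeline(TfidfVectorizer(), LogisticRegression())
    model.fit(docs, labels)
    print(model.predict(["diligence checklist for the merger"]))  # -> [1]

Everything that actually makes large-scale review hard (privilege, shifting issue definitions, defensibility of the process) sits outside this framing, which is the point.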

Nowadays there are newer versions of the technology that were designed by people with a more intimate understanding of the full business context of large-scale litigation, and consequently are solving a radically reframed version of the problem. They are seeing much more traction.


The coordination problems in creating a system designed from the beginning to be human-in-the-loop are a real challenge.

There are a lot of great ML algorithms, even if you limit yourself to 10-20 year old ones, that aren't leveraged anywhere near how they could be, because very few people know how to build such a system by turning business problems into ML problems and training users to work effectively alongside the algorithm.

CRUD application development projects blow past deadlines and budgets frequently enough. ML projects have even greater risks.

Edit: I hope the people making the successful legal document management system you mentioned write about their experience.


FWIW, my experience has been that, if you're trying to build a system that works in tight coordination with humans, you're better off sticking to algorithms that are 40-80 years old. Save some energy for dealing with the part that's actually hard.
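
As a concrete illustration of that pattern: naive Bayes is roughly 60 years old, and the human-in-the-loop part can be as simple as an abstention band. All data and thresholds below are invented; this is a sketch of the pattern, not a real review tool:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    docs = [
        "re: merger timeline and diligence items",
        "updated draft of the supply agreement attached",
        "lunch on friday?",
        "fantasy football picks for this week",
    ]
    labels = [1, 1, 0, 0]  # 1 = responsive, 0 = not (made-up data)

    model = make_pipeline(CountVectorizer(), MultinomialNB())
    model.fit(docs, labels)

    def triage(texts, low=0.2, high=0.8):
        # Auto-label only the confident tails; route the middle to humans.
        decisions = []
        for text, p in zip(texts, model.predict_proba(texts)[:, 1]):
            if p >= high:
                decisions.append((text, "auto: responsive"))
            elif p <= low:
                decisions.append((text, "auto: not responsive"))
            else:
                decisions.append((text, "human review"))
        return decisions

    print(triage(["board minutes re merger timeline", "weekend plans?"]))

The hard part isn't the model; it's picking the thresholds, measuring reviewer agreement, and designing the workflow around the "human review" bucket.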


That WP article doesn’t support your claim. It’s about the number of bankruptcies, not the leading cause. Nonetheless it does cite a survey that found medical bills contributed to 60+% of bankruptcies, and that it doesn’t really make sense to talk about a single cause.


It's a stat that requires a lot of contextualization. To your point, you're absolutely correct that the number of bankruptcies is important, because over the last couple of decades, 1) bankruptcies in general have been falling, and 2) medical bankruptcies have also been falling in absolute terms; but because the denominator (all bankruptcies) has fallen faster than the numerator (medical bankruptcies), the medical share looks larger even though the underlying count has shrunk.

https://www.theatlantic.com/business/archive/2009/06/elizabe...

In other words, medical bankruptcies have fallen in absolute terms, but you wouldn't know that by just looking at the percentage of bankruptcies.
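
A made-up numerical example of that denominator effect, since it trips people up:

    # Hypothetical numbers only, to show the mechanism:
    total_then, medical_then = 1_500_000, 900_000   # earlier year
    total_now,  medical_now  =   750_000, 500_000   # later year

    print(medical_then / total_then)   # 0.60 -> "60% of bankruptcies are medical"
    print(medical_now / total_now)     # ~0.67 -> the share went UP...
    print(medical_now < medical_then)  # True -> ...while the count went DOWN

Same mechanism, invented magnitudes.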


Why not simplify the medical bankruptcy discussion?

Fact is Americans have high personal cost and risk exposure relative to nearly all of the rest of the world.

Second, our system has making money as the priority, again in contrast to much of the world.

Finally, most of the world recognizes the inherent conflict of interest between profit and sick/hurt people, and both regulates that conflict to marginalize it and makes it so people have options that make sense.

My take, having been chewed up by our toxic healthcare system twice now (having a family does matter, lol), is that the temporary dampening of cost and risk escalation the ACA brought us is fading now, that the issues are exacerbated by the pandemic (demand for care crashing into variable supply), and that things have shifted somewhat as large numbers of people fall into subsidy/Medicaid-type programs due to job loss.

The honeymoon period is long over now, and the drive to "make the number" is going to be front and center and escalating from here.

TL;DR: We are not improving on this front at all. We need to.

I could go on at length about high student debt and its impact on these discussions too.

The radiology control over labor, preserving income for its members, is totally real, and from their point of view, necessary. They ask the legitimate question in the US: how can I afford to practice?

Most of the world does not put their medical people in positions to ask that question, with some exceptions, those being far rarer and more easily discussed than most of the topic is here.


Full disclosure: I work in healthcare pricing, so I have some first-hand insight into all of this.

> Fact is Americans have high personal cost and risk exposure relative to nearly all of the rest of the world.

This is only true for some Americans, and increasingly very few. I actually found this tweet by a health policy expert to perfectly capture the status quo: https://twitter.com/CPopeHC/status/1234510323425652737

"American healthcare in short: ~60% (in good employer plans, generous state Medicaid, or M.Adv/Medigap) have the best healthcare in the world. ~30% have insurance with gaps/risk of big bills. ~10% uninsured must rely on uncompensated care, go without treatment, or risk bankruptcy

The strength of M4A proposals is that they begin with an understanding that the 40% exist and need things fixed. Their weakness is that they pretend that the 60% don't, and threaten to take away what they have."

The fact of the matter is that the majority of Americans have excellent, world class health coverage. The problem is that there exists a small percentage of Americans that are totally screwed, and this is a higher percentage than most other comparable countries. There are a couple reasons why, which brings me to...

> Second, our system has making money as the priority, again in contrast to much of the world.

First of all, this is false insofar as not all health insurance in America is for-profit. Blue Cross Blue Shield, for example, are predominately 501 non-profits (with a few notable exceptions).

Second of all, while you're right that much of the world has public insurance companies that don't seek to "make money", there are a number of countries with world class healthcare that do have profit seeking insurance, many of them with purely private profit driven insurance companies: including Switzerland and the Netherlands. Some have a hybrid of public/private, including Germany (public/private mix), Singapore (public/private mix), etc. In fact, while many countries have a public insurance system, it is extraordinarily rare for countries to outright ban private insurance options.

Third of all, in America, health insurance is one of the most regulated industries in the country. After ACA was passed, there's a strict cap on profit margins that health insurers can enjoy. It's not too dissimilar from how private health insurance is regulated in Switzerland and the Netherlands, both of which have some of the best healthcare on the planet.

> Finally, most of the world recognizes the inherent conflict of interest between for profit and sick/hurt people and both regulate that conflict to marginalize it, and make it so people have options that make sense.

Again, as I mentioned above, this is not only not true, it's debatable if such an "inherent" conflict of interest even exists. By this logic, there should be an inherent conflict of interest between for-profit food providers and "hungry/starving" people. The profit motive alone can't explain America's health outcomes, because there exist countries with fantastic healthcare systems (Switzerland, the Netherlands) which are driven purely by private health insurance.

America actually has a pretty good apples-to-apples experiment of "profit seeking" vs "not profit seeking" insurance, ironically in Medicare Advantage. When you turn 65, you have the option to enroll either in "Original Medicare", which is what we usually think of when we talk about "single payer healthcare in America", or you can enroll in Medicare Advantage (aka Medicare "Part C"), where the premiums that would go to the CMS instead go to private insurers like Humana, United, Oscar Health, Aetna, Clover, etc. These plans replace Original Medicare, also cover Part D prescription drug benefits, and often include supplemental benefits that Original Medicare doesn't already cover. There are some interesting findings so far:

- 39% of Medicare beneficiaries are on private Medicare Advantage plans instead of the public "Original Medicare". Because everyone is entitled to "Original Medicare", this is purely voluntary. This number has been growing so rapidly that we expect, by 2025, more seniors to be on a private plan than the public one. There's also great variance by state. In Florida, Pennsylvania, Wisconsin, Michigan, Minnesota, Oregon, Alabama, Hawaii, and Connecticut — nearly 50% of beneficiaries are on Medicare Advantage. By 2022, we expect more seniors in those states to be on a private plan than a public one. https://www.kff.org/medicare/issue-brief/a-dozen-facts-about...

- For most beneficiaries, Medicare Advantage costs about 39% less than Original Medicare. https://www.kff.org/medicare/issue-brief/a-dozen-facts-about...

- Medicare Advantage plans are, on average, of higher quality than the public Original Medicare. https://healthpayerintelligence.com/news/medicare-advantage-...

- In Urban areas, Medicare Advantage costs less per capita to administer than Medicare — and that's not including the extra Medicare Part D insurance that you would have to buy if you're on the Original Medicare plan. https://www.commonwealthfund.org/publications/issue-briefs/2...

So the reality is really more complicated than you're making it out to be.

From where I sit, the one thing that sets apart America from the rest of the world is not that health insurance can be profit driven (so do the Swiss and the Dutch, for example), it's that health insurance is coupled with employment. There's really no other peer nation for which this is the case, and a lot of the economics of health insurance look the way that they do because big employers buy most of the health insurance in today's market, and that has resulted in market distortions that hurt those that are unemployed. What we're seeing in healthcare costs is analogous to what you might see happen to airline ticket costs if we all got our air tickets through our employers: the vast majority of us would fly business class, while the unemployed would be simply unable to pay for business class fares out of pocket. Employers (especially medium-to-large businesses) have a much higher purchasing power (and hence, willingness to pay) than individuals.


> The fact of the matter is that the majority of Americans have excellent, world class health coverage. The problem is that there exists a small percentage of Americans that are totally screwed, and this is a higher percentage than most other comparable countries. There are a couple reasons why, which brings me to...

Really? That is news to me, as a dual US|EU (Croatian) citizen, who is culturally American--but currently living in Croatia.

Even Croatia has a higher life expectancy than the United States. Yep, even those "eastern European countries" (that are within the European Union) that Americans refer to with derision, often have higher life expectancy than the United States.

See: https://www.reddit.com/r/croatia/comments/nuiyk1/hrvatska_sp...

Also, just in case you want to blame this on "lifestyle factors" (which means that this is a public health matter, which the United States has severely underfunded--locally, state, and nationally for more than a few decades now), the third leading cause of death is believed to be preventable medical errors. (The source I provide has been verified by several follow-up studies.)

See: https://www.bmj.com/content/353/bmj.i2139/rapid-responses

Also, you don't know what you are talking about here. I have studied healthcare systems worldwide for hundreds of hours.

Ironically, the IHME group that does the coronavirus projections is world-renowned for producing this data. All of this data is open-access.

For starters (easy reading) this article is relevant: http://www.healthdata.org/news-release/how-healthy-will-we-b...

But, seriously, we have far from the best healthcare system in the world. That is not even remotely true. There are several countries where a woman can give birth and is less likely to die, compared to the US.

I wish you were joking.


> Really? That is news to me, as a dual US|EU (Croatian) citizen, who is culturally American--but currently living in Croatia.

Yes, and just like that health policy analyst, I can attest to it. I've read more than enough plan documents, and work with health actuaries every day.

> Also, just in case you want to blame this on "lifestyle factors" (which means that this is a public health matter, which the United States has severely underfunded--locally, state, and nationally for more than a few decades now), the third leading cause of death is believed to be preventable medical errors. (The source I provide has been verified by several follow-up studies.)

Actually, there's a fantastic analysis that addresses this point head on, even analyzing the IHME data: https://randomcriticalanalysis.com/2017/05/16/the-explanator...

The vast majority of the variance in average life expectancy is attributable to lifestyle factors. As long as you stay away from drugs, don't participate in a gang, and take the bus (or any public transit) instead of driving, you're on roughly equal footing with the rest of the OECD.

"The data suggests motor vehicle accidents, homicides, and drug overdose deaths can explain a large fraction of the US life expectancy gap as compared to several highly developed countries. Obviously this does not account for obesity, diabetes, (historical) smoking, and related lifestyle differences that are likely to have a pronounced negative affects on US life expectancy as compared to most other developed countries and which statistically explains the vast majority of the very large spatial differences in the United States."

> Also, you don't know what you are talking about here. I have studied healthcare systems worldwide for hundreds of hours.

Um, so have I. I literally work on health pricing systems, and have studied health policy. "For hundreds of hours" even, for whatever that's worth (not a lot, I assure you).

> But, seriously, we have far from the best healthcare system in the world. That is not even remotely true. There are several countries where a woman can give birth and is less likely to die, compared to the US.

I don't think I ever said that we have the best healthcare in the world. I agree that US healthcare is broken. All I'm pointing out to you is that the "profit motive" has nothing to do with that, as evidenced by counterfactuals in Switzerland, Singapore, and the Netherlands; the former two of which actually have the best healthcare in the world.

https://www.rd.com/article/switzerland-worlds-best-healthcar...

https://www.forbes.com/sites/theapothecary/2011/04/29/why-sw...

https://www.bloomberg.com/graphics/infographics/most-efficie...

In my opinion, the profit motive has nothing to do with America's healthcare ills (no pun intended). It's the fact that it's tied to employment and purchased by employers. No other country is set up that way.


This is the most HN argument I've ever seen.

Personally, I defer to you, the person who actually understands the industry from the inside, in terms of having an opinion based in reality.

So often, these hand-wavy solutions which boil down to "we must remove the bad people preventing our utopia" (ie scapegoating) are masking wicked problems (https://en.m.wikipedia.org/wiki/Wicked_problem) that cross multiple thresholds of responsibility, incentive and jurisdiction.

Declaring hard problems to be caused intentionally by evil people has led to some of the most despicable acts in history.


> I've read more than enough plan documents, and work with health actuaries every day.

So, part of your job is to analyze health benefits plans (health insurance plans) that Americans get. You also work daily with actuaries in the life sector, who assign dollar values to people's lives.

Yeah, like that really makes you a good source when it comes to the well-being and long-term outcomes of a country.

> In my opinion, the profit motive has nothing to do with America's healthcare ills (no pun intended). It's the fact that it's tied to employment and purchased by employers. No other country is set up that way.

Congratulations on coming up with that point. That is precisely why I left the US, as somebody with a rare disease that requires an orphan drug to survive.

I knew better than to stay in the US, in order to survive. In fact, there may be a major ACA Supreme Court decision coming soon. If not, it will be released in the next session. I refresh SCOTUSblog every morning, worrying for my fellow Americans, who could very well die from the outcome of the decision. Regardless, I never plan on living in the US ever again. It will never be "home" for me anymore.


> So, part of your job is to analyze health benefits plans (health insurance plans) that Americans get. You also work daily with actuaries in the life sector, who assign dollar values to people's lives.

Yes, exactly like health insurance actuaries at publicly run health insurance providers. We don't sit around trying to figure out how to make people die, like cartoon villains. We try to figure out how to make healthcare sustainable.

If you read what I had written, it's clear that not only do the private sector insurance providers perform comparably with public sector ones like Original Medicare, they can even out-perform them. So we can't point to "privateness" as the root cause of our problems; we have to consider other confounding variables.

> Congratulations on coming up with that point. That is precisely why I left the US, as somebody with a rare disease that requires an orphan drug to survive.

Sorry to hear that, truly. In my opinion, the single most effective thing we can do to help folks like you is to decouple health insurance from employment, and I'm sticking around to try to make that happen. Hopefully you'll come back, and stay healthy.


Thank you. Like another poster suggested, I will try to be more considerate next time.

I am off disability, but I can theoretically keep Medicare for life. I was always on traditional Medicare, and my orphan drug (a blood product) was covered under Part D for my condition. I was also insured as a "disabled dependent" via employer-based insurance, through my deceased father's retiree benefit--so it was secondary insurance--which functioned like a supplemental plan.

I have 2 rare immune-mediated neurological diseases affecting my peripheral nervous system (one of them being very rare--which means an HMO from a Medicare Advantage plan is a huge problem if I want to stay alive long term in the US--and generally, you cannot go back to traditional Medicare), plus type 1 diabetes. The very rare neurological disease is believed to have caused the autoimmunity leading to my diabetes diagnosis at age 5.

Anyways, I can tell you that the way things were set up in the US (prior authorizations, prescription formulary restrictions, quantity limits, networks, etc.) was certainly harming my health. I studied electrical engineering for undergraduate, and it is not like I cannot handle bureaucratic and logistical nightmares.

But, there is a baseline level of stress and anxiety that is present in the US, and you do not have the realistic expectation that you will be cared for there. Not only that, it is a part-time job just to deal with insurance matters. This feeling is basically non-existent within most of the EU, including in places like Croatia. Croatians probably do have the best lifestyle in all of Europe, too.

It's just not worth it.


> generally, you cannot go back to traditional Medicare

You're always able to go back to traditional Medicare, you just have to wait until the next open enrollment. In fact, after the first trial run with MedAdv, you can switch back before open enrollment if you want. Traditional Medicare is always an option.

Great to hear that you're staying healthy otherwise.


Thank you! :-)

True, but the issue is that medical underwriting is allowed on Medigap (Part B supplemental plans). So, once you are on a Medicare Advantage plan, you basically cannot effectively go back, due to being unable to obtain a Medigap plan (covering the 20% that Part B does not cover) because of pre-existing condition(s). The financial consequences of not having a Medigap plan are quite severe for somebody who has a rare disease, if you know what I mean.

As you know, it is a loophole in the ACA. Because I was declared disabled before age 22 ("disabled adult child"), there is a way for me to get Medicaid, for life, effectively, through the Ticket-to-Work program, via the PASS (Plan to Achieve Self-Support). Even if I "make too much money to stay on Medicaid" at some point, the Pickle Amendment allows me to stay on it for life, due to the age at which I was declared disabled.

But, there are issues with that too, since it is a form of "welfare". You can end up having to pay back the US government hardcore overall. You also get punished for being on Medicaid. For example, some states only allow you to have 4 medications covered by Medicaid. After the 4th active prescription, a type of "prior authorization" is sent to each and every doctor -- for some government bureaucrat to make an arbitrary decision about whether this medication is "worthy of coverage".


It feels like this is the point in the thread where you just got frustrated and started casting aspersions instead of making arguments. That happens to me a bunch too, and the strategy I've developed for it is to look at how many question marks I've managed to put in my comments, and try to fit more of them in. The person you're arguing with has some apparent domain knowledge; try extracting it?


This is fine. I will try being more thoughtful next time.


>All I'm pointing out to you is that the "profit motive" has nothing to do with that

You have not met that burden. Not even close.

The best case is a mixed environment, with a for profit portion that can address clear for profit cases well.

And those exist!

But, doing that sans a robust system that actually just delivers health care to sick people is crazy bad policy.

There is another argument in your favor out there, and that is consistent, transparent pricing. Or "Equal Pricing"

In Singapore, there are no real surprises, and people have options that don't cause them to trade homes, for example, to get sick people they care about healthy again.

Here? Nothing but surprises!

And frankly, that being the case actually does support the difficult argument:

If making money is the priority, then making sick people healthy isn't.

On the other hand, if making sick people healthy is the top priority, and then we talk about money?

Very different scenarios.

The US is firmly entrenched in the former. Examples of the latter exist in the world and perform well.

The profit motive matters. How it's framed, what priority it has, and more all contribute to the overall effectiveness, and again to that cost and risk exposure.


> You have not met that burden. Not even close.

What? I absolutely have. I’m not sure what you’re on about. Switzerland is as close as it gets to a profit-driven purely private healthcare system. Indeed, the US was modeled off of it. The only difference between the two is that the former is based on a robust individual market while the latter is driven by group benefits.

> But doing that sans a robust system that actually just delivers health care to sick people is just bad policy

I don’t think anybody is suggesting not delivering health care to sick people. The question is whether the private sector can provide an actuarial product.

> In Singapore, there are no real surprises

Agree, Singapore’s healthcare system is excellent, and price transparency is very important. The reason the US system is devoid of price transparency is because the majority of Americans simply don’t care about prices, since they have little skin in the game. This is true for old people on Original Medicare, poor people on Medicaid, as well as employed people on generous group plans. If none of those describe you, then you’re unfortunately SOL. THAT’S the problem. Not the profit motive.

> If making money is the priority, then making sick people healthy isn’t

Again, it’s a cute pithy quote, but that’s not how the real world works. If making money is a priority, then feeding hungry people isn’t. If making money is a priority, then providing cheap clothing and shelter isn’t. It’s impossible to understand how the world works through such a simplistic lens.

As I showed you above, the Medicare A/B test is illuminating. Medicare Advantage payers are primarily in the business of making money, and yet their members are on average healthier, have higher quality plans, and at lower cost.


Feeding people isn't a priority!

World hunger, anyone? It's a real thing.

We have it because making money is a higher priority than feeding people is.

Cheers, I have other things to do, and did enjoy this discussion!


> Feeding people isn’t a priority!

Feeding people is 100% the priority, which is why the vast majority of people in the first world have access to food primarily produced by a predominately private food industry.

The US is in the top 5 in the world for food security: https://www.worldatlas.com/articles/the-world-s-best-countri...

To the extent that there exists poor people unable to afford food, that’s a problem solved by welfare and subsidies, not by nationalizing food supply chains.

Apply that to healthcare.


> Feeding people is 100% the priority, which is why the vast majority of people in the first world have access to food primarily produced by a predominately private food industry.

This isn't true when feeding people is the _100% priority_, as in a big crisis like a major war. The market flies out of the window and doesn't return until after the war is over, since it just can't reliably produce goods in a way that is appropriate for a crisis.

When the crisis is over however, the game of profit based production is restarted again.


No disagreements re: crisis / war. That's typically when states of emergency are conventionally recommended.


"Unable to afford food" = making money is a higher priority.

Unable to afford the doctor = making money is a higher priority.

Many food banks will give people who want food, food. Few questions, sometimes no questions asked.

It is rare to have similar access to health care.

Where did I mention nationalize?

I did not do that, and meant to not do that.


> Many food banks will give people who want food, food. Few questions, sometimes no questions asked.

> It is rare to have similar access to health care.

No it’s not, you just described Medicaid. The Venn diagram of people that rely on food banks for food and people on Medicaid is basically a circle.

Also, it’s debatable if food banks are a superior way of getting food to poor people, vs expanding food stamps and/or a UBI which can then be used at grocery stores.

> Where did I mention nationalize?

You seem to be attacking the profit motive as a mechanism by which to provision goods and services for which there is highly inelastic demand. My point is that if you think that the way we provision food is workable, then the concept of private healthcare with subsidies (basically the Swiss model and MAdv) should be as well. In both of those models, profit and “making money” is still key.


Medicaid asks a LOT of questions, the food bank does not.

You are saying a for-profit system can work. You are right about pricing in that scenario, too. Fair enough. Given your perspective, your position on this is understandable. It is not agreeable, in my view.

Our way of provisioning food could be improved significantly. Food differs from health care significantly. Secondly, that we have food banks at all is deplorable and embarrassing.

That is all I care to entertain on that largely useless comparison.

I am attacking the profit motive in health care specifically because it carries an inherent conflict of interest and is poorly aligned with markets.

In any case, I am going to stop here for real, and just make it clear I do oppose health care as a market and do so because people do not have control over their need to participate and we all know what happens in a must buy, cannot walk away from the deal scenario: lowest value for the most possible dollars.

That comes up ALL the time and is why most of the world has removed that conflict of interest from the task of fixing sick or hurt people.


> Medicaid asks a LOT of questions, the food bank does not.

Not sure what you're talking about. Medicaid is extraordinarily generous. The only questions that are asked are in determining whether one qualifies to enroll in Medicaid, but once you're in, you pay almost nothing. Even prescription drugs are capped at $75, no matter how rare or fancy.

> Food differs from health care significantly

I disagree. Both are examples of critical goods/services without which humans die. They both factor into long-term health and quality of life.

> Secondly, that we have food banks at all is deplorable and embarrassing.

I completely agree. While I've been extolling the virtues of pricing systems to provision goods and services, I've also made it clear that welfare is extremely important. The US currently extends welfare to low income people, through SNAP/EBT food stamps, Section 8 Housing vouchers, and EITC. In my opinion, there's still room to further expand the generosity of these systems, and even consolidate them into a basic income.

It's worth disentangling welfare from public vs private, because it's easy to conflate the two. It's entirely possible to rely on private markets to bring down prices and increase availability, while using publicly funded welfare to enable access for those less fortunate.


I agree with you on decoupling the insurance from employment. Great move! This would immediately clarify what cost and risk exposure means to people too. Bonus!

I disagree on cost and risk exposure. Ask around, both employer and employee, about cost growth this last decade, for example... it's not getting lower; there are often double-digit increases.

Regulation?

Well, the cap that limits margin dollars is easily dealt with by owning more of the chain of care. Opponents of this tepid method of cost control predicted it, and it has happened. They can bill themselves, and it works the way tax shelters and film-studio accounting do, to show compliant profit numbers.
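
A back-of-the-envelope sketch of that mechanism, with entirely invented numbers; this is just the shape of the accounting, not any insurer's books:

    premium        = 1_000                          # collected per member
    claims_paid    = 900                            # paid to an affiliated provider,
                                                    # so 90% "spent on care": compliant
    insurer_margin = premium - claims_paid          # 100: the visible, capped margin

    provider_cost   = 700                           # what the care actually cost
    provider_margin = claims_paid - provider_cost   # 200: kept inside the same owner

    print(insurer_margin + provider_margin)         # 300 total, vs 100 at arm's length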

From where I sit, being one of the really screwed set, I found some of what you put here clarifying, but did not find myself sold on the idea we are improving at all.

In fact, one way to differentiate the US profit motive from the rest of the developed world, is our continuing move toward market based care despite a lot of information


> did not find myself sold on the idea we are improving at all.

I don't think we're improving either; as long as we have an employer mandate and privileged tax treatment for group health insurance plans, I don't see this changing any time soon. All I'm saying is that the market/private nature of it has little to do with it. It's the "employer sponsored" nature of it that has everything to do with it.

> In fact, one way to differentiate the US profit motive from the rest of the developed world, is our continuing move toward market based care despite a lot of information

That the US is somehow unique in its pursuit of market-based healthcare is not true at all; see Switzerland and the Netherlands. Both have purely private health insurance systems, and there's no sign of that changing any time soon, and both enjoy excellent health outcomes with broad approval of their respective healthcare systems. They're almost exactly as regulated as the US health insurance market, except with one glaring difference: the private health insurance is predominately purchased on the individual market. (https://www.forbes.com/sites/theapothecary/2011/04/29/why-sw...)

Not only that, Singapore has one of the most market-driven healthcare systems on the planet, and enjoys the status of being the most efficient healthcare system with some of the best outcomes:

https://www.bloomberg.com/graphics/infographics/most-efficie...


>I'm saying is that the market/private nature of it has little to do with it.

I appreciate that. And still do not agree.

Either the goal is fixing sick people, delivering care, or it is not.

If the US were to improve DRAMATICALLY on labor policy, among many areas needing improvement, perhaps we could show for profit being able to work.

Fair enough.


> Either the goal is fixing sick people, delivering care, or it is not.

That's an odd dichotomy. You could apply this to literally any good or service. With food, either your goal is nourishing hungry people, or it is not; and yet the private sector provides food just fine.

At the end of the day, price signals and market forces ensure that producers meet the needs of consumers. There are certainly instances of market failures, especially in the case of externalities. But with healthcare, there's really no evidence that markets and the private sector cannot deliver world class healthcare, and in fact we see evidence to the contrary, both domestically (in Medicare Advantage) as well as globally (in Switzerland and the Netherlands).


YES you can!

Not only is it interesting, it's reality.

You are making a market argument fundamentally.

With food, for example, people have lots of options, and while the need for food is absolute, wants for food can be ignored and/or vary widely. Food wants are a great market. People can participate or not. They can prepare their own food or not.

Food needs are not as good of a market, though again, people have options, and are rarely in a must buy scenario.

That difference matters.

Notably, there is a cap on how big of a risk there is in the whole thing, and it's not all that big of a risk.

With food, one can end up in a weak scenario where one gets the least value for the most dollars. But, it's not typically life changing, and there are a lot of options for most people in most cases.

Contrast that with health care.

Let's talk about wants first, just like food. Cosmetics are a great example. People can choose not to do it. They may have options, depending on what the scenario is. This makes for a reasonable market. And, depending, people can make their own. I did that for a prosthetic a while back. Saved thousands of dollars. But that is the exception more than the rule. Still, we could empower people to some degree, like we do with food.

Someone having a heart attack will need treatment, or let's say they are out of the market. It's not like they can shop around either. I could go through and compare / contrast with food, but here's the main point:

Unlike food, that doctor visit doesn't really have a cap on risk. 5 figure? 6 figure? 7 figure? All can and does happen.

And things people require? When people have to participate in the market, they pay the most and get the least value for the dollar. See insulin prices in the US?

Now, for a nation that has its priorities in order, and whose priorities are not making money first and foremost, that price is a small fraction of what gets charged here in the US.

This is a shitty market. People are forced to buy, their choice is often limited, risks are crazy variant, costs not transparent, and on and on it goes.

At a minimum, most nations break these out making sure people who find themselves sick or hurt have baseline options that are not life changing, and market type options for those health care related things that make better sense.

Boil all that down, and what do we get?

Making money IS NOT THE TOP PRIORITY. Fixing sick people is.

When we examine all this in detail, we will find those shining examples of for profit health care actually working out are very well regulated, and that means they are forced to fix sick people first and foremost.

If they were not, then people would be tipping over for lack of ability to participate in the market, which isn't really even a market in a need scenario. It can be a market in the want scenario.

All of which is precisely why I frame it in those terms.

Which is it then?

Currently the US has chosen to make money first and foremost and look at the carnage!


> With food, for example, people have lots of options, and while the need for food is absolute, wants for food can be ignored and/or vary widely.

Wants for food absolutely cannot be ignored. Without food, you starve and die. Along with healthcare, food is the quintessential example of a good/service with price inelastic demand.

Now, you’re absolutely correct that in most food markets, there is a variety of options; that’s exactly what’s needed for a market to function. Unfortunately you haven’t demonstrated that private health insurance markets are inherently devoid of such options by nature of their being private. Medicare Advantage is an extremely healthy market, as is the individual market in Switzerland.

> Unlike food, that doctor visit doesn't really have a cap on risk. 5 figure? 6 figure? 7 figure? All can and does happen.

You’re just talking about catastrophic risk here, and as I’ve already mentioned, there’s nothing inherent to the private insurance model that makes this unworkable. This isn’t based on guesses and conjecture; it’s based on empirical outcomes: see Medicare Advantage, the Netherlands, and Switzerland. Also keep in mind that nobody here is arguing against subsidies for poor or unhealthy people; we’re just talking about whether that money is used to purchase plans created by actuaries who work for the government or for private sector organizations, and the merits of each.

> And things people require? When people have to participate in the market, they pay the most and get the least value for the dollar.

That’s true in the US. That’s not true in Switzerland. Both have private healthcare markets. Therefore, it’s impossible to conclude just based on US outcomes that private-ness is the root cause. It’s clearly something else.

> See insulin prices in the US?

The unfortunate reality here is that government-enforced patents allow insulin prices to remain bloated. Again the root cause isn’t the profit motive, that’s just a side effect.

> This is a shitty market.

Absolutely no disagreements there. The US healthcare market is indeed shitty (outside of Medicare Advantage at least), unlike Switzerland.

> People are forced to buy, their choice is often limited, risks are crazy variant, costs not transparent, and on and on it goes.

Agreed. Again, nothing to do with private-ness.

> When we examine all this in detail, we will find those shining examples of for profit health care actually working out are very well regulated, and that means they are forced to fix sick people first and foremost.

This is also true of the US. Health insurance is by far the most regulated industry in the country. Profit margins are capped by ACA, plans are regulated by ERISA, health insurance has minimum standards thanks to the ACA, insurers cannot deny access based on pre-existing conditions, and employers are mandated to provide health insurance — all thanks to the ACA. From a regulatory standpoint, the US is virtually identical to Switzerland, except for one notable difference: employer sponsored care.


> Again the root cause isn’t the profit motive, that’s just a side effect.

That's just utterly naive or politically biased. The profit motive is what causes those regulations in the first place! It has always been like that, especially in the states.

Don't try and create some sort of fairy-tale place where the profit motive won't try to rig regulations in its favor; it always will, because this is what the incentive to make more and more profit creates.


> The profit motive is what causes those regulations in the first place! It has always been like that, especially in the states.

Actually, that's not true. The regulations were created as a result of a series of well-intentioned but catastrophic policy decisions, starting in World War 2. FDR instituted a cap on private sector wages, which resulted in employers using non-wage benefits to compete in the labor market. After a decade or so, it became an expected benefit (sort of like company cars, at the time). Eventually, around the '70s, the Federal government decided it was time to incentivize remaining employers to provide health insurance by making premiums tax deductible; again, a well-intentioned attempt to expand access. It finally all culminated in the ACA, which _mandated_ that employers provide health insurance. None of this can be attributed to lobbying; almost all of it is attributable to well-intentioned regulations gone awry, not the profit motive.

In fact, the best shot we have right now of decoupling health insurance from employment is the ICHRA (https://ichra.com/), which allows employers to fulfill their healthcare obligations by providing tax-advantaged cash to employees that can be used to cover health insurance premiums on the individual market; and I imagine that came about due to lobbying.

Again, the easiest way to falsify a causal line between the profit motive and the current outcome is by finding instances of markets wherein there is a profit motive, but with differing outcomes. That's exactly what we see in Medicare Advantage, Switzerland, and the Netherlands.

> Don't try and create some sort of fairy-tale place where the profit motive won't try to rig regulations in its favor; it always will, because this is what the incentive to make more and more profit creates.

No disagreements that industries will try to rig regulations in their favor. This is true everywhere in the world, and yet we see wildly differing outcomes. Notably, I've yet to see an argument explaining away the Medicare Part A/B vs Part C outcomes in the US.


This is great info but I would add that when you talk about the unemployed left out in the cold, you also need to consider the self-employed. Having insurance tied to discrete employer risk pools makes insurance on the private market very expensive for all of those who cannot get subsidies (ACA/Medicaid). It really discourages people from taking the risks to start new ventures. When I started my business I went years without medical coverage.


Yes, agreed. A big part of why that's the case is that the individual market is one huge adverse sample, in its current form. Because most healthy people in the US are employed, they tend to receive their health insurance through employer sponsored group plans, which by design pools risk only within that group.

What ends up happening is that anyone left over has to participate in risky markets with higher premiums in general, resulting in the mostly broken state of the US individual market.

In contrast, you have countries like Switzerland where pretty much all health insurance is purchased on the individual market, and risk is pooled across larger and more diverse populations.
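
A toy example of that adverse-selection math, with deliberately caricatured numbers (real pools are much messier):

    # 100 made-up lives: 90 cheap ("healthy, employed") and 10 expensive.
    annual_cost = [1_000] * 90 + [20_000] * 10

    one_big_pool = sum(annual_cost) / len(annual_cost)  # everyone pooled: 2,900
    group_pool   = sum(annual_cost[:90]) / 90           # employer group: 1,000
    leftover     = sum(annual_cost[90:]) / 10           # individual market: 20,000

    print(one_big_pool, group_pool, leftover)

Pull the cheap lives into their own pools and the leftover pool's break-even premium explodes, which is roughly the US individual market's problem.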


Do you have any insight into how a Medicare-eligible person choosing Medicare Advantage loses most of their funding for extended care facilities or SAR/AR days (in exchange for subpar vision/dental)? That 39% does not evaporate into savings.


Yes, Medicare traditionally never covered long term care. This was true for both Original Medicare, as well as for Medicare Advantage. Only Medicaid covered long term care, but as you may know that’s only for those with low income.

However, this is beginning to change. The Centers for Medicare and Medicaid Services has begun to allow private Medicare Part C insurers to expand into long term care. Notably, to this day the public Medicare still doesn't cover long term care, whereas M.Adv plans are beginning to.

If you're talking about SNFs, Medicare Advantage is basically at parity with OM. I've seen Advantage plans by big payers like Humana with SNF benefit periods of 100 days. The 39% savings figure is an average, but if you require a long-term SNF stay, you'll probably cost roughly the same to a private insurer as you do to Original Medicare. None of that stops private insurers from offering the benefit, since the actuarial math works out to the kind of savings that were mentioned above, across a whole benefit population.

> in exchange for subpar vision/dental

I’m not sure what data you’re looking at, but from where I sit, Medicare Advantage almost always includes vision/dental, whereas Original Medicare does not (which is why some seniors go for Medigap).


As a question, why haven't any of these techniques made waves outside the US? Other countries don't have the same monopoly/monopsony powers in the medical industries that are prevalent in the US.


The US is exactly the place where those techniques would make waves, because of what the US is paying for radiology; in countries where radiologists don't have the same monopoly/monopsony powers, it's not nearly as lucrative to replace them.

For example, I'm distantly involved in a project with non-US radiologists about ML support for automating radiology note dictation (which is a much simpler and much "politically cleaner" issue than actual radiology automation), and IMHO they and their organization would be happy to integrate some image analysis ML tools into their workflow to automate part of their work. However, the current methods really aren't ready, and the local market isn't sufficiently large to make the jump and build a startup to make them ready; that would have to wait for further improvements, most likely done by someone trying to get the US radiologists' money.


There's not really a way to disambiguate the two though - the fact that there are lots of medical technology startups and new drugs coming out of the US is because of the costs involved and how much can be harvested by being a little better. This creates new technologies that the US can't really protect against proliferation - so all of the money has to be harvested from the US market.

This isn't necessarily a bad thing - I for one happen to think it's great that our expensive medical system is financing all kinds of wonderful new technologies that benefit the world overall. However, the major problem here is that things that would be useful for other places simply don't have the market to support it, so most medical innovation exists in the context of the US medical system and its problems - some of which are widespread, some of which are not. I do wish there were some other testbed healthcare systems out there for companies to try to disrupt, but I don't think it is (by itself) a call for medical reform.

My preferred medical reform is to "legalize insurance markets" (ie: repeal laws that state that insurance companies operating in state Y cannot sell insurance to people in state X because state Y policies are not legally compatible) and try to break the monopoly that doctors and nurses enjoy....somehow. Telehealth? Maybe?


But, is it? Almost half of the funding for healthcare innovation is governmental, even in the US, and a competent public health system already has a strong incentive to reduce costs. So if a technology has the potential to reduce costs, a more efficient healthcare system would also pay for it - and if one doesn't, there are dozens that can - and if there is no path to it providing value overall in such a system, then it's going to be on balance less efficient anyway.

To note, a big issue in public innovation is that rich western countries, led by the US, HATE governments competing against the private sector. So if a government comes up with an innovative solution, they are generally disallowed from selling it, which hurts everyone except private companies.

This happened in my city a few years ago: it had a very early innovation in bike sharing, well before any VC-funded bike-sharing company, and other cities had expressed interest in paying my city to implement this service locally.

But because of laws banning public endeavors from engaging in commercial activities, this was struck down, hurting all the taxpayers in my city, citizens of my city that would have benefited from better service from the experience, and millions of citizens from interested cities which would have received better service and who would have saved money.

Another case in my province was the electricity utility's invention of the electric hub motor - which, over 40 years later, is now in widespread use due to its efficiency and low cost - but instead of exploiting that patent and selling those things, it had to partner with a private company which got exclusivity and mostly squandered it. Again, hurting almost everyone who might have benefited from lower-cost electric transportation, as well as the taxpayers here.

This is actually part of why China is eating everyone's lunch: they used their size and never really agreed to these rules, leaving themselves the opportunity to profitably invent at the state level, leading to more efficient state-owned enterprises that can often profitably outcompete the private sector.


> I for one happen to think it's great that our expensive medical system is financing all kinds of wonderful new technologies that benefit the world overall.

Does that factor in the situation of people unable to pay their medical bills?


If the entire rest of the world isn't a big enough market to be worth developing for, then maybe we don't need ML radiology; we just need medical reform.


The entire rest of the world isn't a market, it's many separate markets that need to be entered separately by overcoming different moats. Market fragmentation matters, especially in regulated industries like medicine.

But yes, medical reform is definitely something that might be helpful - technological solutions almost always aren't the best way for solving social/political problems.


The EU seems to have quite a lot of companies offering AI solutions in radiology:

https://grand-challenge.org/aiforradiology/companies/


Or the VA, which is a massive single-payer healthcare system that would love to cut costs.


> the largest source of bankruptcy in the US is medical costs

That's not what the article says.

"Two-thirds of people who file for bankruptcy cite medical issues as a key contributor to their financial downfall."

Those issues can absolutely include direct costs, but they also include things like not being able to work, needing a lot of day to day help, and other things that increase costs and reduce income even if the actual medical costs were largely covered.


I don't really know of one. I don't think automation is ever shunned as long as it is useful and known to be useful. Everyone likes things that save time.

There is essentially unrestricted demand for healthcare across the world. They will use the freed-up time either to talk to their patients more (or start to, if they don't already), or to move into other medical fields, or to increase the volume of screening (which may be harmful, but that's another matter). They probably don't want to do it because it won't really save them much time. Or it will save them time, but they have been burnt before. For example, early voice recognition was very poor and overpromised; that kept me from using it for ages after it became fairly good. It's still not actually better than typing, but it is closer now. Let's all focus on voice recognition that works before moving on to grander plans.


I wonder if you can replace a GP with a decision tree. You could update the tree as new research is done.

If you could collect reliable diagnostic data locally, you could serve this globally and for free.

It would also be a treasure trove of data about how we respond to various treatments.
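
To make the suggestion concrete, here is a minimal sketch of the idea in Python using scikit-learn. Every feature, threshold, and label below is synthetic and invented purely for illustration - this is a toy, not a clinical tool:

    # Toy sketch: a "GP as decision tree" over basic vitals.
    # All features, thresholds, and labels are synthetic and illustrative only.
    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(0)
    # Hypothetical inputs: [systolic BP (mmHg), weight (kg), temperature (C)]
    X = rng.normal(loc=[120.0, 75.0, 36.8], scale=[15.0, 12.0, 0.5], size=(500, 3))
    # Toy label: "refer to a doctor" when BP or temperature is far off baseline
    y = ((X[:, 0] > 140) | (X[:, 2] > 38.0)).astype(int)

    tree = DecisionTreeClassifier(max_depth=3).fit(X, y)
    print(export_text(tree, feature_names=["systolic_bp", "weight_kg", "temp_c"]))

    # "Updating the tree as new research is done" would just mean retraining
    # on new data/labels and redeploying the resulting tree.

The fitting part is trivial; the open question, as the replies below argue, is whether inputs like these carry enough of the relevant signal in the first place.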


> I wonder if you can replace a GP with a decision tree.

No, you can't.

> If you could collect reliable diagnostic data

And there's the reason. You can't do that either. There is a reason why GPs go through medical school.


> No, you can't.

Any sound reason, or are you either a) a defeatist, or b) a GP?

>There is a reason why GPs go through medical school

The input data would be basic things like:

- blood pressure

- weight

- images of the ear canals and throat

- blood, urine, saliva samples, perhaps analyzed in a regional centre

You don't need a ton of training to get the above from a patient and into a computer, and to ship the samples.


> Any sound reason

The job of a GP is actually probably one of the hardest to automate, because the GP's main (and often only) job is to extract information. And that _does not_ consist of performing plenty of tests, but of speaking to, and most importantly listening to, the patient.

> You don't need a ton of training to get the above from a patient and into a computer, and to ship the samples.

Great! And you know what good that would do to improve diagnostic accuracy? Zilch. Zero. There's a saying that '90% of diagnoses are made on history'. Now tell me why that would be different for an algorithm given identical information. If there were a simple answer to that, we'd already be running statistical models over patient labs all day long, which we're not.

> are you either a) a defeatist, or b) a GP?

I'm an epidemiologist and also a practicing anesthesiologist, which is why the statistical theories of people who have never set foot in a clinic to see what the job is really about make me want to jump off a bridge.


When I go to the doctor, this happens:

- Doctor says "Say ahhh" and looks in my throat with the thingy

- Doctor looks in my ear canals with another thingy

- On other occasions, my other vitals are taken, maybe some vials of blood, etc. Again, a student can do this.

I'm asked a few general questions, with some follow-up questions based on my answers.

Then the doctor puts this information - along with my patient history - into the decision tree in their head and comes up with a result. If the doctor is stumped, I'm sent to a specialist.

The above can be automated, plain and simple. It would also be an improvement over my experience of the health system - in Canada. I have never seen my GP pull up a multi-year graph of my blood pressure, weight, or whatever. What I am describing is a system for creating regular data points of the kind currently used in diagnosis. What I fail to understand is how you cannot see that there must necessarily be predictive value in such a database.

Even if only 80% of the job can be automated, public health would improve immensely if the global population can do regular checkups like the above cheaply.


> What I fail to understand is how you cannot see that there must necessarily be predictive value in such a database.

I can see all right. But you cannot see that your hypothetical database is lacking most of the info, because your doc actually mostly evaluates you by looking at your general composure; relying on X years of experience and a bit of knowledge, he shoves that into the really complex decision tree in his head: "Hmmmm... this guy looks mostly fine."

Now, you feed your database to the latest deep-learning shiny thingy that tells you: "this guy has X% chance of having a horrible cancer, but I can't explain why". So you enjoy many months of costly investigation because you don't want to miss something, right? And after the fact, it is discovered that the lack of standardization in measurements caused the algorithm to decide that the light hue in the office was a sign of cancer.

All that to say that yes, someday what you are imagining may well be possible, but we are really very far from having the technology to do that now.


I would have never thought of this, but I'm pretty sure gait, posture, and voice analyses can reliably be classified as "probably ill" or "probably well".


That's not really the hard or useful part. According to a radiologist and machine learning researcher[1]:

"It turns out that deep learning is a very good match for some of the most time consuming (and therefore costly) parts of medicine: the perceptual tasks.

We also saw that many decisions simply fall out of the perceptual process; once you have identified what you are seeing or hearing, there is no more “thinking” work to do."

[1]: https://lukeoakdenrayner.wordpress.com/2017/05/03/the-end-of...


> examples

The taxi system, until Uber and Lyft kicked their ant hill.


The thing about health care is that most efforts to automate it have failed. Arguably that's because no one "understands" the field, in the sense that no one can give a codified summary of the way they operate; each professional who's part of a health care pipeline takes into account twenty different common variabilities in human body/health/behavior/etc.

It's similar to the situation with self-driving cars, where the ability to do the ordinary task is overwhelmed by the existence of many, many corner cases that can't easily be trained for. Except in health care, corner cases are much more common: just seeking health care is already exceptional relative to ordinary life.


It's worse than even that.

The cartel arrangement is as described, but it's increasingly not even a great deal for the radiologists.

The business of radiology is increasingly centralized into teleradiology farms. That means radiologists work in shifts and are evaluated according to production metrics, like line workers in a factory.

The cartel arrangement will probably continue, as it is advantageous for people at the top of this food chain, but it's not an arrangement that's going to result in a lot of wealth and job security flowing to individual radiologists. Nor will it result in great outcomes for patients.


CTO of a CV+AI/ML startup developing a radiology solution eh? Let me ask you a couple of quick questions: What was your liability insurance like? How much coverage per diagnosis did you carry?

Let me make it simpler: How much blame was your company willing to absorb if your algorithm made a faulty diagnosis?


Great question! We did our trials at two overseas locations in parallel with doctors. All use cases were diagnostic for immigration purposes (e.g., detecting tuberculosis and other chest infections at border points of entry). Given the non-medical use: no liability insurance, no coverage per diagnosis. And since everything was run in parallel, double-blind, with doctors also doing reads, no blame had to be absorbed. Even once we got out of parallel, we still wouldn't have needed liability coverage.

The importance here was demonstrating efficacy, which we did fantastically well.

Once we prove efficacy for multiple use cases, we can at least remove the "oh, you computer scientists don't get it" argument and have adult conversations about how to advance the state of the art rather than continue to bleed patients dry.

I'll admit there are definitely barriers like what you mention. But those barriers are not some impenetrable force once we break down real issues and deal with them separately and start treating the problem as one we can solve as a society.


I can't help but think some of the barriers here involved proving the software in a situation decidedly different than a clinical setting. I would not be surprised if an immigration medical officer developed different views about diseases than a GP or ER doctor. They're not treating the person, they're not in a doctor-patient relationship with the person, they're not really even "diagnosing" the person, they're just deciding whether they're "too sick" to come into the country. Maybe if the person looks messed up in some other way, their chest x-ray gets interpreted a little more strictly.


>> I can't help but think some of the barriers here involved proving the software in a situation decidedly different than a clinical setting.

Totally agree. But science moves in baby steps and progress builds on progress. We started ML by doing linear regression. Then we moved on to recognizing digits. Then we moved on to recognizing cats. Suddenly, Google Photos can find a friend of mine from 1994 in images it appears to have automatically sucked up. That is amazing progress.

Similarly, our viewpoint as co-founders in the space was to solve a single use case amazingly well and prove AUC and cost/value metrics. The field won't be moved by me or you; it will be moved by dozens of teams building upon each other.


But AI theater being good enough to replace no-stakes medical theater (no-stakes because no one is liable to anyone for errors, in either direction) is a step - just not as big a step, nor one as relevant to any use case of importance, as is being sold upthread.


> Once we prove efficacy for multiple use cases, we can at least remove the "oh, you computer scientists don't get it"

No, you can't. Stating this is a clear proof that you don't understand what you're dealing with. In medical ML/AI, efficacy is not the issue. What you are detecting is not relevant. That's the issue. But I know I won't convince you.


From where does the efficacy come if what you are detecting is irrelevant?


They are detecting what they are testing for. But that's in most cases irrelevant to what happens to the patient afterwards, because it's lacking major connections to the clinical situation that will have to be filled in by a human expert.

So it does in fact work. Unfortunately, only in trivial cases.


Maybe, but then the problem isn't an issue with AI/ML, it's that humans just suck at math.

We're terrible at Bayesian logic. Especially when it comes to medical tests - and doctors are very guilty of this too - we ignore priors and take what should just be a Bayes factor as the final truth.


We're terrible at Bayesian logic, all right, but still better than machines lacking most of the data picture. That's why the priority is not to push lab model efficacy, but to push for policy changes that encourage sensible gathering of data. And that's _far_ more difficult than theorizing about model efficacy vs. humans.


So the answer is: zero. Not surprising.

Why does it surprise you then that doctors and patients don't take the solution seriously? You don't have any skin in the game! Whereas the doctors are on the hook for malpractice, and as for the patient, well it's life or death for them.


This is largely where the art of "labeling/claims" comes into play regarding how explicitly worded a "diagnosis" can be. There is a lot of room to play on the spectrum from truly diagnosing a patient with a disease (which requires the most evidence and carries the most liability) all the way down to gently prompting a healthcare provider to look at one record earlier than another one while reviewing their reading queue.


I'm pretty sure people said the same things (nothing will ever change, doctors will never advocate for or accept change) when radiology went from films to digital. I'm sure they said the same things when radiology went from having scribes to using voice recognition software (e.g. Nuance) for reports.

There seems to be a misconception that this is some kind of "all or nothing" thing, where AI will "automate away" radiologists. It's like a decade ago when everybody thought we were just about to "automate away" human drivers, except unlike driving, most radiology reads are by definition (i.e. a sick person) exceptional, out-of-baseline scenarios.

I think this is missing some things about radiology economics. There are indeed incentives to automate as much as possible, especially for outsourced radiology practices like Radiology Partners or people getting paid by pharma companies for detailed clinical trial reads. Organizations like these are getting paid a certain amount per read. If they can use software to speed up parts of their work while demonstrating effectiveness, they make more money. Eventually this drives down the price. There would still be a human in the loop to review or sign off on whatever the AI does, and to look for any anomalies that it misses. But there can be less time spent on rote work or routine segmentation, and more on the overall disease picture.

It's true the amount of imaging going on in the US has increased faster than both the population and the number of radiologists. At a certain point, the existing radiologists don't have time to read the images at any price. This gives the alleged cartel a few choices: graduate more radiologists, outsource reads, or use software to produce more output per radiologist. In the last case, which a self-interested group would obviously choose, they get paid the same but each individual patient pays less.


"...because seismic interpretation isnt effectively a cartel..."

I know some people who would disagree with you on that one!

Seriously, though, you're making an excellent point that I hadn't considered. Healthcare has a lot of "interesting" incentive structures and baked-in constraints that would prevent even a perfect solution from being widely deployed.

It's not the same as geology, for sure, even though there are some parallels in terms of image interpretation.


People have been trying to do this with expert systems, flow charts, and every other technology you can imagine, and have been for decades. My wife is a pharmacist, and they have software that is supposed to help them out with the bewildering number of medicines out there now. This seems like a trivial case compared to radiology - (here in the US) the FDA publishes guidelines, so just take those and turn them into code - but she finds it "not that much of a help", and it mostly gets overridden. "Every once in a while I'll get an alert that is helpful, but most of them are not helpful, even a little bit." "Mostly false positives."

And that's for a lot easier case than radiology.


Similarly, in infection control and antimicrobial stewardship, at this point pitching Yet Another Decision Support Tool will get you dirty looks.


> WHY ON EARTH WOULD THEY WANT THEIR WORK AUTOMATED AWAY?

Because any radiologist directly involved in the work of automating it away could capture multiple salaries.

> but mostly dragging their feet and hindering real progress, presumably because of the threat it posed.

It sounds more like they were not offered a stake, or were not sufficiently convinced it would work enough to accept a stake.


I agree with you and don’t know why people would think radiologists would be against automating their jobs away.

Most radiologists aren’t paid by the hour so it’s not like the longer it takes to review and diagnose the better. Having automation tools would allow a radiologist to do even more work and make even more money.

Unless someone literally thinks they won’t need an authoritative radiologist in the loop any longer. But that’s pretty silly since we can’t even automate a McDonald’s cook out of the picture.


>>> I agree with you and don’t know why people would think radiologists would be against automating their jobs away...Having automation tools would allow a radiologist to do even more work and make even more money.

I'd love to understand your viewpoint here. What you're describing would be awesome to a small segment of radiologists, but then what happens to the rest of them?

Further, why would the rest agree to it?!?! This isn't web ad sales or hosting, where anyone can come in, do a better job, win market share, and get rich. Rather, here, the limited set of radiologists would need to agree on standards of practice via the ABR - why would they do that if it means most of them suffer as a result?


> awesome to a small segment of radiologists, but then what happens to the rest of them?

The same thing, they would do more radiology consults.

This is just like any other tech that’s improved radiology, it’s reduced the cost so there are more consults done.

Let’s say that a review takes 10 minutes without this new tech. With the new tech the review takes 30 seconds. So now a single radiologist can do 120 reviews an hour instead of 6. That’s great, as every radiologist will make way more money.

And also more radiology consults will be done because things that are too expensive and time consuming now become quick and cheaper.

Most of them don’t suffer, they do better.

Again, this isn’t a situation where any radiologists are getting fired and any solution that doesn’t involve radiologists is stupid since you still need humans to confirm and take responsibility for diagnosis or confirmation.


From what I've seen of clinical collaborators in the life sciences, they're usually driven by pretty selfish incentives - publications, conferences, money, a way to push someone's pet theory if they happen to have one. They have demanding main jobs, with patients and hierarchical games apart from R&D, and rarely care about research progress as such. (And the research is mostly low quality, with obscure clinical perspectives.) They don't need to actively obstruct; they have enough reasons simply not to care.


To a short-term-oriented, VC-funded startup, any professional who likes to err on the side of caution immediately looks like a corrupt actor hindering progress for personal gain.


You can just look at this from the outside, without any startup's opinions: why are some products' and services' costs growing faster and out of lockstep with the rest of the economy? Some example niches come to mind: college textbooks, college tuition, specialty medical costs.

No one on the outside has to opine -- you can just look at the prices for some of these and know there are abnormal market forces at work.


That's some weird logic. Let's say the market is broken for textbooks. What does it have to do with what I wrote?


"Move fast and break things" is less compelling when the things in question is someone's grandmother.


I see a lot of parallels between this and air traffic control towers.

I think most can see that what air traffic control towers are doing is highly automatable (it's a queue), yet the industry has sabotaged it at every turn. Because the jobs it will eliminate are the ones that need to verify the system works correctly.

A similar thing happens with train operators. We are busy building self-driving cars, yet a self-driving train is an almost trivial implementation that we don't pursue because the train conductors would never get on board with such a system.


Self-driving subways do exist in a number of cities. But generally, train operators aren't a huge cost. While a taxi driver serves 1-2 customers at a time, a conductor can easily move around a thousand people in a train. Or, compared with trucks, freight trains can be ridiculously long, so paying one guy is really nothing in comparison.


>Because the jobs it will eliminate are the ones that need to verify the system works correctly.

I don't understand how you recognize that, but still blame them.

I don't ever want to trust pure algorithms or "machine learning" for my health. I believe there is just too much variability in humans, and the costs for outliers is going to be immense.


> because the train conductors would never get on board with such a system.

The train conductors wouldn't have any say in the operation.


The conductors' union could clearly organize strong opposition to this change. At least in France, they have proved they know how to do it.


“We will stop replacing conductors who leave” is not something they can oppose.


Well, I am guessing you are not an MD, and as such you do not understand what radiology really is as a profession. You certainly have very advanced technical knowledge about it, even much more than most radiologists. And that's precisely the catch: why are radiologists (mostly) non-technical people? The only possible answer is 'because what's asked of them as professionals is not technical'. Like many (all) other specialties, radiology is more art than science. It's the practice of interpreting images in context, and you can't separate the two.

So actually, radiology startups all fail on this crucial issue: to do a good job, you not only have to automate image interpretation, but really the interpretation of the whole EHR. And given the amount of poorly encoded information in there, machines fail now and will continue to fail in the foreseeable future.


No, I'm not an MD, but my co-founder was.

Globally, hundreds of thousands of radiologists have been trained over the years and have collectively achieved generally consistent practices. Radiology is pattern matching plus a set of very complex decision trees. They aren't magic, because we consistently churn out more practitioners who achieve the same consistent outputs given the same inputs.

Anyone trying to improve things is like every other scientist: they aren't trying to figure out the entire decision tree or every single thing, they are trying to chip away at complex problems little by little.

I also strongly disagree that "radiology is more art than science", because if it were, radiologists wouldn't be able to agree on diagnoses.


Trivial radiology is pattern matching. The radiology you need a radiologist for is not. Yes, you could replace part of radiology and some radiologists with algorithms today. Unfortunately, those are mostly the cases where we don't need radiologists anyway.

I'm sorry, but in many cases, even for important diagnoses that impact the patient, two radiologists who aren't allowed to communicate will sometimes reach widely different conclusions. Additionally, if you've interacted with radiologists, you were certainly able to see that their analytical and logical thinking capabilities are often far below those of a CS major.

A complex problem in radiology is not "this patient has TB because [chest X-ray]". It's more like "take this patient who has had 30+ surgeries in the past 5 years and tell me whether his problem is more likely to be diagnosis 1 or diagnosis 2". And the problem with those cases is that you will never reach the critical mass of training data required (within the capabilities of a single startup), because those cases are too rare. You'll need the collaboration of the health system as a whole for that, _i.e._, state policies. So you can't really chip away at the complex problems on your own.


The reason a doctor is educated in general anatomy is so that when their specialty hits edge cases, they are able to reason about what could be the underlying cause. This reasoning in a sparse information environment is exactly the scenario AI/ML is worst at.


If two radiologists can consistently reach wildly different conclusions, then it can really only mean one of two things: either one or both radiologists are incompetent, or there is not enough information available in the diagnostic tools we have to make a diagnosis.

In both cases I would say this counts against the radiologist and for an AI solution. I would rather have a continuously improving AI than a coin-tossing radiologist.


I don't see this as a response to the previous post at all. It was about the technical issues associated with a professional data interpreter, outside the simple image being interpreted. This is just cynicism about money and motivation.

Is Radiology affected by the same external factors as seismology? Does one image area depend deeply on surrounding features? Are there external rules that can override what the image seems to present?


> Further, Radiologists standards of practice are driven partly by their board (The American Board of Radiology) and the supply of labor is also controlled by them (The American Board of Radiology) by way of limited residency spots to train new radiologists.

Perhaps it is time to found the American Board of Computational Radiology (or Medicine)? There seems to be a chilling effect on tech innovation in the medical space in the US. On recent trips to the dentist, it seems like most of the cool new tech is coming out of Israel.


Interesting, how would the standards of the Board of Computational Radiology be set? Is the implication that the current standards are too high?


I’m not sure, but it removes the conflict of interest (and subsequent gatekeeping) created by radiologists who don’t want technology to automate away some of their work.


There's a lot to unpack here! In the USA, the ABR regulates human radiologists via board certification. Medical technology is traditionally regulated by the 510(k), PMA, and de novo pathways at the FDA. Of course, these products still have to demonstrate value in order for major stakeholders (hospitals, radiology practices) to purchase them.

And using these products does not absolve the ordering doctor, the radiologist, or the hospital of legal liability for misdiagnosis. In fact (IANAL, and this is a somewhat novel area of the law), any AI product that functions as a drop-in replacement for a radiologist might be held liable for misdiagnosis that leads to harm. These liabilities could become quite large for a product deployed at scale: a single misdiagnosis causing death can lead to a settlement in excess of 10 million dollars.

In summary, there's quite a bit more to the issue than simple "gatekeeping." It might be appealing to blame radiologists for these issues, but there's a much larger system at work that's designed to ensure quality and safety for patients. This is not AdTech - lives are at stake and people can get hurt. Now, this definitely comes with a cost to innovation, but it's going to take more than just a few MDs to reinvent the economics and law of computers practicing medicine on people.


I have very little experience with the existing medical establishment, so forgive me if I've imagined a scenario that is not relevant. I would like to be able to go to have a scan done, receive the scan data, then have the choice of submitting that to a radiologist of my choice or to an AI service of my choice that can read and recommend next steps. It seems like such a scenario is stifled by the existing way of doing things.

My experience with getting a sonogram was that the sonogram wasn't that expensive, but getting it read was hugely expensive. I understand that there are issues of liability, but it's really frustrating that I'm saddled with a high deductible healthcare plan where access to useful medical stuff is stuck behind 3-4 digit costs. Want antibiotics, inhalers, ADHD meds - all of which are pretty cheap in generic form? Pay $100 to the doctor for the privilege. People have very little agency in this system.

I guess at the end of the day I'd like to see open data (I'm able to get the images/data from all kinds of scans & diagnostics), and some kind of transparent system for submitting my data for diagnosis or analysis. There may be caveats & waivers, but I'd be willing to pay $10 to an AI service to tell me "You definitely need to consult with a radiologist based on the data presented" before I pay a radiologist orders of magnitude more to tell me that everything looks OK.


You're making some very interesting and valid points. You've correctly identified that when receiving bundled services you lose the ability to negotiate or comparison shop on the basis of price. This is unfairly combined with a legal presumption that when the healthcare system generates a bill, the bill is valid until proven otherwise. Now, although we probably need more physicians, I don't think the limited supply of physicians is the primary reason for this situation - it's more due to increasing market concentration of health insurers on one side and hospital systems on the other, leading to regional monopolies that don't compete on price. In fact, with the decline of private practices and the rise of hospital systems, physicians receive less than 10% of all healthcare revenue. I can see that you'd like to unbundle your healthcare and regain control over prices, and I think that's a very reasonable thing to want.


The problem with your thesis is that many radiologists are paid per read. They're essentially contractors. These independent radiologists would gladly pay 100k/year for an AI that could do their jobs. Even if the tool didn't do diagnosis, just made them 50% faster, they would gladly pay a huge amount of money.

Contrast this with the field of Radiation Oncology. There are already AI auto-contouring solutions that you can buy today. It's a better fit given the limitations of AI:

1) Contouring can take hours per patient, so more value is gained by automating it

2) Normal structures are, well, normal; that is, they are not the diseased part of the patient. If the AI can't quite figure out where the kidney is and fills it in with an average-looking kidney, hey, that's probably an alright guess!

3) The AI doesn't have to be perfect; dosimetrists and docs can quickly check and update the result if the algorithm fails. Everyone is already familiar with this workflow due to the atlas segmentation algorithms that were state-of-the-art before.

4) Similarly, the output is easy to understand. If the AI did something crazy because of a failure to generalize, it will be obvious. Not so for a diagnostic AI.

5) Clinicians have the final say on which contours are used for treatment. The liability for contour assistance is lower for the software vendor.


The US healthcare system is slowly migrating from a fee-for-service model to a value-based model where at least some of the financial risk is shifted from employers and insurers to providers. The managers running those provider organizations thus have a direct incentive to adopt new technology if it actually works, even over radiologist objections. So far most radiology automation software hasn't generated a clear cost savings. That may change as technology improves.


Things are changing, but as of 2019, by a whisker, most radiologists still own their practices, and across all specialties 56.6 percent of physicians in the US are members of small practices. Physicians, especially specialists, tend to work for and with other physicians. In my area, at least in the recent past, all the cardiologists and all the urologists worked for the same small practices organized around their specialties. I'd guess that tends to blunt pricing pressure on providers, at least locally (see here for some stats: https://www.mpo-mag.com/issues/2019-10-01/view_columns/a-loo...).


The general trend is that smaller practices are going away. More and more physicians are becoming employees of larger provider organizations. Small practices just aren't very viable any more because they lack the negotiating power to get high reimbursement rates from payers, and they don't have the economies of scale necessary to comply with interoperability mandates.

When new physicians complete their training fewer and fewer go on to start or join smaller practices.


Cost savings don't have to be near-term. Imagine a doctor misses something on a reading and the patient doesn't get the care they need: lawsuits are expensive. So you can have software that helps doctors do their job better, which results in better patient outcomes. That is something hospitals are buying today for their radiology groups.


> where at least some of the financial risk is shifted from employers and insurers to providers

Do you have any evidence to back that up?


The ACA created a capitated payment system for Medicare that providers can opt into. I'm not sure what evidence you're looking for, other than that the definition of capitation is "fee per patient" as opposed to "fee for service." Some states like California also have capitated plans from private companies. https://www.cms.gov/Medicare-Medicaid-Coordination/Medicare-...



Quality-of-care provisions in Medicare/Medicaid/ACA all help to shift costs to the practitioner if care is poor or has bad outcomes.


To balance that, do you have any comments on the arrogance and incentive problems in deep learning? :-P


Good points. But I genuinely wonder what role algorithm brittleness plays here.

“Fitting only to the test set” (see Andrew Ng quote in original article) is an acute concern in my circles: digital pathology in cancer research

See “Google’s medical AI was super accurate in a lab. Real life was a different story.”

https://www.technologyreview.com/2020/04/27/1000658/google-m...


>> In fact, most of our R&D and testing was done overseas in a more friendly single payer system.

So are CV/ML radiology systems deployed anywhere globally? Where, and how successful are they?

And if not, why ?


> fee-for-service model for Radiology

It's not exactly a fee for service model that's the problem. It's the monopoly over the supply of labor.

Any business and union that manages to get its competition outlawed is guaranteed to abuse that position.


A family member worked on the board of a hospital; they raised charitable donations to buy more powerful software for the radiologist.

The radiologist can now work 10x faster and still bills the hospital the same amount.

The Doctor's Guild is exceedingly powerful.

I really wish Walmart or Amazon would get into providing healthcare services on the long tail - a lot of common stuff.

It sounds odd, but both those companies are built around ripping margins out of the value chain and not keeping much for themselves.

Ok, maybe not either of them ... but something like that: The 'Walmart of Healthcare' that revolutionizes cost.

Also - there are enough medical practitioners who would work there. Enough of them do care about patient outcomes, costs, etc.


You would expect the radiologist to be paid less because he/she does more work now?

My salary did not go down when we migrated to more efficient tech.


They get paid per review. They're now doing considerably more reviews thanks to the new software, paid for by donations.

Cost to patients is the same, but processed faster. Doctor making bank.


You're misconstruing the parent's point. The radiologist can be paid the same, even more. Their point is that while the cost (at least in time) of the radiologist's work was cut to 1/10, individual patients' bills remained constant.


There is zero chance the software allowed the radiologists to work 10x faster... what?


Yeah that's my hyperbole. I should have said 'significantly faster'.


Yeah the article quoted gave very few details on the supposedly inconsistent performance of the model and lots of details on how few radiologists used it. Doctors (and other regulated professions) are a cabal that need to be broken up.


> Doctors (and other regulated professions) are a cabal that need to be broken up.

What do you see as the alternative to self-regulation? Some government office staffed with bureaucrats who have no idea about the realities of the actual work being done?

I got an engineering undergrad degree and had no interest in pursuing professional certification, but I certainly understand the importance of it for those practicing in a way that may harm the public's trust, and it made me appreciate the role that professional bodies play in regulating who gets to represent themselves to the public as a lawyer, doctor, electrician, etc.


I am certainly not in favour of government offices staffed with bureaucrats. If anything, I'd say we should reduce the difficulty of getting licensed as a doctor/lawyer/electrician. Let universities open more and larger med schools and have them be untethered from the self-regulating bodies.


But surely regulating the profession is tightly tied to accrediting the education that is the gateway to it. How do you propose to "untether" these?


Universities handle the accreditation of all other faculties, so why not have them also handle medicine, law, and engineering? As far as I'm aware, there's no guild of philosophers that can control the number of students entering philosophy school. Universities would respond to supply and demand just as they do in any other faculty. Secondly, make medicine an undergraduate program, as it is in most of the world, which would reduce the financial burden doctors have to take on to complete the degree. This would also make the supply of doctors more elastic.


Would you say this is similar to challenges in other fields such as law?


Lawyers don't really control their own supply the way doctors do, which is why there is a great overabundance of people with law degrees in the country. AI has actually been used in a number of legal contexts, like building normalized contracts, or paralegal work. It's also because a lot of the highly paid legal work is pretty hard to automate in the same way, because it requires much more understanding of precedent or other nebulous ways of interpretation that AI isn't suited for.


> Lawyers don't really control their own supply the way doctors do

Bar associations do control standards for qualifications and acceptable on-ramp paths which directly governs supply (in fact, the oversupply differs in jurisdictions as a direct result of these decisions).

A key difference is that the legal pipeline isn’t sensitive to federal funding to govern supply of qualified new lawyers the way the medical pipeline is for doctors, though; there’s nothing analogous in law to the reliance on medicare funding of residency slots in medicine.


The medical pipeline doesn't have to be sensitive to federal funding either. There is nothing preventing residencies from being privately funded (besides the fact that most are currently publicly funded).

Medicare funds this out of a broad idea of it being a public good if there are more physicians. Note, there is no obligation that physicians work in public service after residency. This is in contrast to if you go to med school on a military scholarship (in which case, there is an obligation to serve).

In other words, if medicine weren't cartel, the government wouldn't need to pay doctors to train new doctors.


>> A key difference is that the legal pipeline isn’t sensitive to federal funding to govern supply of qualified new lawyers the way the medical pipeline is for doctors, though; there’s nothing analogous in law to the reliance on medicare funding of residency slots in medicine.

This is a myth. Residency funding from Medicare is an excuse, because the funding is so limited and profits are so high. The real bottleneck here is the number of seats opened up by the specialty medical boards. Residents earn very little, under six figures, yet billings for residents are multiples of that. Even after resident stipends, benefits, tooling, and infra, I'm certain medical billings more than cover costs.


Partly -- law controls standards to some extent, but does not control supply necessarily.


It's probably a challenge for any profession with a legal monopoly, where service X must be performed by individual Y, and the Ys also get to choose the quality of X and the number of Ys in the market.


So, there are lots of countries with a shortage of radiologists, some of which could probably use any halfway-effective AI solution if the alternative is no radiologist available at all. Perhaps this sort of thing should be started in a medium-income country rather than the wealthiest? Not the ones that cannot afford the equipment at all, but the ones whose trained radiologists keep leaving for wealthier countries.


So you're asserting that the reason your, and other, companies didn't do well is not that you couldn't live up to your promises but rather that there is a grand conspiracy to stop progress?

By the way, have you checked out https://timecube.2enp.com/?


>> grand conspiracy

Umm, I'm asserting something like "medicine is not perfect competition and thus prices are not competitive." If you want to call that a "conspiracy" you can, but economics offers great explanations for such setups. I think many of us in technology assume all industries are driven by merit, cutting-edge technology, margins, and competition.

In reality, not all industries are like this, and that shouldn't be surprising. Computers go down in price. So do cloud service costs. So does RAM. But medicine stays expensive. "Conspiracy" is a shallow explanation - it is just economics; the market isn't perfectly competitive, and progress is hindered to maintain scarcity.


“Grand conspiracy” seems a bit uncharitable IMO. He’s just saying that the incentives aren’t aligned, which legitimately seems like an issue in this space.


It's sad you decided to give up because it wouldn't make money easily in the US. As you said, there are plenty of countries around the world that would have loved to lower medical costs without kowtowing to the doctors. Once it was a proven technology in places like Asia, I'm sure you would have been able to make it back into the US a lot easier.


Interesting, can you give an example of a radiologist hindering progress? You make an interesting point about radiologists setting practice standards - what alternative do you propose? You may also want to consider that radiologists don't determine practice standards in a vacuum - they have to serve the needs and expectations of their clinical colleagues.


It takes one group of great radiologists who have a bit of altruistic/capitalistic/venture side, doesn't it?


Yes. Or the right economic setup for societal gain, where we can compete on value. Going from fee-for-service to value-based care will be great. In the meantime, setups like https://www.nighthawkradiology.com/ are also great because they drive efficiency; I just wish they were more prevalent.


This is a really good point and example. I spent 10 years in mammography software, and I saw first-hand how many outside factors can impact a physician's decision whether or not to biopsy a given artifact on an image.

Things like family history, the patient's history, cycle timing, age, weight, and other risk factors all play a role in a smart radiologist making the right decision for that patient. And the pattern recognition on top of that is really hard - it's not just about the pattern you see at a particular spot in an image, it's the whole image in context. Could ML get better over time at this? Sure... but they've been using CAD in mammography for decades and it still hasn't replaced radiologists at all.

Could a model be made to include those other variables? Sure... but again, the complexity of that kind of decision making requires a lot more "intelligence" than any AI or ML system exhibits today or, in my mind, in the foreseeable future. Just collecting that data in a structured, consistent way is more challenging than people realize.


> I spent 10 years in mammography software, and I saw first-hand how many outside factors can impact a physician's decision whether or not to biopsy a given artifact on an image.

This is slightly tangential, but I'm curious about your perspective on a classic example of medical statistical illiteracy. Whenever surveyed, the strong majority of doctors vastly overestimate the odds of a true positive mammogram (half of them by a factor of 10!!), due to flawed Bayesian thinking and the low base rate of breast cancer.[1]

Does your anecdotal experience contradict this data? If not, wouldn't two minutes of stats education (or a system that correctly accounted for base rate) utterly swamp intuition-driven analysis of tiny artifacts? Or is it simply that, through folk wisdom and experience, they implicitly adjust other terms in their mental calculus in order to account for this huge error in estimating one factor?

[1] https://blogs.cornell.edu/info2040/2014/11/12/doctors-dont-k...
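
To spell out the puzzle from [1] with a quick worked example (the prevalence, sensitivity, and false-positive numbers below are the standard textbook figures, assumed here for illustration; they are not data from this thread):

    # Base-rate worked example (textbook-style numbers, assumed).
    prevalence = 0.01            # P(cancer) among screened women, ~1%
    sensitivity = 0.90           # P(positive mammogram | cancer)
    false_positive_rate = 0.09   # P(positive mammogram | no cancer)

    # Bayes' rule: P(cancer | positive) = P(pos | cancer) * P(cancer) / P(pos)
    p_positive = (sensitivity * prevalence
                  + false_positive_rate * (1 - prevalence))
    ppv = sensitivity * prevalence / p_positive

    print(f"P(cancer | positive mammogram) = {ppv:.1%}")  # about 9%

With these numbers the true-positive probability is roughly 9%, so a doctor who answers "about 90%" is off by exactly the factor of 10 mentioned above.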


The doctor diagnosing a patient isn't solving a puzzle of the kind posed here.

They are doing, as the previous comment said, interpretation. In practice, much of their thinking is profoundly rational and Bayesian.

Human (and animal) thinking isn't primarily cognitive, i.e., explicit reasoning. It is the application of learned concepts to sensory-motor systems in the right manner.

We don't look to dr's to formulate a crossword puzzle when a patient arrives; we look to them to be overly attentive to yellow skin when the patient's family has a history of alcoholism.


I'm not convinced this couldn't just be a large personal data set fed into an algorithm.

Doctors barely have any data as it is. I think personal bio testing and monitoring is going to be a huge market and medical paradigm shift.

Would you rather have your heart rate and temperature constantly monitored for months, or get them checked once a year by a GP, to see if you have hypertension or any negative markers?


With infinite amounts of perfect information and infinite computing time, all question-answer problems are trivially a hash-table lookup.


> In practice, much of their thinking is profoundly rational and Bayesian.

Right, this was the third option I mentioned; I'm certainly not leaping all the way to the conclusion that one shouldn't listen to a doctor about the best course of action after mammogram results[1]. If their explicit understanding of mammography's false positive rate is so incredibly flawed, there is presumably an implicit counterbalance in the calculus that's built on experience (both their own and their mentors'/institution's), or an _order-of-magnitude_ error would show up in patient outcomes. I'd guess that this and the other instances of critical thinking failure that plague medical culture have their rough edges sanded over time, through decades of what is effectively guess-and-check iterative evolution, combined with institutional transmission of this folk wisdom.

Though I disagree that I would call this "profoundly rational", as IMO leaving explicit reasoning tools on the table, instead of intentionally combining them with intuition/experience/epistemic humility, is suboptimal. Iterative evolution is not an efficient process, and adding an attempt to explicitly model the functions you're learning can be a powerful tool. It's very difficult for me to imagine that a doctor's explicitly internalizing the basics of Bayesian reasoning wouldn't make them at least marginally more effective in terms of patient outcomes. Medical history is full of blind alleys built by medical hubris like your comment's "doctors know A Deeper Truth in their soul that defies explicit logical articulation". (Though I should note I don't claim to have a pat answer to this problem: one can theorize about improving a specific doctor's effectiveness, but scaling that to the whole culture is another matter, and can bump into everything from supply problems to downstream cultural impacts with negative consequences.)

[1] Though with knowledge of flaws in such basic reasoning skills in one subpart of the total calculus, a patient can't rationally escape updating in the direction of checking their reasoning more thoroughly. Medicine is a collaborative endeavor between patient and doctor, both bring flaws in their reasoning to the table, and stronger evidence of a huge flaw in reasoning should lower confidence in the overall decision (though at a much lower magnitude, for the reasons we both describe here). This is the same logic that doctors use to rationally discount patient's opinions when they don't perceive them as coming from, eg, overly emotional reasoning.


Although we are tempted to think "automate away doctors jobs and take their salaries", it's more achievable and more valuable to provide existing doctors with better UI/UX which helps them make better decisions.

Bayesian probabilities could be displayed, other factors presented, and ML could be designed in an explainable way which helps the doctors. But that's too hard, requires too much talking to people, and too much respect for existing institutions, and not enough disruption! Sigh...


Yes, I agree. I was responding narrowly to the parent comment's premise, about full replacement (in theory). It's still useful to think about as a theoretical threshold, even if the path to commercialization is almost definitely through assistance.


This isn't really surprising to me, considering that it's not usually a doctor's job to estimate the odds of a true positive - false positives are far less of a problem than false negatives when doing an initial scan for cancer.


Biopsies aren't cost- and risk-free, including from a patient health and mental health perspective. On top of that, the decision space isn't simply binary: there are multiple types of biopsy procedure, and understanding (even implicitly) the probabilities in play can guide those decisions.

The Cult of the Doctor never ceases to amaze (and frankly disgust) me. I think it's absolutely a noble profession, and that doctors are generally smart and conscientious. Even some of the classic complaints are understandable. Eg, my incredibly kind, humble, and open-minded brother-in-law occasionally betrays some extreme disdain for his patients, a common complaint about medical culture. I continue to be a little put off by it, but I understand that he has to deal with some incredibly stupid things from most patients, and it's easy to see how taxing maintaining the welfare/autonomy balance can be. I try to keep this in mind whenever a doctor has an initial defensive reaction to my attempts to be an engaged and informed patient.

But, the lengths to which people bend over backwards to make flimsy excuses (like your comment) for massive errors in reasoning is just astonishing. I can't think of a single other context in which people so desperately clutch their pearls at any suggestion that a professional culture or institution could be improved.

Doctors are people, and people have flaws. They go through a training and acculturation process, which may also have flaws. This shouldn't be controversial. The only reason I can think of is that patients are no better at dealing with uncertainty than doctors are; the idea that you're being treated by a deity levitating in absolute epistemic certainty is more comforting than the idea that you're dealing with a smart, conscientious, trained human that nonetheless has human limitations (and that the field of medicine itself has plenty of uncertainty).


In my field (space) we have an unspoken mantra that autonomous systems should aid human decision making.

It's just so much easier to build a system that allows a human to focus on the tough calls, than it is to build an end-to-end system that makes all the decisions. Only in the most extreme examples does full autonomy make sense.

If there were one doctor in the world, I'd build an autonomous mammogram machine and have him periodically audit the diagnoses. Otherwise, better tools is the way to go.

I noticed this when visiting the OBGYN for sonograms to check the development of our children. The tools are really good. You can easily measure distances and thicknesses, visualize doppler flow of blood, everything is projected from weird polar space (what the instrument measures) to a Cartesian space (what we expect to see), and you can capture images or video in real time.

Sure, the cabal factor is real, as is the curmudgeon doctor, but I think we should be building tools, not doctors. We know how to build doctors.


Building tools to aid humans seems like the best of both worlds. This is already happening in some radiology subspecialties: The autonomous systems can highlight potential areas of interest but it’s up to humans to make the final call. For some cases it’s a quick and easy call, but for tougher calls the radiologist can bring other factors into consideration.


A counterexample from your field: would a human add value adjusting the boosters to vertically land Starship?


Not a counterexample, but a legit question in spirit, since autonomy vs. human will always be a reasonable choice for some parts of a system. I just don't think real-time control of a system that's never been flown by humans is a good example of where that choice is hard. You can describe it with equations, the solutions are well understood, there are no humans available who can do it, and there's a well-understood need, because fly-by-wire was already necessary to control some aircraft and will be necessary to control more air/spacecraft in the future. Classic computer problem.

(But I'm guessing Starship will have human controllers in/on the loop to do interventions before or during landing if required.)

Doctor diagnosis is a good example of when that choice is hard to make. Hard to describe with equations, hard to model, not a well-understood need since professional doctors exist, decades of experience has been accumulated, huge swaths of industry are dedicated to helping doctors, etc etc.

What I meant by "Allow the human to focus" definitely should be interpreted to mean removing small, multi-dimensional, fast, annoying things like real-time stabilization during landing.


I see the same thing in natural language processing. A lot of important details come from outside the four corners of the document.

Ironically, I often find myself in the unenviable position of being the machine learning person who's trying to convince people that machine learning is probably not a good fit for their problem. Or, worse, taking some more fuzzy-seeming position like, "Yes, we could solve this problem with machine learning, but it would actually cost more money than paying humans to do it by hand."

Part of why I hate hate hate the term "AI" is that you simply can't call something artificial intelligence and then expect people to understand that it's not actually useful for doing anything that requires any kind of intelligence.


You are absolutely correct. In fact, most NLP software ignores the formatting of documents, which conveys a lot of information as well. For example, section headings must be treated differently from the text that makes up the body of a section. It's very hard even to determine section headings, and then it's hard to take advantage of them, since the big transformer models simply accept a stream of unspecialized tokens.
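
As a toy illustration of the point (the heading heuristic and the [HEADING] markers below are invented for this example; real pipelines need far more robust layout analysis):

    # Toy sketch: preserve document structure before flattening to tokens.
    def looks_like_heading(line: str) -> bool:
        # Crude heuristic: short, title-cased, no terminal punctuation.
        s = line.strip()
        return 0 < len(s) < 60 and not s.endswith((".", "?", "!")) and s == s.title()

    def mark_structure(document: str) -> str:
        out = []
        for line in document.splitlines():
            if looks_like_heading(line):
                out.append("[HEADING] " + line.strip() + " [/HEADING]")
            elif line.strip():
                out.append(line.strip())
        return "\n".join(out)

    doc = "Findings\nThe lungs are clear.\nImpression\nNo acute disease."
    print(mark_structure(doc))

A model fed the marked-up stream can at least distinguish section headings from body text, instead of receiving one undifferentiated stream of tokens.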


There's an old AI joke that all actual problems reduce to the artificial general intelligence problem---everything else, by definition, doesn't require intelligence.


There's some truth to that. But I'd also argue that there's a tendency to try to bill every single kind of spinoff technology that the artificial intelligence community has produced as artificial intelligence.

Which is a bit like characterizing mixing up a glass of Tang as a kind of space exploration.


But isn't this just what ML should be good at - taking a huge number of data points and finding the patterns? Or are you saying it's not an issue of ML working poorly but rather one of there not being a good enough data set to train it on properly?


One of the main tasks that doctors do is to take patients' vague and non-specific problems, build up a rapport with them, understand what is normal and what is not, deal with the "irrelevant" information present at the time, and distill the results into a finite tree of possibilities.

In principle, this would be a great task for an ML algorithm. It's all conditional probability. But every such system has failed to do that well -- because the "gut feeling" the doctor develops is founded on a whole host of prior information that an ML algorithm won't be trained on: what is "normal" for an IV-drug addict, or a patient with psychosis; how significant is the "I'm a bit confused" in the middle-aged man who was cycling and came off his bike and hit his head? Are the burns on the soles of this child's feet consistent with the story of someone who ran over a fire-pit barbecue "for fun", or are they a non-accidental injury? It's a world of shades of grey where, if you call it the wrong way, ultimately someone could die. Doctors do Bayesian maths, and they do it with priors coming from both their own personal experience as members of society and their professional training. That is, in my ignorant opinion, the main distinction between what I do -- oft-called "academic medicine" or "academic radiology" -- and clinical medicine. The former looks at populations. The latter looks at individuals.

In other words, I don't think it's even possible to codify what the data the ML algorithms should be trained on -- they're culturally specific down to the level of an individual town in some sense; and require looking at huge populations at others.
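To make the prior-dependence concrete, here is a minimal sketch of the Bayesian arithmetic (all numbers invented for illustration, not clinical figures): the same finding, with the same test characteristics, means something completely different depending on the prior the clinician brings to it.

    # Minimal sketch: same finding, different priors. Numbers are invented.
    def posterior(prior, sensitivity, specificity):
        # P(disease | positive finding) via Bayes' rule.
        p_pos = prior * sensitivity + (1 - prior) * (1 - specificity)
        return prior * sensitivity / p_pos

    sens, spec = 0.90, 0.95  # hypothetical test characteristics

    for label, prior in [("general population", 0.001),
                         ("high-risk history", 0.20)]:
        print(label, posterior(prior, sens, spec))
    # general population -> ~0.018; high-risk history -> ~0.82

The same positive finding is almost certainly a false alarm in one patient and almost certainly real in the other; the prior, not the image, does the deciding.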


> In other words, I don't think it's even possible to codify what the data the ML algorithms should be trained on -- they're culturally specific down to the level of an individual town in some sense; and require looking at huge populations at others.

This is insightful. ML is by definition generalizing, and should only be used where it's OK to generalize. There is an implicit assumption in use cases like medical diagnosis that there is a latent representation of the condition that has a much lower dimensionality than the data, and that the model is trained in such a way that there are no shortcuts to generalization that miss out on any information that may be important. The second condition is the hardest to meet, I believe, because even if a model could take in lots of outside factors, it probably doesn't need to in order to do really well in training and validation, so it doesn't. The result is models that generalize, as you say, to the population instead of the individual, and end up throwing away vital context to the personal case.

I also believe this is an important consideration for many other ML applications. For example, those models that predict recidivism rates. I'm sure it's possible to build an accurate one, but almost certainly these models stereotype in the way I mention above, and do not actually take the individual case into account, making them unfair to use on actual people.
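A toy sketch of that shortcut effect (synthetic data; scikit-learn assumed available): a model that leans on a spuriously correlated feature looks excellent on data drawn from the training distribution, then falls apart when the correlation breaks.

    # Toy demonstration of shortcut learning on synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def make_data(n, shortcut_corr):
        # Weak, noisy "real" signal plus a shortcut feature that
        # matches the label with probability shortcut_corr.
        y = rng.integers(0, 2, n)
        real = y + rng.normal(0, 2.0, n)
        shortcut = np.where(rng.random(n) < shortcut_corr, y, 1 - y)
        return np.column_stack([real, shortcut]), y

    X_tr, y_tr = make_data(5000, shortcut_corr=0.95)    # shortcut present
    X_dep, y_dep = make_data(5000, shortcut_corr=0.50)  # shortcut broken

    model = LogisticRegression().fit(X_tr, y_tr)
    print(model.score(X_tr, y_tr))    # ~0.95 -- looks great
    print(model.score(X_dep, y_dep))  # ~0.60 -- only the weak real signal is left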


On codification:

I actually disagree with this, but only slightly.

Imagine if, instead of face to face, doctor interactions happened via text. The doctor's questioning could be monitored, the decision-tree patterns observed could be codified, and those patterns could be weighed against the healthcare outcome (however defined).

What is missing, however, is the Counterfactual Reasoning. The "why" matters. The machine cannot reduce the doctor's choice of decision trees from all possible combinations, only that which it observes the doctor perform.

Tail-cases like rare genetic disorders would often be missed.


Tail-cases like rare genetic disorders are often missed by doctors too. I have several friends who had Lyme disease with fairly serious complications (in the Northeast; not that Lyme disease is that rare - it's actually much more common than expected). Each of them got misdiagnosed for multiple years by multiple different doctors until finally getting the correct diagnosis/treatment. So every system is fallible.


Personally, my experience with doctors has involved neither rapport nor a nuanced understanding of my specific situation.

That is really not what they do, are trained to do or have time to do.


"There's not enough good data to train it on properly"

Bingo: you're in a _very_ data-poor environment, compared to something like predicting a consumer's preferences in videos or identifying and segmenting a bicycle. The external data is also very qualitative and hard to encode into meaningful features.


Right, but "the training data is bad" is a very ML centric way of looking at the issue. It pushes all the difficult parts of the problem into the "data prep" sphere of responsibility.


Note that there are different ways in which data can be bad: (i) the image resolution isn't good enough, or there are too many artifacts and noise; (ii) it's woefully incomplete -- doctors collect and use information from channels that aren't even in the image: regular conversations, sizing up the patient, and, if the doctor has known the patient for a long time, a sense of what is not normal for that patient given his/her history, etc.

Some of the issues that have been discussed in the thread can be incorporated in to a Bayesian prior for the patient, but there is still this incompleteness issue to deal with.


The first step would be to build an information collection pipeline that is in the same league as the doctors. That alone will be a monumental effort because doctors have shared human experiences to draw from and they are allowed to iteratively collect information.

I'm just complaining that it seems fantastically reductive to call the absence of such a pipeline "bad data" because developing such a pipeline would be a thousand times the effort of implementing an image detection model. Maybe a million times. It will require either NLP like none we have seen before or an expert system with so much buy-in from the experts and investors that it survives the thousand rounds of iterative improvement it needs to address 99% of the requirements.

Comparing issues like low resolution and noise to such a development effort seems like comparing apples to... jet fighters.


How else would you describe the issue?


Structural. The problem hasn't even been correctly formulated yet -- and it will take an enormous amount of work to do so.


The gap is in interpreting and applying those patterns. Building expert systems is expensive, but ML knocks the low-hanging fruit -- surfacing patterns for experts -- out of the park.


ML should be good at drawing basic conclusions. End users are misunderstanding the boundary between basic and advanced.

Or, to put it another way, everyone agrees there's a difference in value and quality of output between an analyst with 1 year of experience and one with 10, right? So why are we treating ML like it should be able to solve both sorts of problems equally easily?

I have faith it will get there. But it's not there yet, in a general purpose way.


Because people like Hinton are outright saying that it already is there.


Incorporating more variables into the model wouldn't be sufficient. You would also need to get the input data for those variables into a form that algorithm could consume. Often the raw data simply isn't recorded in the patient's chart, or if it is recorded it's in unstructured text where even sophisticated NLP struggles to extract accurate discrete clinical concept codes.


Mammography is one of the most difficult to interpret. You need more data, like the age and family history, to decide on the next step.

Radiology is huge. I am sure ML can help in some of the specialties (it does not need to be all or none). The reason it is not happening is that the medical system refuses to give in.


Surely you've seen the improvement over the last 5-6 years from machine learning in all the interpretation toolsets. The last place I worked, we internally had a seismic inversion tool that blew all the commercial suites out of the water. I'm currently contracting for an AI/ML service company that has a synthetic well-log tool that can apparently beat the pants off actual well-logging tools for a fraction of the cost (though I'm not a geologist or petrophysicist, so I can't personally verify this).

I think the problem is more that the media and advertisers like to paint the picture of a magical AI tool which will instantly solve all your problems and do all the work, instead of a fulcrum that makes doing the actual work significantly easier.


Bluntly, no. There hasn't been an improvement. At all.

We've been using machine learning in geology for far longer than it's been called that. Hell, we invented half the damn methods (seriously). Inverse theory is nothing new. Gaussian processes have been standard for 60 years. Markov models for stratigraphic sequences are commonly applied but again, have been for decades.

What hasn't changed at all is interpretation. Seismic inversion is _very_ different from interpretation. Sure, we can run larger inverse problems, so seismic inversion has definitely improved, but that has no relationship at all to interpretation.

Put another way, to do seismic inversion you have to already have both the interpretation _and_ ground truth (i.e. the well and a model of the subsurface). At that point, you're in a data rich environment. It's a very different ball game than trying to actually develop the initial model of the subsurface with limited seismic data (usually 2d) and lots of "messier" regional datasets (e.g. gravity and magnetics).


I am wondering (knowing nothing about this) if there is an issue with the approach to acquiring data that is putting AI in a difficult position. It is akin to trying to train an AI to walk in the footsteps of a geophysicist, rather than making new footsteps for the AI. I guess I would extend this to radiology too, since it seems to be the same issue.

Let me give an example:

People often mention that truck drivers are safe from automation because lots of last-mile work is awkward and non-standard, requiring humans to navigate the bizarre, atypical situations trucks encounter. Training an AI to handle all this is far harder than getting it to drive on a highway.

What is often left out, though, is the idea that the infrastructure can/will change to accommodate the shortcomings of AI. This could look like warehouses having a "conductor" on staff who commandeers trucks for the tricky last bit of getting onto the dock. Or perhaps preset radar and laser path guidance for the tight spots. I'd imagine most large-volume shippers would build entire new warehouses just to accommodate automated trucks.

A long time ago people noted that horses offered much more versatility than cars, since roads were rocky and muddy. How do you make a car that can traverse the terrain a horse does? You don't; you pave all the roads.


Automatic interpretation has been a thing for decades and the promise of replacing a geoscientist completely is always just over the horizon. Even with DL. The new tools are better yes, but honestly I wouldn’t invest in this space. Conventional interpretation is dead in the US. All the geos got laid off.

I'm going to call bullshit. No artificially generated well log is ever going to be better than a physically measured log.


I agreed with you up until the last paragraph. Generating data that cannot be told apart from any real data, cave-ins and all, is probably one area where this can succeed.

Your other comment about picking the horizon of interest is really on point; that's where automated interpretation, as it's been buzzworded to hell to date, has had no chance and has never lived up to how it was pitched. Many tools just made the problem worse.

The fact that this might change in some distant future -- well, it may be solved in some capacity, but is it really worth the effort, given that exploration has a limited future?


I have but one upvote to give, but as someone who worked as an interpreter and then moved on to the software side, this is the problem that 99% of people don't get.

You can train a DL model to pick every horizon, but you can’t train to pick the horizon of interest. Same with faults. Let’s not even get started with poorly imaged areas.


IMO part of the problem here is a misunderstanding on the part of deep learning people. They look at radiology, and they say "these people are just interpreting these pictures, we can train a deep learning model to do that better".

Maybe there's a bit of arrogance too, this idea that deep learning can surpass human performance in every field with enough data. That may be the case, but not if you've fundamentally misunderstood the problem that needs to be solved -- and the data you need to solve radiology, for instance, isn't all in the image.

Somewhat related: another area where DL seems to fail is anything that requires causal reasoning. The progress in robotics, for instance, hasn't been all that great. People will use DL for perception, but so far, using deep reinforcement learning for control only makes sense for really simple problems such as balancing your robot. When it comes to actually controlling what the robot is going to do next at a high level, people still write rules as programming code.

In terms of radiology and causal reasoning, you could imagine that if you added extra information that allows the model to deduce "this can't be a cancerous tumor because we've performed this other test", you would want your software to make that diagnosis reliably. You can't have it misdiagnose when the tumor is on the right side of the ribcage 30% of the time because there wasn't enough training data where that other test was performed. Strange failure modes like that are unacceptable.


Expanding on this, particularly regarding causal reasoning and rules, what I find especially puzzling is the desire to apply deep learning even in cases where the rules are explicitly known already, and the actual challenge would have been to reliably automate the application of the known, explicitly available rules.

Such cases include for example the application of tax law: Yes, it is complex and maybe cannot be automated entirely. However, even today, computer programs handle a large percentage of the arising cases automatically in many governments, and these programs often already have automated mechanisms to delegate a certain percentage of (randomly chosen, maybe weighted according to certain criteria) cases to humans for manual assessment and quality checks, also a case of rule-based reasoning. Even fraud detection can likely be better automated by encoding and applying the rules that auditors already use to detect suspicious cases.

The issue today is that all these rules are hard-coded, and the programs need to be rewritten and redeployed every time the laws change.
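One common mitigation is to move the rules out of the control flow and into data, so a law change becomes a config change rather than a rewrite-and-redeploy. A minimal sketch (hypothetical tax brackets, invented figures):

    # Rules-as-data sketch: the brackets live in a data structure (in
    # practice, a config file or database), not in if/else chains.
    RULES_2021 = [
        {"up_to": 10000, "rate": 0.10},
        {"up_to": 40000, "rate": 0.20},
        {"up_to": None,  "rate": 0.40},  # top band, no upper limit
    ]

    def tax_due(income, rules):
        due, lower = 0.0, 0.0
        for band in rules:
            upper = band["up_to"] if band["up_to"] is not None else income
            due += max(0.0, min(income, upper) - lower) * band["rate"]
            lower = upper
        return due

    print(tax_due(55000, RULES_2021))  # swap in RULES_2022 when the law changes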


There's a perception in the DL field that encoding things into rules is bad, and that symbolic AI as a whole is bad. Probably because of backlash following the failure of symbolic AI. IMO the ideal is somewhere in the middle. There are things you want neural networks for, and there are also things you probably want rules for. The big advantage of a rule-based system is that it's much more predictable and easier to make sense of.

It's going to be very hard to engineer robust automated systems if we have no way to introspect what's going on inside and everything comes down to the neural network's opinion and behavior on a large suite of individual tests.

> The issue today is that all these rules are hard-coded, and the programs need to be rewritten and redeployed every time the laws change.

The programs are probably not being rewritten from scratch. I would argue that the laws are, or basically should be, unambiguous code, as much as possible. If they can't be effectively translated into code, that signals ambiguity, a potential bug.


I wasn't alive in the 70s, but it feels like there's a counter-bias against expert systems borne out of those failures.

"If you're putting in rules, you're don't know how to build models."

But that's probably the difference between people having success with "AI" and banging their heads against the wall: do what works for your use case!


I once saw an AI tool for determining what needed to be reported.

I found this remarkable, as there were clear (yet complex) rules on what needed to be reported; otherwise even the regulator wouldn't know what it was supposed to check.


I recall that stories of IBM Watson's failures focused on how it was sold: just dump data into the machine and wonders come out.

Meanwhile, actual implementation customers weren't ready / were frustrated with how much data prep was required, how time-consuming it was, and in a lot of ways how each situation was a sort of research project of its own.

It seems like any successful AI system will require the team working with the data to be experts in the actual data, or in this case experts in radiology ... and take a long time to really find out good outcomes / processes, if there are any to be found.

Add the fact that the medical industry is super iterative / science takes a long time to really figure out ... that's a big job.


There's no free ride; ML is data-centric, and you've got to get up close and personal with the data and its quality. That means 90% of our time is spent on data prep and evaluations.

Getting to know the weak points of your dataset takes a lot of effort and custom tool building. Speaking from experience.


And maybe look for things that are not expected...

My dad went to take a shoulder x-ray in preparation for a small bit of surgery. In the corner of the image the radiologist noticed something that didn't look right. He took more pictures, this time of the lungs, and quickly escalated the case.

My dad had fought cancer, and it turned out the cancer had spread to his lungs. He had gone to regular checks every six months for several years at that point, but the original cancer was in a different part of his body.

For a year prior he'd been short of breath, and they'd given him asthma medication... until he went to get that shoulder x-ray.


As a cancer patient, that feels like negligence.


I agree. Essentially the same scenario has happened twice in my close circle since my dad.

Sadly it seems treatment here is very much focused on the organ, not the patient.

Hence why I tell people I come across who are diagnosed for the first time: learn where your cancer might spread to, and be very vigilant about changes/pain in those areas.


I guess some animals are also good at seismic interpretation. For radiology we first need to beat pigeons: https://www.mentalfloss.com/article/71455/pigeons-good-radio... (there was an HN post on this, I think)

Actually, mammography screening is done, to my knowledge, without any background information that could bias the decision. But here humans are fast anyway, and even pigeons don't promise a relevant price cut. When complicated decisions need to be made, e.g. on treatment, we will have other problems with AI...


We used AI to analyze seismic data in the DARPA nuclear test monitoring system in the 1980s. I don't think anything but a fully automated system was ever considered. That said, we had a large budget, great teams of geophysicists and computer scientists, and 38 data collection stations around the world. In my experience, throwing money and resources at difficult problems usually gets those problems solved.


Very different sort of seismic data, FWIW.

You're referring to seismology and deciding whether something is a blast or a standard double-couple earthquake. That's fairly straightforward, as it's mostly a matter of getting enough data from different angles. Lots of data processing and ambiguity, but in the end, you're inverting for a relatively simple mathematical model (the focal mechanism): https://en.wikipedia.org/wiki/Focal_mechanism

I'm referring to reflection seismic, where you're fundamentally interpreting an image after all of the processing to make the image (i.e. basically making a mathematical lens) has already been done.


I don't know anything about seismology, and I am going to put aside the money and focus on the math.

> The features you're interested in are almost never what's clearly imaged -- instead, you're predicting what's in that unimaged, "mushy" area over there through fundamental laws of physics like conservation of mass and understanding of the larger regional context. Those are deeply difficult to incorporate in machine learning in practice.

I was part of a university research lab over 15 years ago that was doing exactly this [1], with just regular old statistics (no AI/ML required). By modeling the variability of the stuff that you could see easily, you could produce priors strong enough to eke out the little bit of signal from the mush (which is basically what the actual radiologists do, which we know because they told us). It isn't a turn-key, black box solution like deep learning pretends to be. It takes a long time, it is highly dependent on getting good data sets, and years of labor goes into a basically bespoke solution for a single radiology problem, but the results agree with a human as closely as humans agree with each other. You also get the added bonus of understanding the relationships you are modeling when you are done.

From university lab to clinically proven diagnosis tool is of course a longer road, and I have not been involved in these projects for a long time, but my point is that the math problem on its own is tractable.

[1] http://midag.cs.unc.edu/
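For flavour, here is a minimal numerical sketch of that kind of approach (not the lab's actual method; all numbers invented): a population prior, learned from the clearly-imaged cases, pulls a noisy measurement from the "mush" toward what is plausible via a standard Gaussian conjugate update.

    # Sketch of a prior-strengthened estimate; invented numbers.
    prior_mean, prior_var = 5.0, 0.5**2  # learned from well-imaged cases
    meas, meas_var = 8.0, 3.0**2         # one noisy reading from the mush

    # Posterior mean is a precision-weighted average of prior and data.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / meas_var)
    post_mean = post_var * (prior_mean / prior_var + meas / meas_var)

    print(post_mean, post_var**0.5)  # ~5.08 +/- 0.49: the prior dominates
    # until the data is strong enough to pull the estimate away.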


Funny -- when I was young, my father, a geophysicist (now retired), took me along to seismological surveys, where we hit a plate with a hammer to generate shock waves and measure velocity (mostly for building tunnels through mountains). After a hard day of physical work, my father would draw these lines at specific points of the recorded dataset (sorry, I lack the proper terms and vocabulary; it was about 20 years ago). There are some patterns you can use to identify those spots, but most of it was based on his intuition and years of experience. Sometimes he let me do the interpretation, and all I had were those patterns, but many times he had to correct it because of what you describe (experience, terrain, environment, etc.)


I wonder how often these projects truly need someone with on-the-ground experience guiding them, since the textbook tasks, as you say, are easy even for humans.


Interesting perspective. What's your take on tools that use AI/ML to accelerate applying an interpretation over a full volume? For example: https://youtu.be/mLgKtmLY3cs


Bluntly, they're useless except for a few niche cases.

Anything they're capable of picking up _isn't_ what you're actually concerned about as an interpeter. Sure they're good at picking reflectors in the shallow portion of the volume. No one cares about picking reflectors. That's not what you're doing as interpreter.

A good example is the faults in that video. Sure, it did a great job at picking the tiny-but-well-imaged and mostly irrelevant faults. Those are the sort of things you'd almost always ignore because they don't matter in detail for most applications.

The faults you care about are the ones that those methods more-or-less always fail to recognize. The significant faults are almost never imaged directly. Instead, they're inferred from deformed stratigraphy. It's definitely possible to automatically predict them using basic principles of structural geology, but it's exactly the type of thing that these sort of image-focused "automated interpretation" methods completely miss.

Simply put: These methods miss the point. They produce something that looks good, but isn't relevant to the problems that you're trying to solve. No one cares about the well-imaged portion that these methods do a good job with. They automate the part that took 0 time to begin with.


You seem extremely biased against AI in general, to the point where I very much doubt anybody would benefit from hearing your opinions on it.


I work in machine learning these days. I'm not biased against it -- it's literally my profession.

I'm biased against a specific category of applications that are being heavily pushed by people who don't actually understand the problem they're purporting to solve.

Put another way, the automated tools produce verifiably non-physical results nearly 100% of the time. The video there is a great example -- none of those faults could actually exist. They're close, but all require violations of conservation of mass when compared to the horizons also picked by the model. Until "automated interpretation" tools start incorporating basic validation and physical constraints, they're just drawing lines. An interpretation is a _4D_ model. You _have_ to show how it developed through time -- it's part of the definition of "interpretation" and what distinguishes it from picking reflectors.

I have strong opinions because I've spent decades working in this field on both sides. I've been an exploration geologist _and_ I've developed automated interpretation tools. I've also worked outside of the oil industry in the broader tech industry.

I happen to think that structural geology is rather relevant to this problem. The law of conservation of mass still applies. You don't get to ignore it. All of these tools completely ignore it and produce results that are physically impossible.


Incidentally, I don't even mean to pick on that video specifically. I actually quite deeply respect the folks at Enthought. It's just that the equivalent functionality has been around and been being pushed for about 15 years now (albeit enabled via different algorithms over time). The deeper problem is that it usually solves the wrong problem.


I'm confused. You're saying they haven't incorporated basic physics into their model? How come? They don't have any geophysicists on staff?


>Many companies/etc keep promising to "revolutionize"

It is all about the money baby

If u fall and go to the ER, u get an xray to rule out a fracture. Many times the radiologist will read it after you leave the ER, yet he gets paid.

If u think ML cant read a trauma xray, and offer a faster service, you are wrong!! The problem is who gets paid, and who is paying the malpractice insurance

Check out in China, they have MRI machines with ML built in. U get the results before u get dressed!!


Who do you think I'd rather go after for malpractice? Someone that went to school for many years dedicated to medicine, or the idiot stiffs behind a machine that can't even spell the word "you"? That is in large part also what it is really about.

Having said that I do ML research on cross-sectional neuroimaging, and basically everything you said is nonsense.


So it seems that instead of an image recognition algo, we need to feed years of university education into the AI.


Radiologists are not even trying. They treat their methods like FDA-approved medical devices. Even basic image segmentation to help with 3D structure recognition is off-limits. The benefits of neural networks in diagnostic radiology will not occur in one shot, and I don’t think it will happen at all in the United States until people start sending their data elsewhere. But good luck getting it. I just got my CT scan from a total mis-diagnosis that resulted in an unnecessary surgery. It came on a CD that won’t read in the only drive I have access to. And even that is just images. It’s not possible to get the actual data. This is not a failure of DNN. This is active AMA hostility toward technology, and not just in radiology. Just watch, people are going to start going elsewhere for medical care. They will do everything they can to get insurance requirements, subsidies, and laws against it, but they will lose. They are a dishonest Luddite cartel, and they’re hurting people.


Radiologists most definitely are trying. Our institute's entire medical imaging research arm is driven by several very motivated practicing radiologists. You just misunderstand what it is that they do, fundamentally. Diddling with some pics and publishing papers is just not in the same league as making medical diagnoses. A lot is riding on their understanding every little artifact of the algorithm/approach that gives them a modified image to interpret. They will never accept black-box automagic, and they will always evaluate the benefits of novel algorithms together with the drawbacks of having to get used to their quirks and opaque artifacts, possibly with outcomes impacted and/or lives lost in the process. Where the risk/benefit analysis is clear, they do adopt plenty of common-sense automation tools for a very simple reason - they get paid per scan read, so their time is (lots of) money, to them.


I don't think the blame falls on practicing radiologists, but the OP is absolutely correct that medical data is way too inaccessible. It is often impossible to get your own raw data, and even worse it is sometimes impossible to share that data with another doctor. Two large hospitals in major US cities apparently can't share EEG data because they use different software to read it. Guess who wins when all your prior data gets essentially thrown out? It's not the insurance companies, and it's certainly not you - it's the new hospital.

How realistic it is to have ML involved in reading radiology results in theory I don't know, but the larger point is that in practice it is sure as hell not going to happen until patients have real access to do what they please with their own data. Not only am I pissed I can't have my own EEG data, but I also would gladly contribute it to a database for development of new tools, or any other research study that asked. But there is essentially no way to even do that, at least at either institution I've asked. Just think of all the data that is being utterly wasted right now!


>I also would gladly contribute it to a database for development of new tools, or any other research study that asked

This should be a standard question in the medical file like those related to organ donation.


The number of patients that are interested in viewing or accessing their own data has to be negligible. Last time I got an X-ray they actually gave me a DVD of the imaging itself. I remember looking at it; I thought it was neat, but ultimately there was little use in there for me. I don't know what % of patients have bothered to look at it.


Viewing their own raw data may be negligible, but sharing between medical professionals is a relatively common and necessary practice. Currently it is extremely difficult to get one doctor to share medical information with another, and it shouldn't be.


Provider organizations are understandably reluctant to accept removable media from unknown sources due to the risk of malware. Many of the computers that doctors use don't have DVD drives or they're disabled for security.


Cloud is still a thing.


It's not about the patient reviewing their own data as much as it is about the patient having easy access to their data and can easily share that data with other consumers of it (i.e. some AI based interpretation service)


'Easy access' is scary for hospitals because it means increased possibility of HIPAA violations.


The P in HIPAA stands for portability. It should be a HIPAA violation for them to not give me the original data sets for my healthcare when I request them in person.


The data will get better

Healthkit ftw


> patients have real access to do what they please with their own data... contribute it to a database...

Misconception #2 is that there's some "data moat" or whatever.


I am aware there isn't, what I'm saying is there should be - particularly for dense datatypes like EEG that we probably aren't fully leveraging at the moment.


That's false. Both Epic and Cerner, the largest EMR companies, have databases of millions and millions of patients. They're used for research, and stuff.


Yeah but all the EMR data I've seen have just been doctor notes, some prescription info, standard clinical scales, etc. Not that this can't be interesting, but there's not really a major database for richer data types like imaging or EEG (AFAIK)


I left radiotherapy for this reason: they haven't the faintest idea what they are doing, and are constrained by whatever manufacturers managed to get the FDA to approve, which is 20 year old tech at best. Not that radiotherapists/logists know how to deal with modern tech... A small hint: our software repo was migrated TO svn in 2014.

When people do 'AI' in radiology, they mean cobbling together little scripts in Tensorflow. Sure, it's a beginning, but I've seen entire institutes being unable to get to grips and move past that stage. You wouldn't be able to tell from their slides, of course.


Not sure how modern AI is going to help here.

> they haven't the faintest idea what they are doing, and are constrained by whatever manufacturers managed to get the FDA to approve (...)

If you take this layer of proprietary magic nobody understands, and add DNNs to the mix, you'll get... two layers of proprietary magic which nobody understands (and possibly owned by two different parties).


If only there were two layers of proprietary magic ;)


@brnt It's great to see someone else from radiotherapy on HN. I'd love to chat if you're down. Shoot me an email.


I've left the field for greener pastures ;) Looked at your website, interesting, but curious how you deal with all the legacy software. Java was considered quite recent at one of the places I worked, they were rolling Delphi in 2019 and were not planning to switch!


It is an interesting market, for sure. The fact that these vendors are reluctant to upgrade tech stacks actually creates a bunch of opportunities. (Although we wish they'd just upgrade their Windows version:)


Also in the US, I had a similar 'awakening', if you will, after being at the side of a loved one for a little over two years of intensive medical intervention. I've been left quite bitter and ultimately distrustful about where things stand today.

That said, I do recognize that I have the advantage of not making life and death decisions, and I have no idea what it's like to weigh the advantages of innovation against the risk of untimely death or significant impairment/expense that comes with advancing the frontiers of medicine.


Couple of points: The 21st Century Cures Act has recently expanded rules for information portability, which will make it much easier to get access to your data in the future. The challenge here has nothing to do with radiologists hoarding your data. The lack of interoperability typically stems from limitations of electronic health information systems. Most radiologists would love to be able to look at your scans from the multiple hospitals where you were imaged previously, but technical barriers currently make that difficult.


I don't blame the practicing radiologist, but I also don't buy this is purely a technical issue. The hospital is quite literally incentivized to have you repeat tests. I highly doubt they care to make data accessibility/portability a priority. Hopefully these new rules will force their hand.


You are right, to an extent. Health systems and EHR vendors both have historically had an economic disincentive to share data. Think “ecosystem lock-in”. My impression is that things are gradually changing for the better.


There is an IHE standard profile that specifies how data has to be laid out on portable media (USB, CD). All PACS systems follow it, and it's basically a non-problem today.

I don't doubt your anecdote, but I don't think it's a very common occurrence.

https://wiki.ihe.net/index.php/Cross-enterprise_Document_Med...


What do you mean it's "off limits"? There are several companies developing ML models that process CT scans, for example: https://www.radformation.com/autocontour/autocontour


But how many have commercially available/successful products?


You have the legal right to obtain a copy of your medical data. Providers can require you to pay reasonable administrative fees for making data copies but they do have to give it to you. If they don't comply then you can file a formal complaint.

https://www.hhs.gov/hipaa/for-professionals/privacy/guidance...


That's complicated. I work at a very large health care org that employs ~1000 radiologists. We definitely would love to have a good solution, but there just aren't good enough vendor solutions and even working with vendors it's hard to get things to an acceptable state. In that sense I have more hope in derm.


The problem is with the money and power. Doctors will never let go of their income for a data scientist. This type of invention will never start in the US; it will start in a communist country where leaders can move mountains, or in Africa where there is a big lack of doctors.


US based companies have been shipping AI/ML tools for nearly 30 years at this point, which undermines your argument.

The biggest problems are data access (big) and data quality/labelling quality (bigger).

Medical conservatism is a real issue, but nowhere near as big as those. There isn't a big cabal trying to keep AI out, it just hasn't worked very well so far.

FDA is reasonably responsive (for an agency like that) these days, and has been doing planning for more of this sort of tech: https://www.fda.gov/medical-devices/software-medical-device-...


I don't buy this, doctors are not one united body. Doctors with an AI tool will be more efficient than doctors without it. If the tool has a measurable positive impact on patients (outside of cost reduction) then it will become necessary to have the AI in order to get the patients.


> a CD that won’t read in the only drive I have access to.

You can get an external cdrom drive for about 20 bucks.


Radiologist here with an interest in this topic. I think the problem with most AI applications in radiology thus far is that they simply don't add enough value to the system to gain widespread use. If something truly revolutionary comes along and it causes a clinical benefit, healthcare systems will shift to adopt it in a few years. AI just hasn't lived up to its promise, and I agree it's because most of the people involved don't get that the job of a radiologist is way more complex than they think it is.

Every time I open a journal, I see more examples of either downright AI nonsense ('We used AI to detect COVID by the sounds of a cough') or stuff that's just cooked up in a lab somewhere for a publication ('Our algorithm can detect pathology X with an accuracy of 95%, here's our AUC').

Hyperbolic headlines - Geoff Hinton saying in 2016 that it's time to stop training radiologists springs to mind - don't help: AI gets overpromised, and then the field shoots itself in the foot when it underdelivers.

Earlier discussions about radiologists being self-interested in sabotaging AI are tinfoil hat stuff - if I had an AI algorithm in the morning that could sort out the 20 lung nodules in a scan, or tell me which MS plaque is new in a field of 40, I'd be able to report twice as many scans and make twice as much money.

Companies come along every month promising their AI pixie dust is going to improve your life. It probably will, but 10 years from now, not today. The AI Rad companies are caught in an endless hype cycle of overpromising and under delivering.


> self interested in sabotaging AI is tinfoil hat stuff

Agree, this is nonsense. Not a radiologist, but I have worked with many.

The big barriers to AI impact in radiology are a) translation is a lot harder than people think, b) access to enough high quality data with good cohort characteristics, c) good labeling (most of the interesting problems aren't really amenable to unsupervised), and d) generalization, as always.

It doesn't help that for the most part medical device companies aren't good at algorithms and algorithms companies aren't good at devices, lots of rookie mistakes made on both sides.


Also PACS isn't designed to implement algorithms. PACS is legacy software that is, by and large, terrible.


> Also PACS isn't designed to implement algorithms.

That doesn't really matter too much from the implementing-ML point of view; you can just use it as a file store. DICOM files themselves are annoying too (especially if they bury stuff in private tags), as are HL7 (and EMR integrations), but... that's mostly just work.

Agree the viewers lack flexibility, but that's a lot more solvable than, say, the morass of EMR. If you are just looking at image interpretation, visualizing things isn't so bad -- if you had the models to visualize.
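For what it's worth, the "use it as a file store" part really is mostly just work; a typical sketch with the pydicom library (the file path is hypothetical):

    # Pulling pixels and metadata out of a DICOM file with pydicom.
    import pydicom

    ds = pydicom.dcmread("study/series001/img0001.dcm")  # hypothetical path

    pixels = ds.pixel_array                             # numpy array, model-ready
    print(ds.Modality, ds.get("StudyDescription", ""))  # standard tags: easy
    for elem in ds:
        if elem.tag.is_private:                         # vendor private tags: not easy
            print(elem.tag, elem.VR, elem.value)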


"Expert Systems" that could diagnose and treat disease were technically successful in the 1970, see

https://en.wikipedia.org/wiki/Mycin

This technology never made it to market because of various barriers; at that time you didn't have computer terminals in a hospital or medical practice.

Docs want to keep their feeling of autonomy despite much medical knowledge being rote memorization and rule-based.

The vanguard of medicine is "patient centered" and tries to feed back statistics to help in decisions like "what pill do I prescribe this patient for high blood pressure?" -- the kind of 'reasoning with uncertainty' that an A.I. can do better than you.

As for radiology the problem is that images are limited in what they can resolve. Tumors can hide in the twisty passages of the abdomen and imaging by MRI is frequently inconclusive in common sorts of pain such as back pain, knee pain, shoulder pain, neck pain and ass pain.


AI Winter (https://en.wikipedia.org/wiki/AI_winter).

    1966: failure of machine translation
    1970: abandonment of connectionism
    Period of overlapping trends:
        1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
        1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
        1973–74: DARPA's cutbacks to academic AI research in general
    1987: collapse of the LISP machine market
    1988: cancellation of new spending on AI by the Strategic Computing Initiative
    1993: resistance to new expert systems deployment and maintenance
    1990s: end of the Fifth Generation computer project's original goals
I got my bachelor's in 1990, and took a lot of classes in AI around that time. Have you ever worked with an expert system like Mycin? It is really quite difficult to pull out an expert's knowledge, rules of thumb, and experience-based intuitions. Difficult and expensive. Systems that were not tightly focused on a limited domain were also generally not satisfactory, and those that were tightly focused failed hilariously if any one parameter fell outside the system's model.

Yes, doctors have a lot of cultural baggage that reduces their effectiveness. But there's a completely different reason why AI has not replaced them. After many, many attempts.


Connectionism is back with a vengeance. It still struggles with text but vision problems like 'detect pedestrian with camera and turn on the persistence-of-vision lightstrip at the right time' are solved.

Many expert systems were based on "production rules" and it's a strange story that we have production rules engines that are orders of magnitude more scalable than what we had in the 1980s. Between improved RETE and "look it up in the hashtable" it has been a revolution but production rules have not escaped a few special applications such as banking.
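For readers who have never seen one: a production rules engine is essentially a loop of "match rule conditions against working memory, fire a matched rule, repeat until nothing changes"; RETE's contribution is doing that matching incrementally rather than rescanning everything each cycle. A naive sketch:

    # Naive forward-chaining sketch. Real engines (RETE) avoid rescanning
    # every rule against every fact on each cycle.
    facts = {"transfer_over_10k", "new_account"}
    rules = [
        ({"transfer_over_10k"},             "large_transfer"),
        ({"large_transfer", "new_account"}, "flag_for_review"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)  # "fire" the rule
                changed = True

    print(facts)  # flag_for_review is derived in two steps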

Talk to a veteran of a "business rules" project and about 50% of the time they will tell you it was a success; the other 50% of the time they made mistakes up front and went into the weeds.

Machine learners today repeat the same experiments with the same data sets... That doesn't get you into commercially useful terrain.

Cleaning up a data set and defining the problem such that it can be classified accurately is painful in the exact same way extracting rules out of the expert is painful. It's closely related to the concept of "unit test" but it is still a stretch to convince financial messaging experts to publish a set of sample messages for a standard with a high degree of coverage. You can do neat things with text if you can get 1000 to 20,000 labeled samples, but most people give up around 10.


"AI" can't even perform as well as humans (despite plenty of promises) in a field like radiology. The idea of an AI family doc system or ER doc system actually making diagnoses (instead of being a glorified productivity tool*) is downright hilarious. Lots and lots of luck interpreting barely coherent, contradictory and often misleading inputs from patients, dealing with lost records or typos, etc.

Doctors don't get paid the big bucks for rule-based solutions based on rote memorization. They get paid the big bucks to understand when it's inappropriate to rely on them.

* which IS a worthy goal to aspire to and actually helpful


> "AI" can't even perform as well as humans (despite plenty of promises) in a field like radiology. The idea of an AI family doc system or ER doc system actually making diagnoses (instead of being a glorified productivity tool*) is downright hilarious. Lots and lots of luck interpreting barely coherent, contradictory and often misleading inputs from patients, dealing with lost records or typos, etc.

I think the future of that might be with wearables like the Apple Watch. While it probably won't replace doctors wholesale, applying ML to the data gathered continuously from various sensors seems like a much better promise to me.


> They get paid the big bucks to understand when it's inappropriate to rely on them.

An automated system could record and analyze more outcome and biometric data than a group of doctors, over time obtaining more experience about when to apply the various medical rules and when not to. Human experience doesn't scale like a dataset or a model.

I bet some diagnostics could be correctly predicted by a model that a human can't understand, especially if they require manipulating more information than a human can hold at once.


> Docs want to keep their feeling of autonomy despite much medical knowledge being rote memorization and rule-based.

There is so much arrogance and ignorance in this thread.


I've discussed this with both doctors and radiologists, and they all seem to back the exact same mantra. So while that might just be anecdotal evidence, I have a difficult time seeing how your blanket dismissal has any value in the discussion. Do you have any data, evidence, or even just arguments to put forth to support your opinions?


Well I work day to day as a doctor and I know for myself what the job entails, and I don't know where to start with a comment like this.

I've just finished a clinic this morning. Interestingly there was a relevant case to this discussion. I have someone being treated for an incurable lymphoma with a tablet chemotherapy. Has been on it for nearly 5 years. She had a scan recently which has shown some pulmonary fibrosis. She has some new respiratory symptoms. The question is, has this drug caused her lung fibrosis? Is it even fibrosis at all - it can look similar to cardiac failure, and she has that too. There are around 5 CT scans over the course of several years, I had to bring them up and cross-reference it to the times of various treatments (NB I'm not a radiologist) to assess whether I thought the progression of it fitted with the drug. To complicate matters I know this lady has especially bad lymphoma which relapsed quickly after her first treatment, so if I stop her current treatment, she would probably progress and die of the disease, though I'm unsure what timeframe that would be (it depends on the individual disease). So I'm reluctant to stop it. I might take her back to our MDT where I can discuss it with our radiologist, who will pore over the scans and try to see whether we can blame the drug, and whether it could be explained moreso by the heart failure.

This was a single 10min follow up patient in a busy clinic, and I only write this out to give a glimpse of the thought processes that go through decision making for one follow up patient in a clinic of 30. This isn't rote learning or rule-based treatment. It is very much patient-centred and very specific for this individual. It requires discussion with the patient - what they are happy to tolerate, dealing with the unknowns of what would happen if we continued it or stopped.

Personally I would love for machine learning to be part of what I use to help guide decisions, there are particular situations where I know it would have helped. But it is irritating to read your comment and to not only be accused of being obstructive for AI adoption, but also to be misrepresented as some kind of robot that looks up every disease in a medical textbook in my head and blindly follows some protocol for deciding what to do.

Most people can be trained to drive fairly quickly, how about we figure out AI driving first before attempting to simulate a job that has taken me 15 years of training to get to a position where I can take these decisions in clinic?


> The vanguard of medicine is "patient centered" and tries to feed back statistics to help in decisions like "what pill do I prescribe this patient for high blood pressure?" -- the kind of 'reasoning with uncertainty' that an A.I. can do better than you.

I think this illustrates why AI in medicine is a hard problem. I'm not actually sure this is a clear cut AI/Statistics problem.

Mainly because "what pill do I prescribe this patient for high blood pressure?" has lots of hidden questions.

AI solves "what pill will statistically lead to a higher survival rate", but that is not the only consideration.

Often doctors have to balance side effects and other treatments.

What is easier for the patient: a lifestyle change to reduce blood pressure, or enduring the side effects of the pill?

This type of question is quite difficult for our AIs to answer at the moment.

Most drugs have side effects that are hard to objectively measure the impact of.


There are also coverage rules to consider. Payers often require providers to try less expensive treatments first and will only authorize more expensive pills if the patient doesn't respond well.


The metric of radiology jobs as a sign of the lack of AI revolution in the field seems poor to me. Sadly, much of our medical infrastructure (and the jobs it creates) only have a very tenuous relationship to the actual care and the quality that it delivers. Rather, most of the infrastructure tries to optimize for the billing and the legislation that surrounds it.

One of the ways to immediately see this is for us, a technical crowd, to puzzle on why it seems that Moore's law doesn't seem to affect medical technology... AT ALL. Some of the same procedures using the same machines from decades ago today cost A LOT more than they used to, for instance.

This isn't to say that this AI revolution in radiology hasn't been underwhelming; I just think that using this job metric is a poor indicator of the technology's capability.


The role of the radiologist isn't just mapping image[range_x, range_y, range_z] to disease -- it's also including a vast amount of Bayesian priors on the rest of the patient's notes, and their referring colleagues' hints of an indication.

For example, often the question isn't just "does this person have mitral valve regurgitation yes/no", it's more along the lines of "is there evidence from this cardiac MRI scan that their mitral valve regurgitation is significant and able to explain the symptoms that we have -- and if so, is it amenable to treatment". That's a totally different question -- whether the symptoms are beyond what would be expected for the patient, and whether there is a plausible mechanism, are second-level radiological questions, well beyond the level of "please classify this stack of images into either healthy, cyst, or tumour". Another random example would be the little old lady who comes in with breathlessness at rest: she may well have a large, slow-growing lung cancer (that any AI algorithm would easily diagnose) that she may well die with or of, but the acute dyspnoea could be down to an opportunistic LRTI that remains treatable with a course of antibiotics (and visible on a plain film chest x-ray). Capturing that sort of information is a lot, lot harder.

You're also forgetting that the cost of an expensive imaging modality like MRI or CT is amortised over 10 years, and that -- by far -- the biggest cost of running the service is the staff. The doctors do more than push buttons. In many services, actually, they don't acquire the scans or interact with patients at all.


Agreed, that's why most AI/ML in radiology is limited to critical findings, identifying acute areas for the radiologist to review; it's not making the diagnosis itself.

And on the EHR/history side, there's ML starting to be used to organize and highlight relevant info so the rad doesn't have to go searching for it.

These are both tools for the radiologist to interpret exams faster, and more accurately.

It's not taking them out of the picture.

Eventually it's likely to happen... but not where "AI" is today


Agreed, but some questions are also a lot easier and a lot more common. In x-ray, looking for a bone fracture is a single task that requires no information about the patient and can be done by an algorithm.
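For that kind of single, well-posed task, the standard transfer-learning recipe applies; a minimal sketch (dataset path and folder layout hypothetical, and a real system would need validation, calibration, and regulatory work on top):

    # Sketch: fine-tune a pretrained classifier for fracture / no-fracture.
    import torch, torch.nn as nn
    from torchvision import models, transforms, datasets

    tfm = transforms.Compose([
        transforms.Grayscale(3),        # x-rays are single-channel
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
    ])
    data = datasets.ImageFolder("xrays/train", transform=tfm)  # fracture/, normal/
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    model = models.resnet18(pretrained=True)
    model.fc = nn.Linear(model.fc.in_features, 2)  # two classes
    opt = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for x, y in loader:  # one epoch shown
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()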


Indeed. And in my country, nurse-practitioners can diagnose and manage simple uncomplicated fractures, for example.


to be fair, Geoff Hinton invited this comparison when he made that quote, which has been repeated ad infinitum over the past 5 years and has probably brought a lot of existential dread to radiologists. The AI field should have repudiated it harder; instead they embraced it because it was flattering.


I think that the role of radiologists in the medical system is misunderstood. Radiologists are consultants. Yes, in some cases - many cases, even - you just want a result: an answer to a specific, common question from an imaging study. And in those cases, I am sure that deep learning-based readings will do a fine job. But for more diffuse inquiries, or for times when there is disagreement or uncertainty over a reading, radiologists are wonderful colleagues to engage in discussion.

I'm not super interested in predicting employment trends, but it's hard to imagine a world where the radiologist-as-consultant disappears.


I really respect the intensity of training all medical practitioners have and the responsibility society puts on them. However, I think there is an urgent need to reform medical systems to leverage all the new trends in a responsible manner. Augmenting the capabilities of doctors is one way, but better, frictionless, anonymised data sharing can also be very useful. The catch, beyond incumbents, is that it is difficult to determine the winners and losers of new approaches, and larger players likely have better chances of success than many smaller players.


That, and trying to standardize a data format for something as complex as a medical record


It's an oversimplification to say AI will replace xyz job.

It seems more likely that AI will simply sift through the data more thoroughly and look holistically to catch things a radiologist might miss.

A radiologist, for example, might miss spotting a small tumor in an x-ray taken for an unrelated hip injury.

AI has a lot of complementary value.


It is a good point, but I think one part is a bit backward in an ironic way. One reason AI hasn't replaced radiologists is that radiologists are typically very good physicians, and specifically do not look at images in isolation but review the record, talk with the physicians, sometimes the patients too, etc. So it's actually backwards (in some cases) -- AI struggles because it looks at the image in isolation, while the radiologist is looking at the patient more holistically.


Radiologists don't talk to patients (usually), so there's no reason why AI cannot be given all the same patient data. ...although reading doctors' notes is probably a whole 'nother AI program.


> .although reading doctors' notes is probably a whole 'nother AI program.

yep. One that also has a long history, and a lot of current players -- and nobody has really good traction there either.


But aren't we already over-diagnosing some cancers? Spotting more tiny tumors in unrelated images might do more harm (through procedures/treatment) than ignoring them. I'm not sure if we're really better off detecting every anomaly in someone's body.


Because modalities like MRI are non-ionizing and therefore not intrinsically harmful, I think it is reasonable to consider a wild extreme: in some future, what if a large group of people underwent imaging every month or every year. It's possible to imagine gaining a very good understanding of which lesions have malignant potential and which ones don't.

The transition period that we are in now - where we are gaining information but not yet sure how to act on all of it - is painful. There are a lot of presumably unnecessary follow-up procedures. But it's possible that at some future point, we'll understand that 0.8mm nodules come and go throughout a lifetime and don't merit any action, whereas {some specific nodule in some specific location} almost always progresses to disease.

Obviously what I'm describing is research, and so I'm not saying that we should treat clinical protocols differently right now. But I think it's not too hard to imagine that we can get to a point where we have a very good idea about which lesions to follow/treat and which lesions to leave be.


Why would we not want to know about a tumor in the body? I assume competent doctors will assess the risk of such a thing, but knowing about it is better than not.


Doctors will optimize for patient outcomes, usually by doing all they can. Sometimes, this doesn't scale well. For example, the US Preventive Services Task Force stopped recommending routine PSA screening among asymptomatic patients to detect prostate cancer in 2012. They based their decision on a careful review of medical research, noting the screening didn't have much of an effect on mortality but could cause stress or invasive follow-up tests. Urologists generally opposed the decision. The USPSTF has since walked it back to, "Talk about the risks and benefits." I've looked at survey results for my state, and the numbers indicate a good proportion of men are told the benefits of a PSA but not any risks.

Patients are even less reasonable. If you tell somebody they have a tumor, they will now have a constant stress. If you say "cancer," they'll likely undergo expensive and potentially harmful treatment, even if "watch and wait" was a totally valid choice (e.g., slow-developing prostate cancer for very old men). Remember how Angelina Jolie had a double mastectomy after being told she had a good chance of developing breast cancer? That behavior would lead to a lot of unnecessary pain, debt, and lower-quality lives if it became normal.

It'd be hard if not impossible to ask doctors not to share knowledge about a tumor with patients. But in some cases we intentionally ask them not to go looking for tumors, because the expected value of a positive result is a negative impact.


Everyone gets cancer eventually, it's inevitable if you live long enough. There's no point in knowing that a small, slow growing tumor will kill you in 10 years if a heart attack is going to kill you in 5 years anyway. Knowing about the tumor just creates more psychological stress and potentially extra unnecessary medical treatments for no benefit.


I don't see radiology work decreasing either. Instead, I think it will serve more people but at lower cost per person. No one will skip medical services if they can afford them, but currently prices are high. Imagine a future where a radiologist serves 10x as many patients as before by leveraging smart technologies, for similar overall compensation.


I agree with you and with the sibling comment!


> Yes, in some cases - many cases, even - you just want a result to an answer to a specific, common question from an imaging study

And these questions are already outsourced to India.


The original article referred to by the blog post is here from last week-- https://qz.com/2016153/ai-promised-to-revolutionize-radiolog... .

The conclusion is that AI will revolutionize radiology. It's just that nobody knows when. And it's not like there's some socioeconomic or whatever barrier preventing AI from being used (as an aside, there are barriers of course)-- it's simply that AI isn't good enough yet.

It's not a surprise to anyone who relies on radiologists and has reviewed the current AI state of the art. Yes, with machine X on patients meeting criteria Y, you can rule out specific disease Z. But the algorithms don't generalize very well. It's like declaring you'll have self-driving cars in 5 years because one can drive itself straight down a highway in sunny Arizona while only occasionally causing a fatal crash.


Am I missing something, or does this look like poor uptake for reasons (possibly) other than performance? The article cites a lack of use as justification for the assumption that the model doesn't work. It might just take time.

Especially in medicine. A century (+) ago, hand-washing took time to adopt, unless I'm misremembering.


Both that and "people refuse to use technology that makes them redundant" indeed seem to be hinted at. However, one of the quotes says that poor performance is the reason and if you click through to the original article, it seems fairly clear that there is a problem with the existing systems adapting poorly to different setups.


It's not about "poor performance", in the same way as poor IDE performance isn't really the root cause of my laptop's stubborn inability to write good Python code. Radiology is about using a generalist medical education to diagnose (or be instrumental in diagnosing) patients. Pattern matching or statistical information are a rather modest subset of that skillset.


But the article is right in saying that AI advocates claimed there was no need for any new radiologists.

I know this from the field of vehicle damage assessment: AI is good but not good enough to take over... and that has been the case for a long while, yet every now and then a new company/product comes along saying the future is AI and there's no need for human effort.


True. I was internalizing "no new need" as hyperbole because it didn't make sense (given the reality of medicine), but that's my mistake.

On re-reading, a highly specialized and entrenched workforce having a 30% uptake on a new technology in only a few years seems phenomenal.


Advancements in medicine happen in spurts because of the regulatory review process and risk aversion.

The advance must either be VERY significantly better to warrant the approval process, OR an extremely low risk incremental change.

So what we end up with is this sputtering of tiny and big advances.


With digital lightboxes, 3d imaging, automated segmentation and other workflow improvements, a lot of the "low hanging fruit" has already been removed from the process.

When my doctor thought I might have a stress fracture, I went for x-rays. With my stint in medical imaging 10 years previous, I could tell at a glance I didn't have a stress fracture. The radiologist's report was a brief "does not present with stress fracture nor any other visible issue". AI is not going to eliminate this; a radiologist is still going to need to spend 30 seconds to "sign" the diagnosis, so it's not going to take much out of the system.

The real work in radiology is the hard cases, ones that require diagnosis and consulting with other specialists. If AI can help an orthopaedic surgeon plan a surgery for a car accident victim, then it will start replacing radiologists.


If:

1. I was really concerned about a stress fracture

2. and the history and examination findings strongly suggested one

3. and if it would make a difference to managing the case,

then I would have escalated to MRI / nuclear scan +- SPECT fusion (choice between the two depending on time frame and affordability to the patient).


A few years back I contracted for an AI startup. Long story short, we ran a simple test comparing one annotator radiologist with 15 years of experience against another (with a similar amount of experience) over 50 or so CT scans. They agreed only about 60% of the time, lol -- and I mean on "things" of easily spottable size, annotated as potentially malignant nodules by one and dismissed entirely as "scars" by the other.

That's when I knew we did not know wtf we were doing.
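
For anyone sanity-checking their own annotation sets: here's roughly how we'd quantify that today. A minimal sketch with made-up labels (not our actual data), using scikit-learn:

    # Chance-corrected inter-reader agreement on per-scan labels.
    # The label lists below are hypothetical stand-ins for the two readers' calls.
    from sklearn.metrics import cohen_kappa_score

    # 1 = "potentially malignant nodule", 0 = "scar / nothing", one entry per scan
    reader_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
    reader_b = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]

    raw = sum(a == b for a, b in zip(reader_a, reader_b)) / len(reader_a)
    kappa = cohen_kappa_score(reader_a, reader_b)  # corrects for chance agreement
    print(f"raw agreement: {raw:.0%}, Cohen's kappa: {kappa:.2f}")  # 60%, ~0.23

60% raw agreement on a binary call works out to a kappa in the "fair" range at best -- not much better than chance, which is why that number was so alarming.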


Reader variability is one of the many things that make this stuff a lot harder than it looks from the outside.


My take on it was that we are not as far removed from witch doctors when it comes to medicine as we would like to believe. Hard data on the efficiency and performance of human doctors is very far from hard, and their ability to diagnose things that are not blatantly obvious (the kind about to kill you unless you get lucky and surgery or chemo works, or doesn't, and we have no clue why) is very limited.

There is a huge amount of "we want to believe"... and survivorship bias going on in cancer treatment. Lots of it is very harmful as well.


I believe that the problem is psychological.

Many years ago, back in the 1990s, wavelet based algorithms were able to outperform humans on detecting tumors. The thing is that the algorithms were better on the easy parts of the mammogram, and worse on the hard parts. So researchers thought that humans plus software should do better still, because the humans would focus on the hard parts that they did better and the software would catch the rest.

Unfortunately according to a talk that I was at, it didn't work that way. It turns out that radiologists already spend most of their time on the hard parts. So they quickly dismissed the mistakes of theirs that they found as careless errors, and focused on the mistakes of the algorithm in the hard part as evidence that the algorithm didn't work well. And the result was that the radiologists were so resistant to working with the software that it never got deployed.

For the same psychological reason I expect radiologists to never voluntarily adopt AI. And they will resist until we reach a point that the decision is taken out of their hands because hospitals face malpractice suits for not having used superior AI.


The QZ article is narrowly correct but widely misleading. It almost willfully ignores the momentum and direction.

In reality, radiologists will not be summarily replaced one day. They will get more and more productive as tools extend their reach. This can occur even as the number of radiologists increases.

Here's a recent example where Hinton was right in concept: recent AI work for lung cancer detection made radiologists perform better in an FDA 510k clearance.

20 readers reviewed all 232 cases using both second-reader and concurrent first-reader workflows. Following the read according to both workflows, five expert radiologists reviewed all consolidated marks. The reference standard was based on reader majority (three out of five) followed by expert adjudication, as needed. As a result of the study's truthing process, 143 cases were identified as including at least one true nodule and 89 as having no true nodules. All endpoints of the analyses were satisfactorily met. These analyses demonstrated that all readers showed a significant improvement in the detection of pulmonary nodules (solid, part-solid and ground glass) with both reading workflows.

https://www.accessdata.fda.gov/cdrh_docs/pdf20/K203258.pdf

(I am proud to have worked with others on versions of the above, but do not speak for them or the approval, etc)

The AI revolution in medicine is here. That is not in dispute by most clinicians in training now, nor, from all signs, by the FDA. Not everyone is making use of it yet, and not all of it is perfect (as with radiologists - just try to get a clean training set). But the idea that machine learning/ai is overpromising is like criticizing Steve Jobs in 2008 for overpromising the iphone by saying it hasn't totally changed your life yet. Ok.


There are indeed areas where it's being used to complement radiologists as a second review and reduce the recall rate https://www.kheironmed.com/news/press-release-new-results-sh...


This is how it needs to be approached. AI systems and rule based systems that work together with the clinicians to enhance their decision making ability instead of replacing them.


> The AI revolution in medicine is here.

There were limited scope CADe results showing improvements over average readers 20 years ago, and people calling it a 'revolution' then. I'm not sure anything has really shifted; the real problems in making clinical impact remain hard.


I work at a startup that is building software for radiologists that uses AI. From what I've experienced so far, the software is definitely not the problem. Our software is already better at detecting lesions and aneurysms, and we are close on tumors too. Our goal is not to replace the radiologist but rather to decrease the error rate. But there is definitely a difference between training a model at home on perfectly preprocessed data and working with raw 'real-life' data + monetizing it + making the UX/UI for it, etc.


"ML promises to revolutionize 'X' because of the explosion of data in the modern era."

Outside of some singularity whack-jobs, that's always been the promise. The explosion of data in the field is a necessary requirement.

Healthcare fields make it nigh impossible to access data in a way that allows for fast prototyping or detailed experimentation. This isn't just about privacy either. Each hospital treats even anonymized samples as a prospective source of income and a competitive advantage. I understand why they do it from a profit-motive perspective, but it is certainly being traded off against prospective decreases in healthcare prices and significantly improved diagnostics.

ML revolutionized vision because of ImageNet and COCO. ML revolutionized language when Google scraped the entire internet for BERT. Graph neural networks have started working now that they're being run on internet-sized knowledge graphs. Even self-driving companies know that the key to the autonomous-driving mecca lies in the data and not the models. (Karpathy goes into intricate detail here during his talks.)

If a field wishes to claim that ML has failed to revolutionize it, I would ask it to first meet the one requirement ML needs satisfied: large-scale, publicly-ish available, labelled data. The sad thing is that healthcare is not incapable of providing this. It's just that the individual players do not want to cooperate to make it happen.


If you want to replace radiologists, then start by understanding what radiologists do. If your only answer to that is 'they describe what they see' then you'll have to think a lot harder than that.


The relevant question is not why radiologists aren't widely using AI software, but how accurate is the AI software relative to their human counterparts. Previous studies on the subject indicate that the accuracy of radiologists and AI software is comparable.

Radiologists are among the highest paid medical specialists and have little incentive to use AI software. It would be against their own interest - their compensation would go down, and their skills would be commoditized. Never mind that if they provided diagnosis feedback to the AI software to further strengthen the ANN models it would accelerate the decline of their profession.

HMOs and governments ultimately set the pay scale for this service. Some hospitals are already outsourcing radiology work to India. It's just a matter of time before AI is used more widely in the field due to cost constraints.


I would be happy with it being a 2nd opinion clinic. Not replacing radiologists, but "hey doc, have you considered X, Y, and Z that make the model think it is actually A instead of B?"


That's traditionally how most ML has been used in radiology systems (where it's used at all).


Medical physics student here. I work for a hospital that pays $silly per annum to use a type of expensive treatment planning software for radiation oncology. The software comes with a built-in automatic contouring based on "AI".

One of our units covered contouring and the role of the medical physicist vis a vis contouring, which is generally to act as a double check layer behind the radiologist. We received about an hour of instruction on how to contour. After that, the instructor and the class unanimously agreed that every single student had learned to beat the software at recognizing the parotid gland. And not by a small margin.

Why is it so bad? Security is a big reason. The software that can be installed on hospital computers is tightly controlled. Our research group is currently hamstrung by IT after they got mad at us for using PowerShell scripts to rename files. This was itself a workaround for the limitations of above-mentioned software. In turn, we tend to end up with a few exorbitantly priced omnibus programs rather than a lot of nice Unixy utilities that do one thing and do it well, because it lowers the IT approval overhead and the market has gone that way.

Even though my personal situation is frustrating, I obviously recognize that you can't simply allow hospital faculty to install whatever executables they please in the age of ransomware. Commenters hoping for a quick fix are wrong. Almost all meme alternatives have downsides that won't be obvious at first.

(But I still wish every day that Windows would go away.)


There was some interesting work recently published in nature on augmenting therapy selection:

https://www.nature.com/articles/s41591-021-01359-w

“Overall, 89% of ML-generated RT plans were considered clinically acceptable and 72% were selected over human-generated RT plans in head-to-head comparisons.”

This seems like it could be a way forward, where AI is used to propose alternatives and improve patient outcomes.


That's the way it is used today - for instance, in mammography there is computer-aided detection (CAD):

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1665219/

That's been in use for some time. But like many parts of radiology it really only can be a second look tool that as you mentioned proposes alternatives or suggests things. The false positive rate for CAD is substantially higher than for humans because of the human ability to see symmetry and patterns in very diverse tissue sets like one sees in screening mammography.

And the nature of screening tests like mammography means that percentages like "89%" aren't really good enough. You have to be more specific and sensitive than that to have a successful program, and I'm not sure that ML will ever be able to get there... there's a lot of experience and human intuition involved at some point that would be hard to replicate... and I know that because people have been trying to do that for decades.
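
To make the base-rate arithmetic concrete, here's a back-of-envelope sketch (all numbers illustrative, not from any real screening program):

    # Why "89%" isn't enough for screening: at low disease prevalence,
    # even a fairly sensitive and specific reader yields mostly false positives.
    def ppv(sensitivity, specificity, prevalence):
        true_pos = sensitivity * prevalence
        false_pos = (1 - specificity) * (1 - prevalence)
        return true_pos / (true_pos + false_pos)

    # Assume ~0.5% of screened patients have cancer at exam time (illustrative).
    print(f"PPV: {ppv(0.89, 0.89, 0.005):.1%}")  # -> ~3.9%

At those (made-up) numbers, over 96% of flagged cases would be false alarms -- the recall-rate problem in a nutshell.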


The comparisons are pretty tricky to do right, especially with systems that have been trained with the assumption that they are operating as "a second check". For what it's worth, that language was popularized by the first such system approved to market by the FDA, in the mid-late 90s. It had, amongst other things, an NN stage.

Even at that time, such systems were better than some radiologists at most tasks, and most radiologists at some tasks - but breadth was the problem, as was generalization over a lot of different setups.

I think this is more a data problem than an algorithmic one. With something as narrow as screening mammo CAD (very different than diagnostic), it's quite plausible that it could become a more effective "1st pass" tool than humans on average, but to get there would involve unprecedented data sharing and access (that 1st system was trained on a few thousand 2d sets, nothing like enough to capture sample variability)


The reason a clean alternative to radiologists in the form of AI is not available is the inertia of the medical system. Given its innately conservative nature, successful beta testing in a third country will likely be the only pathway to adoption by richer countries with stricter medical systems. I feel AI in medicine is a boon for developing countries if used properly. Especially diagnostics.


> Especially diagnostics.

I don't think so. Maybe I'm just too old, but I remember vividly that the same was said about expert systems back in the late 90s and early 2000s.

20 years later and no one is even considering expert systems for automated diagnosis anymore. The problem with current machine learning models is their blackbox character.

You cannot query the system for why a diagnosis was made and verify its "reasoning". Tests rely on using the systems as oracles instead, and in medical diagnosis, a patient's medical history is just as important as the latest lab results.

No amount of ML (in its current form) will be able to manage to interview patients accordingly. It might work as a tool for assisting professionals, but it's nowhere near in a state that warrants its use for automated diagnosing of patients.


Wtf - many ML models today are either full-on white boxes or are directly interpretable in various ways (e.g. the LIME algorithm). Even neural networks have good interpretability tools (e.g. Captum).

ML is not the black-box nightmare that I see it described as on here. You can figure out feature contributions and can quite easily (and accurately) verify its reasoning. If you really need these kinds of models, look into various kinds of tree-based ML models like random forests or boosted trees...
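
For instance, here's a minimal sketch of permutation importance on a tree-based model (synthetic data, purely to illustrate the workflow):

    # Permutation importance: shuffle each feature in turn and measure how much
    # accuracy drops; large drops mark the features the model relies on.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance

    X, y = make_classification(n_samples=1000, n_features=8,
                               n_informative=3, random_state=0)
    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

    result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for i in np.argsort(result.importances_mean)[::-1]:
        print(f"feature {i}: {result.importances_mean[i]:.3f}")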


> look into various kinds of tree-based ML models like random forests or boosted trees...

Those are the expert systems that fell out of favour over a decade ago, so thanks, but no thanks.

> many ML models today are either full on white boxes or are directly interpretable in various ways

Sources please, and remember to use relevant sources only, i.e. interpretation of medical image analysis like here [1].

Notably, activation maps told researchers precisely nothing about what the neural net was actually basing its conclusions on:

> For other predictions, such as SBP and BMI, the attention masks were non-specific, such as uniform ‘attention’ or highlighting the circular border of the image, suggesting that the signals for those predictions may be distributed more diffusely throughout the image.

So much for "fully white boxes" and "direct interpretability"...

[1] https://storage.googleapis.com/pub-tools-public-publication-...


As I said above, it seems fairly clear if you go to the original article that there is a real problem with the existing systems adapting poorly to different setups - they are trained on one system/hospital and then don't generalise well.


Why should medical "beta testing" happen in a third-world country? Is there some reason the higher risk of an experimental procedure is more acceptable there than (say) Boston or Dallas?


Unfortunately doctors are a lot less prevalent in a lot of developing countries, especially in subsaharan Africa. If an AI can do a third of the things that a doctor can do, that means that many more people can be treated. In the US, it just means that the appointments can be cheaper. So the developing countries have a lot more to gain from things like AI. The US and other developed countries should be doing more to help the situation, including training and paying doctors to work in those countries, but AI can potentially save a lot of lives there in the meantime.

Of course, AI can only save lives if it works reliably, which doesn't seem to be the case yet, but that hopefully can be overcome.


You think doctors are scarce in sub-Saharan Africa, but MRI machines, X-rays, and ultrasounds are plentiful?


I know that they aren’t, but that doesn’t mean that AI that doesn’t use those tools is impossible.


It's much easier and faster to buy a machine than to train a doctor.


It really isn't. Some countries can produce a trained physician for less than the cost of a new MRI machine. And beyond the capital expense those machines are expensive to operate due to technicians, maintenance, consumable supplies, power, etc.


This contradicts my experience working in medicine in Africa, where there are often very well trained people coping with a water system that doesn't always work.


90% reliable doctor in Switzerland > 10% reliable AI > 0% reliable no doctor in Africa


AI assisted virtual Doctor > No Doctor


I have always maintained that seismic interpretation entails the derivation of a plausible geological model, obviously limited to what a seismic image can provide. On the other hand, insight into the local geology and knowledge of the physical properties of ambient rocks and fluids are key to narrowing down to "plausible" solutions. Similarities with interpretation in radiology seem obvious.

In recent years, a fair number of seismic interpretation articles have appeared reporting on the use of AI-like techniques, more focused on 2D & 3D pattern recognition than on interpretation in low signal-to-noise environments, as described below. Evidently, only success stories are reported, but it is safe to conclude that AI-like techniques are definitely an additional tool in the evaluation of the subsurface. Hence I would presume that something similar will eventually take place in the interpretation of radiology data. After all, geophysical techniques and physical methods for medical purposes have benefited from each other for many decades.


I've just been in an MDT meeting that was meant to have a radiologist in it, but due to annual leave we didn't have anyone. I think people in tech don't have much of an idea of what radiologists do - the conclusion from a scan depends very much on the clinical context. In an MDT setting there is significant discussion about the relevance and importance of particular findings.


A lot of imaging has significant, non-trivial consequences.

I am sitting here waiting for an MRI of a rare and interesting soft tissue tumour to come back in the next hour or two.

https://rarediseases.org/rare-diseases/tenosynovial-giant-ce...

It was picked up by a musculoskeletal radiologist who noticed some slightly atypical features on an ultrasound and recommended that I send the patient for an MRI.

I arranged the ultrasound since the clinical course was atypical and I was reminded of a case of PVNS (a diffuse form of this) that I had seen a few years ago.

I have arranged to have one of the leading soft tissue sarcoma experts in the country to see the person if the MRI confirms the suspicion.

Hopefully, the fact that it has been detected early will minimise the likelihood of consequences like limb amputation and early funeral expenses.


>Many companies/etc keep promising to "revolutionize"

I think the technology is already here, but society does not allow technology to fail at the same rate as humans. There's also a second question: who is to blame when it fails? Doctors have malpractice insurance. A radiologist has to sign every report (and gets paid). When a Tesla autopilot has an accident, it hits the news, while humans have thousands of accidents a day.

Mammography is the most difficult radiography to interpret. Can't we start with regular chest x-rays? How about bone fractures and trauma x-rays? Those are easier, and I am sure the cost of such an x-ray would be very low.

So I think the problem is political and legal.

Do you know that 80% of doctor visits are for simple complaints like headaches, back pain, or prescription refills? Do you really think AI can't solve these?

It is all about the money, baby.


Title should read "Peak of Inflated Expectations Reached".

I've worked on many health AI/ML projects. The last decade has produced tremendously powerful prototypes which lay clear paths forward for productization.

Sure, assay software that no one cares enough about hasn't been updated, but you better believe the medical apparatus as a whole will welcome any tool that increases throughput and increases margins.

Automating radiology or facilitating radiology does just that. Sure, radiologists might not like it, but radiologists do not operate health systems; MBAs do.

For some perspective, medical devices take 3-7 years for approval. Most are not game-changing technologies. The ImageNet moment came in 2012. How can we reasonably expect to have functioning automated radiology only a decade from when we realized deep nets could classify cats and dogs?


> radiologists do not operate health systems; MBAs do.

This is highly dependent on your geographical location.


I think this article points out the fact that the creation of a technology in the lab, and its effective operational deployment are two different problems, both difficult, requiring different skills and resources.

An imperfect-but-instructive analogy would be between vaccine development and vaccine delivery. Once a vaccine has been developed and shown to be safe and effective, the hard work is just beginning. In the case of COVID, billions of doses must be produced, and then delivered to people, the delivery requires not just shipping of the doses, but matching the doses with an equal number (billions) of syringes, hypodermic needles, cotton swabs, band-aids, alcohol swabs, etc. People have to be recruited to deliver the doses, systems must be created to manage the demand and the queues, etc. The operational problem of delivering the vaccine to the world is arguably harder than its creation and testing.

Likewise, the successful operational rollout of an AI-mediated automated or semi-automated decisioning system is a complex problem requiring a totally different skillset than that of ML researchers in the lab. Computer systems and human procedures have to be created to manage the automation of the decision; decision results must be tracked and errors fed back to the lab to update models. Radiologists (including new radiologists) will of course be needed to understand the errors and provide correct labels, etc. Trust and mindshare in the medical community have to be built up. These things are not easy.


Surprised to read this. Having worked in the field, I see a growing interest in AI from the radiology community, as attested by RSNA's new AI journal. It's not about replacing radiologists but helping them in their daily work, as a safety net (double check) or as a prioritization tool.


The intrinsic uniqueness of human physiology and the differing assessments made by health practitioners make this area of medicine quite challenging.

This is compounded by the fact that different device manufacturers in the field of radiology each has their own proprietary technology that delivers different medical imaging analysis.

While there has been a lot of headway in terms of data interchange, the race among the multitude of players in this area of medicine is staggering, with each trying to proclaim itself more revolutionary and innovative than the rest.


"What happened? The inert AI revolution in radiology is yet another example of how AI has overpromised and under delivered. . . ."

Isn't this how all of the previous AI Winters started?


If 95% is good enough for you, machine learning will probably get you there rather easily.

With many of the really valuable use-cases, it's just not good enough. If 100% of the time you need an expert to tell if a sample falls within the 95% of successes or 5% failures, you're not adding any value.

Even if you're bulk processing stuff that would've otherwise been ignored, somebody will have to deal with those signals. The net effect is more work, not less.
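
A back-of-envelope sketch of that triage arithmetic (all numbers invented):

    # If errors can't be localized, the expert still reviews everything; value
    # only appears if the model can reliably flag its own uncertain cases.
    total_cases = 1000

    # Scenario A: 95% accurate, but no trustworthy confidence signal.
    reviews_without_confidence = total_cases  # expert re-reads every case

    # Scenario B: the model abstains on 20% of cases and is near-perfect on
    # the rest (a big, optimistic assumption) -- expert reads only abstentions.
    reviews_with_abstention = int(total_cases * 0.20)

    print(reviews_without_confidence, reviews_with_abstention)  # 1000 vs 200

Whether scenario B is ever achievable is exactly the open question.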

In other words, would-be radiologists ought to stay in school.


AI, and tech in general, has revolutionised radiology.

It just hasn't revolutionised "Radiologists". Nor was it ever intended to.

Not many people deeply realise this, but Radiology is a specialisation, just like Oncology or Cardiology. A radiologist is not just someone who "interprets images on a computer": a diagnosis is sought based on the radiological findings, patient history, and other examination/investigation findings.


AI and data science dev here, in the trenches.

My rule of thumb is that the overlap between the jobs AI can replace and the jobs RPA can replace covers almost 100% of AI's capabilities.

The successful AI projects I've encountered usually either build something totally new or augment the existing workforce.

It can be hard for us technologists to appreciate, but the inefficiencies of technology, policy, and people configurations can't always be resolved by technology alone.


> .. but the inefficiencies of technology .. can't always be resolved by technology alone

Technology will always resolve the inefficiencies of technology. Compare the mobile in your pocket with a mobile from the '90s.


I can appreciate your confusion and apologize for my lack of clarity. Note that my comment was on the interplay of the three domains, not the technology domain in a vacuum.


> Geoffrey Hinton … in 2016, “We should stop training radiologists now, it’s just completely obvious within five years deep learning is going to do better than radiologists.”

>… Indeed, there is now a shortage of radiologists that is predicted to increase over the next decade.

I hope those two aren't related. It's OK to bet your wealth on some technology or business existing or not in 5 years' time, but people's lives and health are more important.


At the end of the day it's the human's duty to provide a diagnosis. The lack of a complete product solution sure isn't helping, but even if a company provided that, most people still wouldn't trust it if it were purely driven by AI. The way AI should go into these fields is to first provide tools for the specialists already in the field, to increase their productivity.


In my experience, a lot of radiology is closer to fortune telling than image analysis.

I worked with image processing people who were researching ways to automate or semi-automate the work done in medical imaging. They quickly learned that the best way to learn from someone in the medical profession was not to sit down and talk with them, but to follow them around all day. The ground truth lay in the gestalt.


This discussion is a lot older than "deep" models. While statements like Hinton's quoted one are obviously silly (often the case with naive takes from deep learning maximalists), there is clearly a lot of room for more impact from algorithms, but I think it's mostly limited by data quality and access.

This is not an easy problem to solve for a range of reasons: privacy, logistics, incentives.


There seems to be a lot of startups in this space.... from a google search:

https://www.medicalstartups.org/top/radiology/

I know of one local startup personally...

https://www.aimetrics.com/

Does anyone know if any of them are getting any traction?


It's not a bad idea tbh.

Currently nurse practitioners (think: nursing degree then two years of online college) are winning the right to run their own independent medical practices all over the place. You can get a Xanax prescription after your first 15min zoom call in much of the US right now.

The political consensus is that doctors are overeducated and overpriced so I think an AI replacement could still win licensing even if it doesn't match their accuracy.


What gets me about doctors, and maybe I'm just unlucky/haven't seen enough doctors, is that I never get that "expert" vibe from them.

You know when you're talking to someone who does, say, database management. They have been at it for 15 years, have a bunch of accreditations, and are well compensated for their work. You just get the impression that you could pull out the most esoteric question about databases, and they'd go on for 45 minutes about all its nuances. No matter how hard you try, you, with a mild understanding of databases, would never be able to pin them.

I just have never gotten that vibe from a doctor. I always felt like I was only a question or two away from them shrugging, me googling, and me finding the answer.


I think a lot of it has to do with the fact that the database guy actually implements things in an environment that can be manipulated at will.

Doctors don't implement things in an environment that they control. Patients come to them with a chief complaint, and the doctor tries to resolve or manage it to the best of their ability with a minimal intervention according to a set of guidelines someone else wrote down.

A doctor can't sit there and play with the diagnostic/treatment process in the same way a database guy can go play with the database software. At best the doctor can sit there with a textbook or medical journal and try to memorize more facts, or take notes, but it's not the same as pulling apart code, running it in different ways, and seeing how it behaves.

Medical school is a continuous process of memorizing shit off of flash cards culled from a textbook. You don't actually build anything, or implement anything, or do anything in a real-world sense that would make you an expert in the same way as someone who was working with a system that they were able to take apart and play with and manipulate. There's no real way to develop the kind of deep knowledge you're talking about in that environment.

A diesel mechanic can pull apart an engine. Hold every part in their hand. Drive a diesel-engine vehicle. Observe all the things that go wrong. Simulate, and innovate. An individual doctor can't really do any of that. Medical schools are even axing dissections, so med students are lucky if they get to see what the hell peritoneum actually looks like.


It's very interesting to see this on HN because we're actively working in this space, albeit on building a training platform; the long-term goal is to generate models that can outperform the current ones, which require a lot of expert input.

Shameless plug: https://www.rapmed.net


Does anyone know the collective time and energy put into deep learning models versus the social benefit? I recognize that this is near impossible to calculate and the benefits will hopefully be for many years to come.

It does feel like the hype around deep learning has been large, though, and significant progress has not been as sticky as hoped.


These systems are basically banned from working as actual replacements for radiologists, so it's no surprise they aren't yet. We have repeatedly proved the superiority of expert systems (in the 90s) and AI at select medical tasks. However, there is a legal monopoly (in the US) that requires most medical tasks to be performed by expensive doctors.

If people were able to use these tools directly, we could see dramatically better results because we would be giving people decent healthcare at basically zero cost. Cost is by far the biggest problem in healthcare today. Low cost would change behavior in a large number of medical tasks, and early detection of cancer is the most obvious. If you could get a mediocre readout for free, you would probably do so more often. Cancer in particular is almost entirely an early detection problem.

Using AI to assist radiologists is probably never going to be a huge thing. Just like AI assisted truck driving is never going to be huge (because it doesn't solve the core problem).


> We have repeatedly proved the superiority of expert systems (in the 90s) and AI at select medical tasks.

Part of the problem here, though, is a human radiologist may look at an "is the bone broken?" x-ray and go "yeah, obviously, but what's this subtle spot here?" and find an early-stage bone cancer or something along those lines. There's a value to that.

The AI might give you the right answer, too, but miss the more subtle issue the AI isn't equipped to spot.


Spelling: "it's failing" instead of "its failing", in the title no less.


I worked in a small research company that had a method (segmentation + CNNs, etc.) a few years back. We had some exciting stuff on masking effects as well, but as soon as we got into SaMD and the main revenue stream (grants) dried up, engineering closed down.


These guys - MVision - seem to promise automated segmentation using AI. Is this failing too, or is it more of a step in the right direction? https://www.mvision.ai/


I did a project in this AI domain a decade ago: look at different scans and then decide whether a pixel is cancer tissue or not. The project succeeded, and I'm pretty sure some radiologist is enjoying his 4h work week.


I don't think it's Luddites holding AI back as some comments have suggested. In the medical field, indemnity is the name of the game. An expert will always have to sign off on whatever the AI suggests.


Nobody'll see this, but totally offtopic: see the weird random horizontal line in the margin off to the right? It's a bit of injected spam. The site seems to have been slightly hacked.


You gotta give it a minute! This isn’t like Facebook shipping their latest startup clone where they just slap something together in 10 weeks and call it a day. This will be a multi decade process.


How could it? We find most things we're capable of finding. Medicine needs more treatments, cures, and prevention techniques, not more diagnosis.


The biggest reason corporations want AI to succeed is so they no longer have to share even a meagre percentage of revenue with you any longer.


The biggest reason humans want AI to succeed, however, is so that they can stop spending their time serving other humans in repetitive, mundane, and boring work.

Wealth distribution is a problem that can be solved better or worse with or without AI. Better with AI looks like things along the lines of low working hours with good pay, UBI, or so on, things we simply can't afford without automating the work. Worse without AI looks like things like slavery, something we don't even have incentive to resort to if we do automate the work.

Let's not confuse our current political issues with how we distribute wealth with issues with AI. They're issues with our institutions, some of those issues are exacerbated by better technology, but they can and should be solved.


Do you not feel it naive to assume that those who currently hold the vast majority of wealth and power won't continue to do so and exploit the rest of us? I mean, they will own all the AIs, right?


Yes and no.

I don't think it's likely that inequality will go away, a minority will likely continue to hold the vast majority of wealth.

But, as technology gets better the incentive to exploit the bottom end of the wealth and power spectrum goes down, technology does the job better than manual labor. Moreover people don't actually like to see other people suffer as a rule of thumb, so given the absence of other incentives the QOL for the bottom end of the spectrum likely goes up. UBI is a much bigger ask when it's 30% of the GDP than 3% (made up numbers).

Especially given that we have reasonably functional democracies in many countries today, I think it's pretty likely that the social problem works out well.

It's also not like I expect AIs/useful programs to be a rare resource. The best one might be, but like all computer programs they're infinitely clonable; there are only artificial sources of scarcity, plus scarcity of (more or less fungible) computational power. This isn't a setup that lends itself well to the hoarding of resources. Moreover, maintaining the technology will probably consume some humans in frankly not-that-exciting jobs like "maintenance programming"... and those humans probably won't come from the high end of the wealth/power spectrum, but will be high leverage. That points to a society that is likely to have reasonably good class mobility. The cards are stacked in favor of democracy working reasonably well.


That is a way more optimistic outlook than I have had for the last 15 or so years. I sincerely appreciate your response and outlook. Not sure if I agree or disagree but, that's just my opinion, man. :)


Or, as they say: ”It is difficult to get a man to understand something when his salary depends upon his not understanding it.”


Isn’t this simply a case of over regulation?


It can be true both that AI beats radiologists AND the number of radiologist jobs in the U.S. is going up.


In my 'biased' view, AI has already revolutionized more fields than most people will 'ever' recognize. However unfounded fears and insecurities around jobs are keeping its real potential at bay.

My bet is the actual impact will first be realized in poorest countries (India included) and then will spread to more advanced countries (US/G7).


This attitude will lead to the "AI Winter" again; don't do it.


AI isn't here to replace radiologists. It is here to augment them.


ai promised to revolutionize ________ but so far its failing


Let me start with declaring conflict of interest: I work in one of the aforementioned AI startups, qure.ai. Bear with my long comment.

AI is starting to revolutionise radiology and imaging, just not in the ways we think. You would imagine radiologists getting replaced by some automatic algorithm and we stop training radiologists thereafter. This is not gonna happen anytime soon. Besides, there's not much to gain by doing that. If there are already trained radiologists in a hospital, it's pretty dumb to replace them with AI IMO.

AI instead is revolutionising imaging in a different way. Whenever you imagine AI for radiology, you probably imagine dark rooms, scanners and films. I appeal to you to imagine the patient instead. And the point of care. Imaging is one of the best diagnostics out there: non-invasive, and you can actually see what is happening inside the body without opening it up. Are we training enough radiologists to support this diagnostic panacea? In other words, is imaging limited by the growth in radiologists?

Data does suggest a lack of radiologists, especially in low- and middle-income countries.[1] Most of the world's population lives in these countries. In these countries, hospitals can afford CT or X-ray scanners (at least pre-owned ones) but can't afford having a radiologist on premises. In India, there are roughly 10 radiologists per million people.[2] (For comparison, the US has ~10x more radiologists.) Are enough imaging exams being ordered by these 10 radiologists? What point is there in 'enhancing' or 'replacing' these 10 radiologists?

So, coming to my point: AI will create new care pathways and will revolutionize imaging by allowing more scans to be ordered. And this is happening as we speak. In March 2021, the WHO released guidelines saying that AI can be used as an alternative to human readers for X-rays in tuberculosis (TB) screening.[3] It turns out AI is both more sensitive and more specific than a human reader (see table 4 in [3]). Because TB is not a 'rich country disease', nobody noticed this, the author likely included. Does this directive hurt radiologists? Nope, because there are none to be hurt: most TB cases are in rural areas, and no radiologist will travel to a random nowhere village in Vietnam. This means more X-rays can be ordered and more patients treated, all without taking on the burden of training ultra-specialists for 10 years.

References:

1. https://twitter.com/mattlungrenMD/status/1382355232601079811

2. https://health.economictimes.indiatimes.com/news/industry/th...

3. https://apps.who.int/iris/bitstream/handle/10665/340255/9789...


and so is grammar


If you take career advice from the brainfarts of thought leaders in any field besides the one you intend to join, you're going to have a bad time.

And even then, thought leaders rarely build, but they sure love the sound of their own impotent voices and the disproportionate influence platforms like TED provide them to virtue signal and other buzzwords the dystopic tech hivemind conjured into existence to stay relevant.

Caveat Emptor...


ML/AI is such an irresistible siren song for so many… the possibilities are seemingly endless. But the “sales and marketing” people are getting over their skis selling AI/ML to the point of smothering the tech. The next AI winter is going to be long and cold…


"only about 11% of radiologists used AI for image interpretation in a clinical practice. Of those not using AI, 72% have no plans to do so while approximately"

I hope the author is joking. Does he really expect the technology to be pushed forward by the very people it is going to replace??? They are your main obstacle after the technical issues; don't use their adoption as a metric.

It is just crazy to say to people, "Hey, I'm going to make your profession obsolete and take your job and status in society away by implementing this new tech that is just way better than you -- help me do it."


Are ignored research papers a source of startup ideas?

Should entrepreneurs build products based on this research, to sell or to get acquired by incumbents?


That guy Jovan Pulitzer, of election audit fame, claims to have patents on this. Not saying anything... just that it's a popular field, and it seems like lots of people are piling in... without the expected results.


"only about 11% of radiologists used AI for image interpretation in a clinical practice. Of those not using AI, 72% have no plans to do so"

Highly intelligent people, a group radiologists certainly fall into, are not going to adopt technology clearly aimed at replacing them.


Has AI revolutionized anything other than driving up user engagement/addiction on shitty websites?


Speech and language transcription and translation have come a very long way. Still not perfect, but almost at human level in some instances.


Have you ever heard about Alexa?


I don't know; the NSA was capable of that level of mass surveillance long before Amazon was.


Asking such a question is only proof of your own ignorance. I invite you to discover the state of the art and its scope: https://paperswithcode.com/sota


While this comment is a good start, we should remember that for some scores, SOTA is only loosely correlated with improvements in downstream performance. This is true in things like summarization with ROUGE scores (which suck and everyone hates them)



