I realize I should have been more precise. I agree that there are many areas in which AI can and already has surpassed humans and makes grave mistakes less often than humans do. I specifically had natural language processing in mind, with a focus on "intelligent" conversation. The issues in that area might have less to do with pattern-recognition ability and more with the lack of appropriate meta-cognition, introspection, and self-doubt. Maybe having several AIs internally debate which answer is best before uttering it would already do the trick, though.
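To make that last idea slightly more concrete, here is a minimal, purely illustrative sketch in Python. The generate/score callables are hypothetical stand-ins for real models: each model proposes an answer, every model scores every candidate, and the top-scoring candidate is the one that gets uttered.

    # Hypothetical sketch: each "model" is a (generate, score) pair of stand-in
    # callables; a real system would wrap actual model calls here.
    from typing import Callable, List, Tuple

    Answer = str
    Model = Tuple[Callable[[str], Answer], Callable[[str, Answer], float]]

    def deliberate(question: str, models: List[Model]) -> Answer:
        # Each model proposes one candidate answer.
        candidates = [generate(question) for generate, _ in models]
        # Every model scores every candidate; sum the scores per candidate.
        totals = [sum(score(question, ans) for _, score in models)
                  for ans in candidates]
        # Utter the candidate with the highest combined score.
        return candidates[totals.index(max(totals))]

    if __name__ == "__main__":
        # Toy stand-ins with trivial generate/score functions.
        models = [
            (lambda q: "Paris", lambda q, a: 1.0 if a == "Paris" else 0.0),
            (lambda q: "Paris", lambda q, a: 1.0 if a == "Paris" else 0.2),
            (lambda q: "Lyon",  lambda q, a: 0.5),
        ]
        print(deliberate("What is the capital of France?", models))  # -> Paris

This is essentially ensemble voting / self-consistency; whether that kind of internal deliberation actually buys the missing self-doubt is of course an open question.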



> AI can and already has surpassed humans and makes grave mistakes less often than humans do.

Radiology, to continue the example, isn't one of them. We've been doing ML/AI on radiology data since the '90s, and the results have been, and remain, decidedly mixed.


A couple of points:

1. It's easy to forget how recent many modern ML methods for computer vision are. (E.g., U-Net only goes back to 2015!)

2. It's not totally clear to me what you mean by "mixed results" (have we solved every problem in radiology? probably not). However, it is clear that there have certainly been some successes. Here's one example:

https://www.nature.com/articles/s41586-019-1799-6.epdf?autho...


That paper is a good example, actually. The first clinically approved, commercially available breast screening CADe system using NN models was available in the '90s. It too was aimed at the second-reader problem. At the time there was a lot of concern in radiology circles that algorithms were going to take over. That pretty quickly died down as people worked with CADe and CAD systems.

Breast is one of the obvious targets, as data availability is pretty good. So over 25 years there have been incremental improvements, sure, but no real eye-opening jumps. The move to deep models has helped a bit, but nothing revolutionary. You still find very influential radiologists who aren't convinced it's worth the time, yet. I think all of them expect it to become a growing part of the workflow over time, but that's about it. Personally, I think the impact will be both much bigger than the pessimistic radiologists think and much further off than the optimistic ML crowd thinks (for both non-technical and technical reasons).

I suspect the broader availability of good digital data has had far more impact than the modeling advances. Don't get me wrong, I appreciate the tools and modeling developed over the last decade - but I think the big wins are far more about data, and secondarily about compute availability, than about models.


> It's not totally clear to me what you mean by "mixed results" (have we solved every problem in radiology? probably not)

To expand a bit, since it may not have been clear from my other reply (can't edit): not only have we not solved every problem in radiology, we haven't really knocked a single one out of the park.

By mixed results, I mean that the practical, i.e. clinical, impact of these approaches has been pretty small, and this is likely to continue to be true for the foreseeable future. To be fair, there are lots of non-technical and cultural issues behind this - not just failure to generalize.



