"Given a detailed list of symptoms" is sure holding a lot of weight in that statement. There's way too much information that doctors tacitly understand from interactions with patients that you really cannot rely on those patients supplying in a "detailed list". Could it diagnose correctly, some of the time? Sure. But the false positive rate would be huge given LLMs suggestible nature. See the half dozen news stories covering AI induced psychosis for reference.

Regardless, its diagnostic capability is distinct from the dangers it presents, which is what the parent comment was mentioning.


