> bedside manner + being able to remember a lot of things
My impression is that accompanying a patient over time is super important: it helps to understand the illness, to have a plan in case of more complex treatments, etc.
My doctor also knows me well enough to gauge my health. She's very good at probabilities and at detecting when something really goes wrong.
I'm sure that being able to do that requires a lot more than numbers.
(I'm studying data science and I trust it, but my gut tells me that diagnosis is a whole different ball game.)
What many people don't realize is that medicine as a whole is already some sort of expert system (i.e.: a flavor of AI).
There are researchers that conduct experiments to produce meaningful data and extract conclusions from that data. Then there are expert panels that produce guidelines from the results of that research. Most diagnostics and treatments are prescribed following decision diagrams that doctors themselves call... algorithms!
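Those decision diagrams are often mechanical enough to write down directly as code. As a sketch, here is the published CURB-65 pneumonia severity score, a real and widely used clinical rule, expressed as a function (the code framing is mine; the thresholds follow the standard criteria):

```python
def curb65(confusion: bool, urea_mmol_l: float, resp_rate: int,
           systolic_bp: int, diastolic_bp: int, age: int) -> int:
    """Return the CURB-65 score (0-5): one point per criterion met."""
    score = 0
    score += confusion                                  # new-onset confusion
    score += (urea_mmol_l > 7)                          # blood urea > 7 mmol/L
    score += (resp_rate >= 30)                          # respiratory rate >= 30/min
    score += (systolic_bp < 90 or diastolic_bp <= 60)   # low blood pressure
    score += (age >= 65)                                # age 65 or older
    return score

# Guidelines then map the score to a disposition: 0-1 home care,
# 2 consider admission, 3+ consider intensive care.
print(curb65(confusion=False, urea_mmol_l=8.2, resp_rate=32,
             systolic_bp=85, diastolic_bp=55, age=70))  # → 4
```

That the rule fits in a dozen lines is exactly the point: a large share of guideline medicine is already explicit branching logic.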
There are several limitations that prevent us from applying other AI techniques to the problem. Off the top of my head:
- We do not have the technology for machines to capture the contextual and communication nuances that doctors pick up on. There can be a world of difference between the exact same statement given by two different patients or even the same patient in two different situations. Likewise, the effect of a doctor's statement can be quite literally the opposite depending on who the patient is and their state of mind. One of the most important aspects of the GP's job is to handle these differences to achieve the best possible outcomes for their patients.
- Society at large is not ready to trust machines to make such intimately relevant decisions. It is not uncommon for patients to hide relevant information from their doctors, and to blatantly ignore the recommendations from them. This would be many times worse if the doctor part wasn't human.
- We cannot apply modern inference techniques (e.g.: deep learning) to the global problem because we have strict rules that prevent medical data collection and analysis without a clear purpose. Furthermore, these techniques tend to produce unexplainable results, which is unacceptable in this field. As a result, there's not enough political capital to relax those rules.
The attending physician in a modern hospital system is primarily a manager. Their main concern is treatment of the patient's medical issue, but their role isn't limited to that. This patient is refusing care but also refuses to leave, what do we do? How should we schedule care around a patient who requires the entire floor to assist in daily activities of living? They may not get the last word on matters outside of their responsibilities, but being the physician their words carry weight. This role has remained pretty much constant through the modern medical system, even as medicines change and nurses and technicians gain more responsibilities.
A computer cannot perform that role within the current paradigm of AI; even the worst, most arrogant doctor is a more qualified leader than any computer.
> We cannot apply modern inference techniques (e.g.: deep learning) to the global problem because we have strict rules that prevent medical data collection and analysis without a clear purpose.
I mean, China will likely do it, as long as they can capture high quality data, so there's that.
“Super important” — more like “super nice-to-have.” Hospitals don’t have any single person on staff who stays attached to particular in-patients. Who knows you? Your chart.
Yes, of course, hospital care would be better in many ways if we did have somebody who statefully understood particular patients’ needs.
But what I’m saying is, the GPs in hospitals could be replaced with stateless diagnostic AI without making hospital care any worse than it is now. And hospital care is a large part of the medical system, so only replacing diagnostics there (while leaving primary-care GPs alone) would still be a major optimization, freeing many doctors to provide better care, go into specialties, etc.
That's simply false. You obviously have no idea how hospital care is actually delivered. To start with, every admitted patient has an assigned attending physician who is responsible for coordinating the care team. Some things can be documented in the patient chart but there are always gaps. Clinical decision support systems for partially automating diagnosis could potentially be helpful in some limited circumstances but the ones built so far mostly don't work very well.
Then where exactly does the oft-cited kafkaesque nightmare of being "lost in the US hospital system" come from (with patients put in the wrong wards and forgotten, sometimes for months; given inappropriate medications that don't end up recorded; tested repeatedly for the same problems because the test results were "lost"; etc.)?
Just because someone is responsible for you doesn’t mean the system works competently enough to make sure you receive only what you need in a timely manner.
> To start with, every admitted patient has an assigned attending physician who is responsible for coordinating the care team.
That's rarely (if ever) a 1:1 ratio. That attending physician is almost certainly juggling multiple patients. Same with the rest of the care team. There's a reason why the first thing one does when approaching a patient bed is to look at the chart.
> Some things can be documented in the patient chart but there are always gaps.
Then those gaps need to be closed, stat. Once those gaps are closed...
> Clinical decision support systems for partially automating diagnosis could potentially be helpful in some limited circumstances but the ones built so far mostly don't work very well.
...then this will improve considerably. Garbage in, garbage out.
That's largely pointless. The critical data elements do get charted. But time spent closing data entry gaps on patient charts is time not spent actually caring for patients. There are simply not enough clinicians to do all that, or funding to pay them. Furthermore, there are many aspects of patient conditions that can't really be coded in a useful way. A skilled, experienced clinician can intuit a great deal from subtle signs like skin color, breath sounds, tone of voice, small movements, etc. Healthcare relies on tacit knowledge far more than arrogant, ignorant software developers understand.
And in most routine cases the diagnosis is the easy part. The hard stuff is actually working with patients and doing the hands-on procedures, which won't be significantly automated in our lifetimes.
Pattern recognition algorithms do have some promise for computer-assisted interpretation of things like medical images and ECG waveforms where the input data is already in digital form. We can't rely exclusively on algorithms for patient care, but if the algorithm reaches a conclusion different from the human physician's, then it's probably worth taking a deeper look and getting a second opinion.
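To make that concrete, here is a toy sketch of the "digital waveform" case: detecting beat-like spikes in a synthetic signal and estimating heart rate from the inter-peak intervals. The signal is fabricated and real ECG interpretation is far harder; the point is only that digitized inputs are where algorithms get traction.

```python
import numpy as np
from scipy.signal import find_peaks

fs = 250                        # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)    # 10 seconds of signal

# Synthetic ECG-like trace: an R-peak-like spike every 0.8 s (75 bpm) plus noise.
sig = np.zeros_like(t)
sig[::int(0.8 * fs)] = 1.0
sig += np.random.default_rng(0).normal(0, 0.05, len(t))

# Detect the spikes and estimate heart rate from the mean inter-peak interval.
peaks, _ = find_peaks(sig, height=0.5, distance=int(0.4 * fs))
bpm = 60 / (np.mean(np.diff(peaks)) / fs)
print(round(bpm))  # → 75
```

The subtle bedside signs from the comment above (skin color, small movements) have no such clean digital representation yet, which is exactly the gap being argued about.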
> And in most routine cases the diagnosis is the easy part.
...which is why we shouldn't be wasting such valuable resources (people with an aptitude for medicine) on doing such an easy, routine task all day long, no? It'd be like if chefs spent most of their time hand-grinding spices rather than cooking.
> A skilled, experienced clinician can intuit a great deal from subtle signs like skin color, breath sounds, tone of voice, small movements, etc. / Pattern recognition algorithms do have some promise for computer-assisted interpretation of things like medical images and ECG waveforms where the input data is already in digital form.
You're seemingly contradicting yourself here: you're saying that the places where ML shows the most promise are exactly the tasks that would best replace what doctors are doing. The only reason ML can't do those things is that people aren't putting data like "a video exam of the patient by a nurse" into the chart where the diagnostic algorithm can see it / be trained on it.
>Hospitals don’t have any single person on staff who stays attached to particular in-patients.
This is incorrect. Doctors are assigned to patients, and if there is a complication at any time of day or night the doctor is contacted to decide what to do. The entire point is that there is one person who is familiar with the patient, who is also responsible if anything goes wrong. I don't know how malpractice would work with an AI, but given the number of malpractice cases yearly in the US it'd either be sued out of viability or have to be granted legal immunity (possibly the worst scenario, IMO).
Knowing the ontology of your patients and their risks is also part of a doctor's job, but we can do that with AI too. Hell, ontological engineering had a revamp specifically so that we could have a standardized model to describe any and all "parts" of a "whole" in a way that machines can understand.
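As a toy illustration (invented relations, not an actual medical ontology such as SNOMED CT), here is what machine-readable part-whole modeling looks like:

```python
# Toy part-of hierarchy. Real medical ontologies encode millions of such
# relations; the lookup logic stays this simple.
PART_OF = {
    "left ventricle": "heart",
    "heart": "cardiovascular system",
    "aorta": "cardiovascular system",
    "cardiovascular system": "human body",
}

def is_part_of(part: str, whole: str) -> bool:
    """Transitively check whether `part` lies somewhere inside `whole`."""
    while part in PART_OF:
        part = PART_OF[part]
        if part == whole:
            return True
    return False

print(is_part_of("left ventricle", "human body"))  # → True
print(is_part_of("aorta", "heart"))                # → False
```

Once relations like these are standardized, a machine can answer structural questions about anatomy or risk factors without any bespoke logic per question.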
It also helps to have a relationship with a patient (or person).
There are some people who will never, ever complain about anything. When they complain of severe abdominal pain, for example, you pull out all the stops immediately to figure out what's wrong, because it's probably really bad.
On the other hand, there are hypochondriacs and people with low pain tolerance. While they can certainly also become seriously ill -- and one must never forget this -- the tempo and pace of workup and order of intervention is markedly different, absent other information that shifts the pretest probabilities.
Sometimes a relationship is bad. If you think someone’s a hypochondriac, but in fact they’re unusually sensitive, you’ll dismiss a lot of what they say and that can be quite damaging over time. (Especially if they’re female https://www.health.harvard.edu/blog/women-and-pain-dispariti...).
I wouldn’t eliminate GPs from the process, but many people actually would like to hear what the robots have to say about their medical conditions. Having second opinions of this sort available might lead to better patient outcomes.
There is no evidence that diagnostic robots would actually produce better outcomes. The hypochondriacs are already able to Google their symptoms and make themselves sick with anxiety.
Lol. Maybe people who don’t have any medical problems.
There isn’t enough humanity in healthcare to begin with. Replacement of doctors with AI sounds pretty horrific. General practice isn’t where healthcare costs are going bonkers, and it seems weird to want to cost-cut something that actually kind of works in favor of bullshit.
Know what would be a great use of AI? Something real like analyzing all of the telemetry in EMRs to provide better guidance to doctors to proactively guide people. Some CVSHealth chatbot telling me whatever is a waste of time.
Automated diagnosis applications have existed for decades. They have proven useful in limited circumstances for certain specialties and rare conditions but for routine medical care they're more hassle than they're worth.
That’s me. I really, really appreciate a GP that both understands that I’m not doing it on purpose, and can reassure me that nothing is wrong, or figure out that we actually do need more testing this time.
Unfortunately it’s been years since I had one like that :/
What data is being collected on you? A once-a-year blood test, if even that?
I actually suspect it would be trivial to beat my doctor after 5 years of higher frequency full blood panel data collection.
10 full blood panel samples a year, have 20 million people do that for a data set we can do classification on. I think my doctor is kind of out of business then.
Will never happen in my life though with health insurance and health bureaucracy.
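For what it's worth, the classification step itself is the easy part. Here is a sketch on entirely synthetic data (the analyte values and the drift pattern below are invented; collecting labeled real-world panels at scale is the hard part):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each "patient" = 10 panels/year x 5 analytes, flattened to one row of
# 50 features. Values are made up for illustration.
healthy = rng.normal(loc=100, scale=5, size=(200, 50))
drifting = rng.normal(loc=100, scale=5, size=(200, 50))
drifting += np.linspace(0, 15, 50)   # analytes slowly trending out of range

X = np.vstack([healthy, drifting])
y = np.array([0] * 200 + [1] * 200)  # 1 = "flag for a human to review"

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.score(X, y))
```

Separating clean synthetic trends is trivial; getting 20 million people's labeled longitudinal panels past privacy rules and insurers is the part that "will never happen".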
Beat your doctor on what? You can already get 10 full blood panel tests per year if you want. You can just pay for it and don't need insurance. But what will you do with the data? For most people the results won't tell you anything useful.
It won't happen primarily due to government regulation. Medical information has "dangerous, don't touch this" written all over it, and everyone is scared to try.