I used to be a doctor in the NHS, and have worked on a medical startup also, so I’d like to think I have some relevant perspective and experience with respect to patient safety and confidentiality.
There is a lot of hand-wringing in this article, with no reasonable attempt to discuss the balance between possible benefit and possible harm to the patient.
A lot of the paper argues that there was no direct care relationship between DeepMind and each patient, and that the data transfer was therefore inappropriate. I disagree, and think in the future we will see many algorithmic systems involved in direct patient care. After all, you could pay an army of clerks to review the notes and results (i.e. patient data) of every patient at the Royal Free Hospital, run it through an algorithm (paper flowsheet), then notify the doctor that there might be impending acute kidney injury, leaving it to the doctor to make the clinical decision about whether there was in fact AKI (true positive) or not (false positive), like we do with EVERY piece of information we get when assessing a patient. That could have been done any time in the last 100 years, without anyone being concerned about the appropriateness of it.
DeepMind was essentially doing the same thing on behalf of the doctors at the hospital.
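The clerks-and-flowsheet thought experiment can be sketched in a few lines. Here's a toy rule-based flag with thresholds loosely borrowed from the KDIGO creatinine criteria (an absolute rise of ≥26.5 µmol/L within 48 hours, or ≥1.5× baseline) - illustrative only, not DeepMind's actual Streams logic:

```python
def flag_possible_aki(baseline_umol_l, current_umol_l, rise_48h_umol_l):
    """Toy AKI flag with illustrative thresholds (loosely modelled on
    KDIGO creatinine criteria). The output is a prompt for clinician
    review, not a diagnosis."""
    if rise_48h_umol_l >= 26.5:                   # absolute rise within 48 hours
        return True
    if current_umol_l >= 1.5 * baseline_umol_l:   # relative rise vs baseline
        return True
    return False

# A creatinine jump from 80 to 130 umol/L should prompt a review...
print(flag_possible_aki(80, 130, 50))   # True
# ...while a stable result should not.
print(flag_possible_aki(80, 85, 5))     # False
```

Like the paper flowsheet, the flag only prompts the doctor; the clinical decision stays with them.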
The risk of harm and the regulatory framework should certainly be considered and developed, but they have to be weighed against the potential benefits, same as every other medical advance (and setback) in human history. Listing a whole bunch of potential dangers is an easy way to write an article that gets attention, but I’m more worried about articles like this unreasonably impeding progress in the field of “algorithmic medicine” than I am about DeepMind and the Royal Free’s approach to this, at this exploratory stage.
I would feel differently if there was an identifiable business model that conflicted with the interests of the patient, but I think at this super early stage we should err on the side of exploration, not restriction.
Personally, having sat on 'watson for healthcare' presentations by top architects, and having had conversations with them, I'm far less optimistic about the whole AI in healthcare approach.
Isn't Watson known to be somewhat snake-oily? I don't think it's reasonable to transfer assessments based on Watson to Deepmind or any other organization.
Assuming Watson is "snake-oily" (Chess and TV demos aside), I don't see anybody else on the planet, let alone that startup Google bought, solving this one without people dying from misdiagnosis.
I'll write two things here, and let you derive further your own conclusions:
> solving this one without people dying from misdiagnosis
Why would that be the best metric? I'd prefer "overall minimization of death and sickness", not "no mistakes allowed for DeepMind", as if doctors don't make lots of mistakes that cost human lives.
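To make the metric concrete, here's a back-of-envelope comparison. Every number below is a made-up assumption for illustration, not a real figure - the point is only that the right question is whether doctor-plus-algorithm harms fewer people than doctors alone, not whether the algorithm is ever wrong:

```python
# Hypothetical numbers only: total misdiagnosis deaths under
# "doctors alone" vs "doctors + algorithm".
patients = 1_000_000
doctor_error_rate = 0.05           # assumed misdiagnosis rate, doctors alone
doctor_plus_ai_error_rate = 0.03   # assumed: the algorithm catches some misses
deaths_per_error = 0.01            # assumed fraction of errors that are fatal

doctors_only = patients * doctor_error_rate * deaths_per_error
with_ai = patients * doctor_plus_ai_error_rate * deaths_per_error
print(round(doctors_only), round(with_ai))  # 500 300
```

Under these (invented) rates the combined system still misdiagnoses people, yet causes 200 fewer deaths - which is what "overall minimization" means.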
As someone working in this space, there is a massive disconnect between the engineers and the medical providers. One has never been in clinic. The other never took linear algebra. To stand with a foot in both worlds is utterly fascinating.
Funny you say that, I just started studying for an MS in math and stats while working as an ER doc :) Thinking about maybe going on to do a PhD - I was interested in learning how machine learning might safely improve medical care, and began to suspect that to do so without understanding the math would be like diagnosing disease without knowing anatomy, physiology, biochemistry, pathology, pharmacology,...
It turns out that math is at least as fun as emergency medicine.. :)
There is a principle in medicine that you need to demonstrate efficacy first, before taking further risks at the potential expense of the public. Advancing the tech sounds good on paper, but the cost is unknown and the benefits unclear at best. “Someday it’ll be great” isn’t an argument for your government giving Google private information.
> There is a principle in medicine that you need to demonstrate efficacy first, before taking further risks at the potential expense of the public.
This is a key point. It should be noted that health care systems and credit bureaus have in common the ability to accumulate information about people without those people effectively having a choice. Thus it's especially important to have controls, and "hand wringing" that a company is just taking data without the destination or use being evident is entirely appropriate.
Would the same be true for the manual version of the same procedure I outlined? If so, what are the risks you are concerned about? If not, why would the computerized version be different?
Of course it's true that a manual version of google/deep-mind should involve informed consent for data being given to a third party - and should also be tested for its effectiveness.
The large group of clerks you describe seems like it could leak a lot of data, just for example.
The thing is, neither the Google algorithm nor your idea has proven effectiveness. The "obviously it would improve things, so worries over implementation and data control are hand wringing" attitude neglects the fact that all sorts of apparently logical steps have turned out not to do the good people expected (cholesterol drugs being a good example: yes, they lower cholesterol; no, they don't improve life expectancy, etc.) - and also that a good program should be implementable with good data controls, so there's no reason to frame this as a trade-off.
Even worse, if the public comes to associate ML in medicine with shady dealings and lost privacy, then adoption of proven tech will be delayed or stopped. The ethical constraints around medicine exist for good reasons, and shortcuts can cost lives in innumerable ways.
While having some cynicism over Tory deals regarding NHS data is understandable, and it reads like the agreement with DeepMind was not up to the standards expected, it seems to me less about a political party trying to "monetize" patients as it is an NHS trust trying to maximize what they can deliver in the context of limited resources.
I had a conversation with a representative from a medical analytics company just before Christmas who portrayed radiographers and others who would have their jobs threatened by advances in medical imaging analytics as being obstructionist. It's also a common trope to hear that the Trusts do endless proof-of-concepts and redevelop solutions other Trusts have already created due to political power dynamics.
While entering a situation where Google has monopolistic power over data or analytics would be undesirable, I think it's unreasonable to say that the only solution for the NHS is to feed more money into the present dysfunctional system. I personally find it quite distasteful that we can't bring in cheaper, more accurate solutions with fewer barriers just because they come from the private sector.
> DeepMind did not have the requisite approvals for research from the Health Research Authority (HRA) and, in the case of identifiable data in particular, the Confidentiality Advisory Group (CAG)
This is kinda the crux of the matter. All the exciting, and likely beneficial, machine-learning that we would expect from a relationship between an organisation that is good with analytics and another organisation with a lot of data, can happen only under the guise of research.
Any kind of medical research requires ethical approval, and in the UK, this is not an easy process. It requires a responsible organisation (usually academic, like a university, but it can be commercial) and extensive documentation on how the data will be collected, handled and processed. Consent and information leaflets, duration of study, etc. are all part of this, and this needs to be forwarded to the regional ethics committee - a terrifying panel of around 8 people who are there to ensure that you've thought of everything surrounding the ethical aspects of your research. After this, the hospital's R&D department need to check through and make sure that the things they are responsible for - i.e. ensuring patient data does not leave the premises/country without good reason - are all ok.
There's good reason for all of this, too - it's not that long ago that major breaches of medical ethics were part of research [1], and medical confidentiality is taken seriously in the UK.
As an example of research done right - here is an observational study on nearly 1 million patients, tracking their admission, discharge from hospital and subsequent complications [2]. Every single patient consented and agreed to their information being used by the researchers for these purposes.
Personally, I'm excited to see machine/deep learning applied to medical data - it's clearly going to be of huge benefit (well, it's happening anyway within academic institutions), but no one should be suggesting it is done without the usual ethical approval. Probably what needs to happen here is for Google or similar to pair up to organisations used to obtaining ethical approval for research - Universities, and work collaboratively with them.
If Google wants data, why don't they just ask the public for it? You know, the old-fashioned way. Open a store, staff it with real humans who can talk to real humans. My guess is that they are trying a 'heuristic' solution - trying to 'add value' to data that already exists. If that's the case, why don't they just ask 23andMe for their DNA database. But I'll tell you right now, I am tired of being the 'product' for Facebook, Google, etc. If they want my data, they need to ASK me for it.
It turns out that the value of this data in aggregate is pretty high - there is the possibility of saving probably 100 million person-years per year of lives through algorithmic medicine (i.e. each person on earth lives 1 year longer), and the raw records could form a substantial cut of that value.
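For what it's worth, the 100-million figure roughly checks out as back-of-envelope arithmetic, under the stated (and very optimistic) assumption that algorithmic medicine eventually buys each person on earth about one extra life-year:

```python
# Rough sanity check on the figure above. If each of ~7.5bn people
# gains one extra life-year, and a typical lifespan is ~75 years,
# the gain accrues at roughly 7.5bn / 75 = 100m person-years per
# year. All inputs here are rough assumptions, not data.
population = 7.5e9
lifespan_years = 75
extra_years_per_person = 1

person_years_per_year = population * extra_years_per_person / lifespan_years
print(person_years_per_year)  # 100000000.0
```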
The flip side is researchers don't need everybody's data. A few million records would be very beneficial, and they can likely offer something worth a few cents to persuade enough people to hand over data.
Lots of hand wringing in the article, and the comments section too. My highly limited understanding of the domain is that the data suck, the algorithms suck, the process sucks, and the patient is completely disregarded at best.
We don't have the time, money, or discipline to collect and code and scrub the data to the necessary degree where an algorithm could pick up where we left off and actually make a diagnosis that wasn't blatantly obvious from the onset. But there's so much money sloshing around in the whole system that damned if we don't have the IBMs and the Googles of the world trying, even though they both know better. Solving the real problems just isn't sexy.
The 5/10/25% efficiency improvement isn't in an "AI" flagging charts with diagnostic codes. It's in somehow restraining the bureaucracy from chasing these damn projects with their billions of dollars, and getting it back to the very hard and boring work of properly running their hospitals.
I once had dinner with a CFO of a large hospital system. Hospitals are like airlines: $25 billion to open the doors each year even if no one is in the rooms. Hospitals, like airlines, are all about ASM (available seat miles) and RASM (revenue per available seat mile), except with hospitals I'm not sure what the standardized billing unit is, but we might as well denote it in pints of blood.
These massive systems are run with exactly the efficiency and precision which you would expect of a large, modern, American system. That is to say, they are more likely than not to crash and derail on their inaugural voyage. They are not, as we like to say in SV, a meritocracy. And they most certainly will not benefit from more algorithms telling them what they already know they need to be doing but don't have nearly the time to do in the first place.
When I can show up for an appointment and trust that, a) it hasn't been rescheduled without me knowing, b) that the doctor isn't actually at a conference in Madrid this week and no one cared to clear his schedule, c) that they didn't book patients in 20-minute blocks for the whole day when they know full well the average time to clear is 45 minutes, and d) the doctor actually knows my name and has some way to read a summary of my chart that might actually inform him of why I'm there today.... then maybe we can start thinking about how adding some algebraic analysis can help.
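The overbooking complaint in (c) is just arithmetic: booking 20-minute slots against a 45-minute average clear time makes each patient's wait grow linearly through the day. A toy illustration (slot and consultation lengths taken from the comment above, patient count invented):

```python
# Each consultation overruns its slot by (45 - 20) = 25 minutes, so
# the i-th patient of the day waits roughly i * 25 minutes past their
# booked time (ignoring no-shows and breaks).
slot_minutes = 20
actual_minutes = 45
n_patients = 10

overrun = actual_minutes - slot_minutes
waits = [i * overrun for i in range(n_patients)]
print(waits[-1])   # 225 -> the day's last patient waits nearly 4 hours
```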
Are you saying that AI should wait for hospital management and doctors to step their game up, before being involved in diagnosis and medical monitoring?
Medicine is a fast-evolving field. If a doctor takes the time to read up on the latest papers, she doesn't have enough time left to treat patients; if she spends all her time treating patients, she's got no time left for refreshing the latest discoveries. In the end, you are going to see an outdated doctor, because that's how it works. I'd prefer that doctors rely on AI and be up to date while treating patients.
AI might have a huge impact in poor countries where doctors are scarce. The alternative to AI is "no doctors, no help" for many. Just like self driving cars - they don't need to be perfect, just less deadly than human drivers, and in this case, access to medicine = life.
I don't want advertising companies anywhere near my medical records. If I was in a position where the NHS handed my data to Google, I'd be beyond pissed off.
It's worth noting that DeepMind and Royal Free have revised their agreements to address the initial complaints, such as those this paper addresses (note, this paper came out in March).
A.I. expert systems like MYCIN were supposed to revolutionize medicine in the 1980s. They were sort of like the symptom handbooks you might see on WebMD.
If this is true, it paints a rather ugly picture of the future. "1984" comes to mind, but not in the sense that it's cruel - just in the sense that humans don't hold their destiny in their own hands. It looks like even a powerful government like the UK's can't protect its citizens against large corporations in a field which is deeply personal - health.
I must say I never thought about it before, but DeepMind really doesn't seem like a trustworthy entity to me. Owned by an advertising agency and willing to pull deals like this? Not good.
Allow me to play devil's advocate for a second, please. I think it's worth asking: Is health really personal? That is, given the influence of environment, as well as society (i.e., you conform to the behaviours around you), is it time to apply a more holistic lens?
Furthermore, wouldn't prevention also benefit from supplementing the micro (individual) with the macro? The awareness and "counterattack" to obesity comes to mind.
> I think it's worth asking: Is health really personal?
No one asks that because the answer is obvious.
I don't mind the government, or a non profit, using data (in an anonymous and transparent way) to benefit the health of everyone. But giving such personal data to a for-profit mega-corp, whose own processes are opaque and unaudited, is criminal.
What is the obvious answer? It's obvious to me that my ailments and sicknesses aren't unique to me, and there is no point in claiming they are. While bad actors can cause social damage by leaking healthcare details, that points more to the issue that healthcare is taboo in society. Security through obscurity is no way to live.
Healthcare is obviously not personal, it's something we're all in together. Turning healthcare into a me vs them survival game isn't healthy but is the current capitalist solution.
“Can’t protect” is weird language to use when it wasn’t DeepMind who approached the other party. The article says it was the other way around.
To me it looks like the NHS wasn’t really careful, threw a lot of data at DeepMind and they said yes. After getting burned like this I am willing to bet they will be a lot more careful in the future.
Indeed - the article refers to subsequent agreements between DeepMind and the NHS which did fulfil the necessary requirements.
I don't think there's much of a story here apart from basic incompetence in a single instance - mostly on the part of the NHS trust (who really should have known better) in not understanding or applying the necessary governance, and partly DeepMind likewise.
But of course, without the hand-wringing and links to Google and advertising, there's much less of a story to promote...
Please don't take HN into partisan flamewars. Even if you're right, this is just the sort of political flamebait that nothing good comes of here, and it's destructive to the purpose of this site.
Even if someone else was trolling and/or breaking the site guidelines, you're still responsible for not posting like this. Please don't post like this again.