
Google is absolutely doing the right thing. Anything that can be done to enhance the efficacy of EMR software will help lead to better patient outcomes. As far as I can tell, Google is approaching the problem foremost from the perspective of "how can we make people healthier by making software more effective." Epic and Cerner are simply not doing that. Somewhere in the comments here someone mentioned that most EMR software is a billing system with record keeping attached. Although implementations vary between hospitals, the software is always purchased by administrators, typically with little input from doctors.

In most implementations, neither Cerner nor Epic encourages structured data recording except for billing codes. This means that if a patient comes through the medical system frequently, doctors have to read pages and pages of unstructured text to get a sense of what's going on for the patient. The software is designed to be sold to administrators and, as currently designed, is unquestionably leading to worse outcomes for patients.
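
To make that contrast concrete, here is a rough sketch (Python, with made-up field names; the codes are shown in the style of LOINC and the schema is purely illustrative, not any vendor's actual data model):

    # Unstructured: a free-text note a doctor has to re-read every visit.
    unstructured_note = (
        "Pt seen for follow-up. BP elevated again, 152/96. "
        "Continues metformin. Discussed diet. RTC 3 months."
    )

    # Structured: the same facts recorded as discrete, coded observations.
    structured_entry = {
        "patient_id": "12345",
        "encounter_date": "2019-11-12",
        "observations": [
            {"code": "8480-6", "name": "Systolic blood pressure", "value": 152, "unit": "mmHg"},
            {"code": "8462-4", "name": "Diastolic blood pressure", "value": 96, "unit": "mmHg"},
        ],
        "medications": [{"name": "metformin", "status": "active"}],
    }

    # Structured data can be queried directly instead of re-read as prose,
    # e.g. "every observation where systolic BP exceeded 140":
    high_bp = [
        obs for obs in structured_entry["observations"]
        if obs["name"] == "Systolic blood pressure" and obs["value"] > 140
    ]

That queryability is exactly what is lost when everything lands in free text.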

Google has made a lot of mistakes. If they build something doctors truly want to use and that helps properly organize information, however, and if Google doesn't misuse the data/people aren't unreasonably paranoid about data misuse, it will be a godsend for the industry and it will save lives.

We haven't even begun to talk about the potential for machine learning and statistics to understand which treatments are most effective if data are structured properly. This is unquestionably an interesting and unique case where collecting massive amounts of data and handing it to a trusted, competent, creative, research-oriented authority could have incredible benefit for mankind. I don't trust Cerner or Epic with this mission for a second.




The problem is that Google has the most obvious conflict of interest of all cloud and AI providers when it comes to data use.

Many people simply don't believe anything Google says, and even those who do believe them to some degree know that others won't.

Google has become a reputational hazard in fields that deal with sensitive data such as healthcare.

Some decisions made by current management don't help at all. Instead of separating the advertising business from other activities like cloud and AI, they are moving in the exact opposite direction.

Why? I find this entirely baffling.


What's really necessary are data ownership laws, adequate monitoring (including whistleblower laws with adequate rewards), and enforcement. Replace "Google" with any unknown actor, or any known actor that could be taken over by a bad one, and we return to the same problem.


At the end of the day you're right, but look at it from the perspective of a healthcare decision maker.

What if something questionable happens, even if it's legal, and you're found to have handed over patient data to Google, the largest ad targeting firm on the planet?

Pointing at legalities will not save your reputation or your job in the face of glaring misalignment of interests.

It's like letting your dog guard your employer's sausages after giving it state of the art training in self control.


I understand the need for monitoring and keeping people in check, and perhaps you're right that there should be a physical barrier between certain systems; however, there's still nothing stopping bad actors from infiltrating these systems. I personally think ads should be obliterated, as they're simply shallow and cheap methods of manipulating people. Presidential candidate Andrew Yang's plan is to tax ads at a higher rate than other goods in his VAT strategy and to increase that rate progressively over time, basically as a mechanism similar to a carbon tax countering pollution that's bad for us. Tesla spends no money on advertising; the word of mouth and media attention they get is all earned. Meanwhile, other vehicle manufacturers compete through emotional ads, spending money that increases the cost of their vehicles, perpetuates the ad-industrial complex, and to some degree limits the depth of people's critical thinking about their buying decisions, given the emotional manipulation of ads bombarding them to build a good, relatable feeling and familiarize them with specific products so they feel like a safe, good choice.


I don't think ads should be obliterated. People will always try to influence other people, and ads are just one way of doing that. In my opinion, there are far more nefarious, manipulative, and opaque ways of pursuing the same goal.

Also, ads are currently indispensable for privacy, as weird as that may sound. There is currently no widespread, privacy-preserving form of electronic payment, and I very much doubt that there will ever be one.

What we should do is regulate/restrict ad targeting and make sure that advertising is as transparent as possible.

But none of that has anything to do with Google's reputation issues and conflict of interest. Google should split off its ad business from all other activities. Otherwise they will always remain an advertising company and distrust in everything else they do will only grow.


How about an "ad system" that people actively engage with when they want to discover something, instead of being bombarded everywhere, perhaps without understanding the manipulative effect ads have on them? At minimum it changes the user experience greatly.


We can always wish for things to be less annoying, but the difficult question is how to make it happen without causing more unintended consequences than intended ones.

You have to ask yourself what advertisers would do if you ban those in-your-face super annoying ads. Stop spending money on trying to influence our decisions? I don't think so.


Except other methods will certainly cost more, driving product prices up, and so products that are better, and known as such through word of mouth, will have a competitive advantage on price.


I'm not so sure about that. Ad spending has remained roughly the same as a share of GDP for over a century.

https://www.bloomberg.com/news/articles/2014-03-03/advertisi...

I think transparency is more important than fighting annoyances.


The problem I have with this is the simple fact that an ad company should never know my medical history. Intentions and benefits aside, this is not solvable by Google.

> people aren't unreasonably paranoid about data misuse

Is this a joke? Google is an ad company. And if I look at the US, I think there are peripheral problems here as well.

What really improves the situation in the medical field are large databases of indications.

A remote city or village has a few general practitioners. They cannot know about every form of illness that has been identified. To help them, they need an information infrastructure to support them in diagnostics. That would help people and might save lives.

> We haven't even begun to talk about the potential for machine learning

Then you just have to ask patients whether they want to share their data. That is not asking too much. Until then, medical data should be protected.


GCP is not an ad company


Neither are Chrome or Android (or all of the other Google products that were once siloed from each other through privacy agreements and could have been monetized in other ways).

Now they all do their best to feed your data straight into Google, while other similar products do their best to protect it. How long until GCP says 'let's pool all of our data for machine learning' or 'centralize your network traffic for security analysis', followed by 'We've updated our privacy policy, please leave immediately if you don't agree to the updates'?


Thank you. These people cheering seem to live in an alternate reality where Google has been trustworthy and not a corporate psychopath.


The owning company gets 85% of its income from advertising. It is pretty safe to say that they are an ad company.

It is also very likely that any data collected is used to reinforce and consolidate their main business.


Oh come on. There are contracts with GCP customers in place. There is a massive set of regulations around data misuse. Do you really think Google would risk the company-destroying liability of being caught using data improperly (something that's not in their T&Cs)? Who's to say there wouldn't be a whistleblower tomorrow?

A company can do more than one thing. It probably would be a decent idea to make GCP a subsidiary, just so it's absolutely clear to everyone. Their capabilities in data analysis and ML make this an ideal project.

Like, why is no one saying Amazon might be misusing data from AWS to benefit itself? This just seems like the usual HN bias.


Has any company ever been punished for data misuse (i.e. severely, enough to make other companies think twice before doing the same thing)?

Apple listens to Siri conversations, whatever. Experian leaks private data, no biggie. Experian gets hacked, oops. Grindr sells your sexual orientation data, business as usual.


Different scenario. Voice assistant customers are regular users. The opposing side is a government agency that collects fines.

In a cloud data misuse case, the civil liabilities are actually severe, because you're going up against other cash-rich corporations. Insert the saying about stealing from the poor vs. stealing from the wealthy.


Actually, a lot of people point out that Amazon abuses its storefront data as market research for its own first-party products on a scale nobody else can match. (And then, in turn, gives its first-party products better placement in the store to crush the competition.)

The difference is, Amazon hasn't demonstrated a massive desire to collect and use health data, and fundamentally, isn't an ad company.

The other thing you're forgetting is that Google regularly changes the terms of the agreements it makes. When it bought DoubleClick, it swore that DoubleClick would never be able to access people's Google account data and that there would be a clear firewall of ad personalization information. Up until they changed their terms of service so that they could: https://www.propublica.org/article/google-has-quietly-droppe...

Google literally embodies the classic "I am altering the deal. Pray I do not alter it any further." Just because they claim they won't abuse the health data they are zealously collecting today doesn't mean they won't change their mind tomorrow.

An ad company should not be allowed to hold your health data.


Amazon using its marketplace data for its own purposes, while obviously also problematic, is not the same thing as going behind its AWS clients' backs and using their data. The data literally belongs to the client.

I'm pretty sure they only access client data as allowed by the contracts, for maintenance for example. Likewise for GCP and Azure.

Regarding your second argument: DoubleClick is on the consumer side. That's why GCP has its own CEO, its own buildings, and its own org in general. They are separate from the consumer side.

Put yourself in Google's shoes. They can make tons of money by revolutionizing the health care sector while improving patient care at the same time. Why would they fuck it up by feeding data to the ad side illegally? The risk is too high. No one would ever trust them again, and they'd probably be sued out of existence.

Is it not feasible that they are simply trying everything they can to become less reliant on ads? They have stated several times that the cloud side will be the dominant revenue source in the future. Is that possible? No idea. But strategically it makes sense to push cloud with all possible force.


I highly doubt this. A company of that size, with separate P&L divisions (e.g. GCP), operates like a collection of mini companies.

The executive of one 'mini company' is free to make their own decisions in whatever way is good for their division and their OKRs.


I doubt that the data flow will be contained within these "mini companies". Formally they are the same company and even with the inefficiency of large corps, I doubt that Google doesn't have the capabilities to efficiently exchange information.

That aside, I do not want to rely on the inefficiency of internal processes for data protection.


The data flow is quite well-contained. I obviously can't offer you proof that you'd be likely to accept, but I do work on GCP and have experience with how data is partitioned.


I believe you.

I do not, however, believe that Google has any incentive to keep it that way, once there's buy-in. What Google's doing here is great, but on the other hand it's Google that's doing it.


Let's assume Google pulls a "gotcha" five years down the road and meshes its medical data into its advertising data.

What incentive do doctors and patients have to keep vending the data to Google at that point? And what incentive would other Cloud customers have to trust their data wouldn't get aggregated?

The GCP business model is different from Google's other business models and they know it.


> What incentive do doctors and patients have to keep vending the data to Google at that point?

Inertia, if nothing else. Moving platforms, especially in a highly-regulated industry, is no small thing.


That argument seems insufficient, or inertia would prevent people from moving onto the platform in the first place.


The problem here is that even if that is the case at the moment, the same organisation still has possession of the data and those partition walls can probably be moved later if the leadership of the organisation decide to do so.

Regardless of your personal good intentions and honesty, or anyone else's working there right now, a lot of people are never going to trust an organisation with the track record and potential conflicts of interest that Google has to process sensitive personal data responsibly. Its leaders and the investors backing them made their bed by helping to create the culture of pervasive involuntary surveillance that we all now suffer, and they will forever have to lie in that bed as a result.

It's unfortunate, because clearly there is considerable potential for improving patient outcomes through better use of big data and automation in medicine, and no doubt many of the people working on these kinds of projects have nothing but good intentions. However, until the culture of the technologists operates on the same kind of ethical and legal level as the culture of the clinicians, I don't see how the trust is ever going to be there now. The individuals doing the work need to be personally responsible for behaving ethically, even if they are directed to do otherwise by their superiors, like doctors and real engineers. Organisations that fail to meet the required standards need to face penalties that are an existential threat, so their investors stand to lose everything and their leaders can end their own careers if anyone breaks the rules deliberately or through gross negligence. Without those kinds of obvious, strong incentives, with the way so many players in the tech industry have exploited people's data in recent years, I think the barrier may simply be too high to clear.


[flagged]


That crosses into personal attack. We ban accounts that do that, so please don't.

https://news.ycombinator.com/newsguidelines.html


> In most implementations, neither Cerner nor Epic encourages structured data recording except for billing codes.

> The software is designed to be sold to administrators and, as currently designed, is unquestionably leading to worse outcomes for patients.

disclaimer: I’m a former Epic employee who worked on their inpatient app.

Do you have data or literature to support these statements for a typical Epic implementation or is this just anecdotal and biased?

Although there were ALWAYS some issues and pushback from clinicians during implementation and go-live, Epic's software and data flow literally revolve around patient care, not billing. I don't recall exactly which apps came first when Epic was born in the 1970s, but I believe their billing and admin apps came much later than many of the clinical apps. It's actually why they still use a backend on top of MUMPS and Caché: the way data is stored and flows in these systems is organized literally around the core patient data structure, which should theoretically make a clinician's job of reviewing a complete patient record easier (not make billing easier).
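
To illustrate what I mean by "around the core patient data structure", here is a purely hypothetical sketch (not Epic's actual Chronicles schema), loosely analogous to how a MUMPS-style global nests everything under one patient:

    # Hypothetical patient-centric hierarchical record.
    patient_record = {
        "PAT-001": {
            "demographics": {"name": "Doe, Jane", "dob": "1957-03-14"},
            "allergies": ["penicillin"],
            "encounters": {
                "2019-11-12": {
                    "notes": ["Follow-up visit, hypertension."],
                    "orders": ["lisinopril 10 mg daily"],
                },
            },
        },
    }

    # Pulling a complete chart is one traversal from the patient key,
    # rather than joins across billing-oriented tables.
    chart = patient_record["PAT-001"]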


I worked in health IT for a few years. I have not seen a single piece of software or spec that wasn't up for a major revamp. Well, maybe SNOMED.


Generally agree with your sentiments, with one correction: Scheduling/Revenue apps came first (Cadence was the first app to go live in 1983) with clinical apps launching in the 90s (Ambulatory in 1992 and Inpatient in 1998).

Source: staff meetings (former Epic IS) and this old article: https://isthmus.com/news/cover-story/epic-systems-an-epic-ti...


I try hard not to give my data to Google, for a lot of reasons. As a patient, having my healthcare provider hand my data over for them to monetize in God-knows-what ways is...undesirable.


This is where things get mixed up. Epic is suggesting that using Google Cloud is somehow equivalent to handing data over to Google so that Google can work with that data.

These are absolutely, extremely different things. Assuming that Google will somehow break into your VMs or GKE clusters to get data out and monetize it is crazy, of course, but Epic tries to suggest exactly that.


Google is an unethical technological giant based in a major power (read: zero consequences). The data they would receive from this is extremely sensitive, and therefore lucrative. At the very least, their government would be interested in this information and they would be a good target for such interest simply because of their size and severely lacking ethical compass.

All these things taken together means it is not crazy to imagine that this is the case. If it is not the case currently, we have to assume it will be the case soon and that when it does become the case, historic data is still available. They are therefore always a bad choice.


I was actually referring to Google teaming up with Ascension to “share data”, which is what the second half of the article is about (and thus presumably related to this decision?). I don’t think Google would be so brazen to do what you’re suggesting, their typical playbook is to hide behind euphemisms and reassurances about the security and anonymity of your data, while providing no evidence of either, and then quietly profiting from it.


| is crazy, of course

Sure, and what company would drive around the entire planet collecting people's private info and WiFi networks? That's obviously crazy, lmao.


It isn't the same as going against the T&Cs and getting sued by every customer they have.


Apple?


> if Google doesn't misuse the data/people aren't unreasonably paranoid about data misuse

It's pretty hard to trust Google from where I'm standing right now. So many incentives to do the wrong thing.

Edit: I can't help but say they are still better than most... at least for now. They should get some credit for their track record. It'd just be nice to know how their incentives are still aligned with end consumers, and how we would know if that changed.


-> most EMR software is a billing system with record keeping attached

Not really. Maybe that's how they started, but that's not what they have evolved into.

-> the software is always purchased by administrators, typically with little input from doctors.

Again, maybe that's how it used to happen, but everywhere I've worked or consulted has had a myriad of doctors, nurses, etc. providing input. The last hospital I worked for wouldn't even let us change a single field in the EMR without it being approved by a committee of nurses.

-> In most implementations, neither Cerner nor Epic encourages structured data recording except for billing codes.

This is not true at all. In fact, it is the clinicians' preference to write or dictate notes that is the reason for this. Both Epic and Cerner provide digital forms that allow for selections from lists or checkboxes which are then stored as discrete data elements in their respective databases. I'm not going to say either are perfect and - surprise! - clinicians hate change so they always hate their EMR, but it's not for the reasons you state.

I'm curious which division of Google you work for and how much experience you have working for an EMR vendor or healthcare system.


You sound like a “G can do no wrong” fanboy...

Google is absolutely NOT doing the right thing here. They are stealing people's private medical data without their consent.

|and if Google doesn't misuse the data -Don't worry, they will, like every piece of data they've ever gathered.

|people aren't unreasonably paranoid about data misuse, -They are, because Google has shown they can't ever be trusted. They view fines as the cost of doing business, so why should they care?


>In most implementations, neither Cerner nor Epic encourages structured data recording except for billing codes. This means that if a patient comes through the medical system frequently, doctors have to read pages and pages of unstructured text to get a sense of what's going on for the patient. The software is designed to be sold to administrators and, as currently designed, is unquestionably leading to worse outcomes for patients.

So instead, doctors need to spend hours typing up their notes using just the right code or term so that it's structured? As I understand it, doctors already find having to type things into EMRs a giant time sink that takes away from their ability to care for patients.


Even to the extent that it creates an entirely new job area! (medical scribes)


> ...approaching the problem foremost from the perspective of "how can we make people healthier by making software more effective."

The perspective of “how can we extract and exploit more personal data”, FTFY.

We are long past the point of giving Google any benefit of the doubt. They keep attempting to exfiltrate data from the NHS as well.


I agree with your comments about how EMRs are bought by executives, not doctors. This is exactly the same as companies buying SAP: everyone else is stuck implementing it for two years and then hates using it daily. I work in healthcare and have literally never met a single person who uses Epic and doesn't absolutely HATE it!

Healthcare can't be fixed with software or cloud providers. Healthcare is a problem of four groups with misaligned incentives. Any "solution" usually only considers one or two of these groups. The only way to solve healthcare in the US is to push regulations via CMS (Medicare). Medicare is such a huge piece of the pie that they drive the ship. Anything they mandate will move to commercial payors since they need to implement it everywhere anyway.

Healthcare is like homelessness: it is a political problem, not a technical one, and it is not a problem without solutions. We know what will fix it; we just don't have the will to do so.


I'm in healthcare software, worked for Epic, and am now trying to solve part of the problem. I'd love to learn more about what you think are the four corners that need to be solved.


The four groups are the payors, the health systems/providers, the employers, and the patients (employees). It is almost impossible to balance four competing goals.

The solution is to reduce the number of competing concerns. We need to make employer-provided healthcare illegal (go back to pre-WWII) and force competition on the open market. We can still let you buy your own coverage with pre-tax HSA dollars. We need to move to reference-based pricing: payors can't charge more than 1.3x Medicare. This also solves the "out of network" bullshit; it is all the same, so just pay. We also need to separate wellness/preventative care from "health insurance". The former you know you will, and should, use annually; the latter is protection against catastrophe. Unfortunately, today we treat them as the same thing. I don't think Medicare for All works, but I do think it might work if it were for primary care only. Specialty care and urgent/emergency care could be handled by private payors.


I just don't understand how Google, as a server and storage provider for any EMR, could use the data themselves for anything. Because HIPAA. Yes, it's possible to anonymize patient data for use in research. But the anonymizing techniques are subject to review by the researcher's Institutional Review Board. They review the purpose of each study, the use of the data, and the ways the study preserves patient confidentiality.

Institutional Review Boards are responsible for ethical scrutiny of research involving human subjects. Informed consent, safety, etc.
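
For a sense of what the de-identification step looks like, here is a toy sketch of the HIPAA "Safe Harbor" approach (field names are hypothetical; real pipelines also have to scrub free-text notes, dates, and small geographic areas, which is much harder):

    # Toy sketch of Safe Harbor de-identification: drop direct identifiers
    # (the rule lists 18 categories) and coarsen dates to the year.
    DIRECT_IDENTIFIERS = {
        "name", "street_address", "phone", "email", "ssn",
        "medical_record_number", "health_plan_id", "ip_address",
    }

    def deidentify(record: dict) -> dict:
        """Return a copy of the record with direct identifiers removed."""
        out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
        if "birth_date" in out:  # keep only the year
            out["birth_year"] = out.pop("birth_date")[:4]
        return out

    # deidentify({"name": "Jane Doe", "birth_date": "1957-03-14", "dx": "I10"})
    # -> {"dx": "I10", "birth_year": "1957"}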

As long as HIPAA is intact, I'm not sure any panic about misuse of patient data is justified, by any cloud provider.

That being said, systems like Cerner and EPIC definitely serve the interests of fee-for-service medical providers. Read this article by Atul Gawande. https://www.newyorker.com/magazine/2018/11/12/why-doctors-ha...


You seem to be under the impression that Google is building an EHR system. It is not. It is just losing hundreds of GCP contracts due to incompetence.


What is Google doing, exactly?



