
I doubt that the data flow will be contained within these "mini companies". Formally they are the same company, and even with the inefficiency of large corporations, Google surely has the capability to exchange information between them efficiently.

That aside, I do not want to rely on the inefficiency of internal processes for data protection.




The data flow is quite well-contained. I obviously can't offer you proof that you'd be likely to accept, but I do work on GCP and have experience with how data is partitioned.
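For what it's worth, the customer-facing side of that partitioning is visible in how narrowly access can be scoped. A minimal sketch, assuming the google-cloud-storage Python client and entirely made-up bucket and service-account names (and note this shows customer-side IAM, not Google's internal controls):

    # Sketch: scope read access on a bucket to a single service account.
    # The bucket and service-account names here are hypothetical.
    from google.cloud import storage

    client = storage.Client()
    bucket = client.bucket("example-health-phi")

    # Fetch the current IAM policy and add one narrowly scoped binding.
    policy = bucket.get_iam_policy(requested_policy_version=3)
    policy.bindings.append({
        "role": "roles/storage.objectViewer",
        "members": ["serviceAccount:clinical-etl@example-health.iam.gserviceaccount.com"],
    })
    bucket.set_iam_policy(policy)

Only principals named in a binding can read the objects; anything else, including other teams' service accounts, is denied unless granted access elsewhere (e.g. at the project level).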


I believe you.

I do not, however, believe that Google has any incentive to keep it that way, once there's buy-in. What Google's doing here is great, but on the other hand it's Google that's doing it.


Let's assume Google pulls a "gotcha" five years down the road and merges its medical data into its advertising data.

What incentive do doctors and patients have to keep vending the data to Google at that point? And what incentive would other Cloud customers have to trust their data wouldn't get aggregated?

The GCP business model is different from Google's other business models and they know it.


> What incentive do doctors and patients have to keep vending the data to Google at that point?

Inertia, if nothing else. Moving platforms, especially in a highly regulated industry, is no small thing.


That argument seems insufficient: otherwise, inertia would have prevented people from moving onto the platform in the first place.


The problem here is that even if that is the case at the moment, the same organisation still has possession of the data, and those partition walls can probably be moved later if its leadership decides to do so.

Regardless of your personal good intentions and honesty, or anyone else's working there right now, a lot of people are never going to trust an organisation with the track record and potential conflicts of interest that Google has to process sensitive personal data responsibly. Its leaders and the investors backing them made their bed by helping to create the culture of pervasive involuntary surveillance that we all now suffer, and they will forever have to lie in that bed as a result.

It's unfortunate, because clearly there is considerable potential for improving patient outcomes through better use of big data and automation in medicine, and no doubt many of the people working on these kinds of projects have nothing but good intentions. However, until the culture of the technologists operates on the same kind of ethical and legal level as the culture of the clinicians, I don't see how the trust is ever going to be there now.

The individuals doing the work need to be personally responsible for behaving ethically, even when directed to do otherwise by their superiors, just as doctors and real engineers are. Organisations that fail to meet the required standards need to face penalties that are an existential threat, so that their investors stand to lose everything and their leaders risk ending their careers if anyone breaks the rules deliberately or through gross negligence. Without those kinds of obvious, strong incentives, and given how many players in the tech industry have exploited people's data in recent years, I think the barrier may simply be too high to clear.


[flagged]


That crosses into personal attack. We ban accounts that do that, so please don't.

https://news.ycombinator.com/newsguidelines.html



