The problem here is that even if that is the case at the moment, the same organisation still holds the data, and those partition walls can probably be moved later if the organisation's leadership decides to move them.
Regardless of your personal good intentions and honesty, or anyone else's working there right now, a lot of people are never going to trust an organisation with Google's track record and potential conflicts of interest to process sensitive personal data responsibly. Its leaders and the investors backing them made their bed by helping to create the culture of pervasive involuntary surveillance that we all now suffer, and they will forever have to lie in that bed as a result.
It's unfortunate, because clearly there is considerable potential for improving patient outcomes through better use of big data and automation in medicine, and no doubt many of the people working on these kinds of projects have nothing but good intentions. However, until the culture of the technologists operates on the same kind of ethical and legal level as the culture of the clinicians, I don't see how the trust is ever going to be there now. The individuals doing the work need to be personally responsible for behaving ethically, like doctors and real engineers, even if they are directed to do otherwise by their superiors. Organisations that fail to meet the required standards need to face penalties that are an existential threat, so that investors stand to lose everything and leaders stand to end their own careers if anyone breaks the rules deliberately or through gross negligence. Without those kinds of obvious, strong incentives, and given the way so many players in the tech industry have exploited people's data in recent years, I think the barrier may simply be too high to clear.