I first joined Google in 2010. Back then user data concerns were about good security and engineering. There wasn't a big bureaucracy around regulatory compliance. GDPR didn't even come into effect until 2018.
Our data wasn't actually "user data" in the sense Google usually deals with. It wasn't data collected incidentally after click-through consent from billions of random people on the internet as they use their computers in daily life. It was ~100 people, many of them employees, who voluntarily participated in a one hour in-person data collection session after signing ink-on-paper consent forms, who received monetary compensation for the use of their data, and the data was used solely for training and evaluating models and not cross-linked with any other data for any other purpose. But Google's privacy bureaucracy wanted to apply the same processes and standards as user data collected continuously from billions of internet users.
But of course ultimately it didn't matter. The privacy bureaucracy issues were not at all related to the division-wide strategy pivot that killed our team (and many others). It just made my life very frustrating for the year or so before that happened. And I understand why the bureaucracy exists. In today's climate the PR risk to Google from a hit piece headline like "Google scans your eyeballs and we have the leaked data" is much higher than the probable benefit from a small team's engineering work. So they err on the side of slowing things way down. But that doesn't make it any less frustrating for that small team. And it makes me quite pessimistic about the future development of new technology at Google. I expect that their continuing failure to deploy AI anywhere near as good as GPT-4 can be attributed to similar locally rational risk-averse bureaucracy...
By the time a major jurisdiction like the EU brought in GDPR, regulatory compliance was already long overdue (as usual, governments play catch-up with the business world), hence what followed was a rapid rise in "bureaucracy" (I guess). For example, if a company falls short on compliance (say Cambridge Analytica), the issue bubbles up to the Network (say FB or ByteDance); if FB fails, it bubbles up to the Marketplace (say Apple); and if Apple fails at that level, it's so high up that governments get involved, to the point that it could trigger inter-continental trade wars.
Hence we're seeing Apple, ByteDance, Amazon (and no doubt Google) etc. make regulatory compliance a bigger part of their core business than ever before - prevention in favor of treatment.
Your team's case seems unfortunate, given the narrow scope of the trials. My initial guess is that anything involving eye scanning could trigger biometric ID (iris recognition) compliance worries. I get your concern about future tech development; I also think businesses (startups) will have to find new ways to adapt to these changes/requirements - "bureaucracy" will naturally increase.