to me these innovations seem akin to concept cars in the motor industry; there's some utility, until some executive takes one center stage and pisses off most of the core users.
the biggest value in these networks is real user-generated content; you can't beat billions of real users capturing real content and sharing habitually.
even if the wording in the Terms permits certain research/usage, you've got market and political climates to consider.
there was a craze for DNA analysis 10+ years ago, the idea being 'if we can analyze e-commerce transactions, why not the human DNA!' The USP was mostly around health rather than ancestry. That's flopped in my view.
Recent genome sequencing research is revealing that a gene (upstream) doesn't necessarily lead to a health/medical condition (downstream) [1]. I think we need the highest security measures, user education, and regulation when it comes to DNA, medical records, and biometric data (face, fingerprint, iris, voice etc).
Charles Darwin & co documented the theory of evolution well; there's enough ancestry there for most, I think, at least as a solid starting point / platform. My guess would be that if there were more education around the theory of evolution (science), there would be less interest in ancestry services (DNA based), leaving only a medical case for them, and hence demanding greater protection/security.
you said you were ex-Google, so how come the privacy policy surprised you? privacy around user data is a big concern, especially for tech giants. Some companies are forced to leave entire markets to adhere to privacy/user-data policies.
I first joined Google in 2010. Back then user data concerns were about good security and engineering. There wasn't a big bureaucracy around regulatory compliance. GDPR didn't even come into effect until 2018.
Our data wasn't actually "user data" in the sense Google usually deals with. It wasn't data collected incidentally after click-through consent from billions of random people on the internet as they use their computers in daily life. It was ~100 people, many of them employees, who voluntarily participated in a one-hour in-person data collection session after signing ink-on-paper consent forms, who received monetary compensation for the use of their data; the data was used solely for training and evaluating models and not cross-linked with any other data for any other purpose. But Google's privacy bureaucracy wanted to apply the same processes and standards as for user data collected continuously from billions of internet users.
But of course ultimately it didn't matter. The privacy bureaucracy issues were not at all related to the division-wide strategy pivot that killed our team (and many others). It just made my life very frustrating for the year or so before that happened. And I understand why the bureaucracy exists. In today's climate the PR risk to Google from a hit piece headline like "Google scans your eyeballs and we have the leaked data" is much higher than the probable benefit from a small team's engineering work. So they err on the side of slowing things way down. But that doesn't make it any less frustrating for that small team. And it makes me quite pessimistic about the future development of new technology at Google. I expect that their continuing failure to deploy AI anywhere near as good as GPT-4 can be attributed to similar locally rational risk-averse bureaucracy...
by the time a major jurisdiction like the EU brought in GDPR, regulatory compliance was already long overdue (as usual, governments play catch-up with the business world), hence what followed was a rapid rise in "bureaucracy" (I guess). For example, if a company falls short in compliance (say Cambridge Analytica), the issue bubbles up to the network (say FB or ByteDance); if FB fails, it bubbles up to the marketplace (say Apple); if Apple fails at this level, it's so high up that governments get involved, to the point that it could trigger inter-continental trade wars.
Hence we're seeing Apple, ByteDance, Amazon (no doubt Google) etc make regulatory compliance a bigger part of their core business than ever before - prevention over treatment.
Your team's case seems unfortunate, given the narrow scope of the trials. My initial guess is that anything involving eye-scanning could trigger biometric ID (iris recognition) compliance worries. I get your concern about future tech development; I also think businesses (startups) have to find new ways to adapt to these changes/requirements - "bureaucracy" will naturally increase.
it was very difficult to make money in the early days of the internet (90s, 2000s), so ads (via tracking) were a quick-and-easy way to make some money. Then came social networking, which magnified that through network effects ... they got way too comfortable and didn't see it become a major political issue - hence government regulation and enforcement.
that business model is long gone in my view - new businesses will move in and innovate in that vacuum.
Losing the CEO shouldn't push a significant number of your staff to throw hissy fits and jump ship - it doesn't instill confidence in investors, partners, and crucially customers.
as this event turned into a farce, it became evident that neither the company nor its key investors accounted much for the "bus factor" i.e. losing a key person threatened to destroy the whole enterprise.
for me, it's UTC everywhere by default - let clients convert UTC to local time. DBs/upstream systems already do that, and computing is predominantly distributed these days.
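a minimal Python sketch of that pattern (the zone name is just an illustrative example, not from any particular system):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # stdlib since Python 3.9

# Server/DB side: always record timestamps as timezone-aware UTC.
created_at = datetime.now(timezone.utc)

# Client side: convert the stored UTC instant to the viewer's zone
# only at display time. "Europe/London" is a hypothetical example.
local = created_at.astimezone(ZoneInfo("Europe/London"))

# The underlying instant is unchanged; only the representation differs.
assert created_at == local
```

storing naive local times instead would force every downstream system to guess the offset, which is exactly what distributed setups can't afford.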