Migrating everything to the same externally-available cloud infrastructure is exactly the thing that Google needs to do, but has been failing at for going on a decade now. Yes, I'm bitter about it.
Isn't it actually the opposite? That GCP runs on the internally-available cloud infrastructure at Google?
How would they turn that inside out? What would GCP actually run on if they ran GCP on GCP? Or do you mean run search/gmail/youtube on GCP? Isn't their internal infra basically kubernetes in all but name?
I've heard that Google is bad at dogfooding its own cloud infrastructure, and even that Borg is still dominant over Kubernetes in their internal infra, but I don't have any insider knowledge. If true, this is one thing Amazon is vastly superior to Google at.
Internal teams are using some GCP services, but overall nearly all teams stick with Borg. Many key services like Search are not able to switch to GCP because of the performance degradation.
Yeah, there was a post about that on HN that got a lot of attention a week or two ago. I think that letter/essay was written before Google Cloud Platform was really a thing; it would be interesting to know how things have changed since ~2006.
It depends where you are in the stack. At a low level they're using the same data centers, but the public stuff like GCP and Kubernetes is a rewrite of earlier internal software and migration of higher-level servers to a different stack is nontrivial.
Amazon Cloud itself is not externally available, and runs on parts of its infrastructure that are non-public. It's like GCP in that regard, except that most of the rest of Amazon's products are then built on top of this cloud, whereas with Google that's not the case: there's a parallel internal-only infrastructure that every Google product you've ever heard of (Search/Gmail/YouTube/etc.) runs on.
One of the more interesting things I've observed over the past few years is the incremental effort to get AWS teams "lower in the stack" onto native AWS offerings. This makes sense: the less parallel or underlying infrastructure you have to maintain to scale and support the public cloud, the more time and attention you can devote to public services. Many (most?) new AWS services are fully native AWS (including the one I work on :)), and design and operational readiness reviews require justification for not using native AWS rather than the opposite.
At least in Azure, the "platform" that Azure runs on doesn't have anything like a managed SQL database or a distributed queue service; it's strictly a compute platform. I'd imagine GP is suggesting that only the things that absolutely must run on internal-only infra do so, and the rest, like YouTube or Gmail, run purely on top of GCP (which only happens to be on top of internal infra).
The claim is that some of the data was migrated to publicly available systems. I'm guessing that quite a bit of it is stored in Amazon-specific systems, for two reasons. First, availability: if the Amazon product catalog depends on Aurora, then when Aurora has an issue, so does Amazon retail. Second, custom solutions that were developed before modern alternatives existed have insurmountable inertia in a big company.
No, not at all, but it rarely goes down at the same time. I mean, I could be wrong, but that's my read of the announcement. Another issue with the managed services is that they don't provide total control over configuration, which is a complete showstopper for a large high-performance data store.