> Customers lost access to virtual machines and disks in the availability zone. The other two zones in the region were less affected, with Google reporting less than one percent of operations experiencing internal errors.
Contrary to the article's title, it looks like this was a zonal rather than regional outage.
I didn't catch the Developer Voices episode, but it's on my listening list now!
At a low level, I'm guessing that we do many of the same things - batching writes, aggressively colocating and caching reads, leveraging multi-part uploads, and doing all the standard tail-at-scale stuff to manage S3 latency. We have been testing with Antithesis, and we reached out to Kyle Kingsbury.
Zoomed out a bit, a few differences with Warpstream jump out:
- Directionally, we want Bufstream to _understand_ the data flowing through it. We see so many Kafka teams struggling to manage data quality and effectively govern data usage, and we think they'd be better served by a queue that can do more than shuttle bytes around. Naturally, we come at that problem with a bias toward Protobuf.
- Bufstream runs fully isolated from Buf in its default configuration, and it doesn't rely on a proprietary metadata service.
- Bufstream supports transactions and exactly-once semantics [0]. We see these modern Kafka features used often, especially with Kafka Streams and Connect. Per their docs, Warpstream doesn't support them yet.
Disaggregating storage and compute is a well-trodden path for infrastructure in the cloud, and it's past time for Kafka to join the party. I'm excited to see what shakes out of the next few years of innovation in this space.
That's a good example! The topics in question are created by the Kafka Streams client library using the standard Kafka APIs, so it works just fine with Bufstream. The Kafka ecosystem takes a thick client approach to many problems, so the same answer applies to many similar Kafka-adjacent systems.
There are, of course, some internal details that Bufstream doesn't recreate. We haven't seen many cases where application logic relies directly on the double-underscore internal topics, though - especially since much of that information is also exposed via admin APIs that Bufstream _does_ implement.
In places I've seen this used, front-end developers can run any query in development environments. In production (and sometimes staging) environments, queries must be allowlisted.
This gives the front-end developers lots of flexibility when initially developing new screens and components. Once the UI is ready to ship, the backend team checks to make sure that performance is acceptable (optimizing if necessary), allowlists the new query/queries, and ships to production.
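A minimal sketch of that gating logic, assuming a hash-based allowlist (the names and the SHA-256 lookup are illustrative, not any particular GraphQL server's API):

```python
import hashlib

# Hypothetical allowlist: hashes of the exact query strings the backend
# team has reviewed and approved for production.
ALLOWED_QUERY_HASHES = {
    hashlib.sha256(
        b"query GetUser($id: ID!) { user(id: $id) { name } }"
    ).hexdigest(),
}

def is_query_allowed(query: str, env: str) -> bool:
    """In development, any query runs; elsewhere, only allowlisted ones do."""
    if env == "development":
        return True
    return hashlib.sha256(query.encode()).hexdigest() in ALLOWED_QUERY_HASHES
```

Hashing the full query text means even a one-character change in production requires re-approval, which is usually the point.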
I’m intrigued by the support for CBOR log output: I haven’t seen CBOR support built into any other widely-used infrastructure software before. Is HAProxy unusual in this regard, or am I just unaware of CBOR support elsewhere?
I honestly don't know. All I know is that we've had demand from users at very high loads, because the logs are more compact and their parsing is more efficient. And once you have JSON output encoding, it's not much work to produce another encoding. Maybe other ones will follow, by the way; we'll see.
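For a feel of why CBOR logs end up more compact, here's a toy encoder covering just enough of RFC 8949 (small maps, short text strings, small unsigned ints) to compare against JSON. Real implementations handle the full spec; this is purely illustrative:

```python
import json

def cbor_tiny(obj):
    """Minimal CBOR encoder for small maps, short strings, and small
    non-negative ints -- a sketch, not a complete RFC 8949 codec."""
    if isinstance(obj, dict) and len(obj) < 24:
        out = bytes([0xA0 | len(obj)])  # major type 5: map, length in low bits
        for k, v in obj.items():
            out += cbor_tiny(k) + cbor_tiny(v)
        return out
    if isinstance(obj, str) and len(obj.encode()) < 24:
        b = obj.encode()
        return bytes([0x60 | len(b)]) + b  # major type 3: text string
    if isinstance(obj, int) and 0 <= obj < 24:
        return bytes([obj])  # major type 0: unsigned int, encoded inline
    if isinstance(obj, int) and 24 <= obj < 256:
        return bytes([0x18, obj])  # major type 0 with one-byte payload
    raise ValueError("unsupported in this sketch")

record = {"status": 200, "method": "GET"}
as_cbor = cbor_tiny(record)
as_json = json.dumps(record, separators=(",", ":")).encode()
print(len(as_cbor), len(as_json))  # CBOR needs fewer bytes for the same record
```

The savings come from replacing quotes, colons, and ASCII digits with length-prefixed binary, and the length prefixes also make parsing cheaper.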
Per multiple sources, including the team currently maintaining the Protobuf toolchain within Google, proto3 was largely designed by Rob Pike. Of course the Protobuf wire format is quite a bit older, but some aspects of proto3 and Go's shared semantics (like implicit zero values) do seem to have come from the same mind.
That's a much smaller claim: proto3 schemas are a fairly minor evolution of proto2 schemas, mostly removing features. My impression was always that it removed things that were expensive to support in JavaScript, impossible to serialize to idiomatic JSON, or that the team thought were misfeatures nobody should ever use. That's a far cry from the original claim that Go and Protobuf were co-designed "in the same meetings".
But even that limited claim is kind of hard to believe. Can you link to one of those multiple sources making that claim?
proto1 was designed by Sanjay and Jeff, and proto2 by Kenton Varda (who later designed Cap'n Proto). I can't remember who designed proto3, but I never heard Rob Pike credited with it. He did write the very first Go proto binding package, though, along with a separate serialization package called gob and a totally separate RPC library.
ZeroSSL is the only one providing certificates for IP addresses and free year-long certificates. Kudos to them for disrupting this market, which Let's Encrypt has almost monopolized. I don't have high hopes, though; the big players will probably kill them as they killed other free certificate issuers. For some reason, the status quo of Let's Encrypt as the only free certificate issuer benefits the big players.
Luggage is fairly expensive and easy to reuse - sizing isn’t personal, and most designs are fairly bland. I’m surprised that the bags themselves aren’t a substantial source of income.
Right, but in order to lose your bag, it has to first pass through the TSA. I doubt a bag would make it all the way to Unclaimed Baggage with the locks intact.
Expanding this for those not familiar with Go's history: Ken Thompson, formerly of Bell Labs and co-creator of B and Unix, was deeply involved in Go's early days. Rob Pike, also ex-Bell Labs, was also one of Go's principal designers.
I can't find a source now, but I believe Rob described Russ Cox as "the only programmer I've met as gifted as Ken." High praise indeed.
GCP incident report (also linked in the article): https://status.cloud.google.com/incidents/e3yQSE1ysCGjCVEn2q...