Hacker News | akshayshah's comments

> Customers lost access to virtual machines and disks in the availability zone. The other two zones in the region were less affected, with Google reporting less than one percent of operations experiencing internal errors.

Contrary to the article's title, it looks like this was a zonal rather than regional outage.

GCP incident report (also linked in the article): https://status.cloud.google.com/incidents/e3yQSE1ysCGjCVEn2q...


The incident report also mentions problems with europe-west5-c.


I didn't catch the Developer Voices episode, but it's on my listening list now!

At a low level, I'm guessing that we do many of the same things - batching writes, aggressively colocating and caching reads, leveraging multi-part uploads, and doing all the standard tail-at-scale stuff to manage S3 latency. We have been testing with Antithesis, and we reached out to Kyle Kingsbury.

Zoomed out a bit, a few differences with Warpstream jump out:

- Directionally, we want Bufstream to _understand_ the data flowing through it. We see so many Kafka teams struggling to manage data quality and effectively govern data usage, and we think they'd be better served by a queue that can do more than shuttle bytes around. Naturally, we come at that problem with a bias toward Protobuf.

- Bufstream runs fully isolated from Buf in its default configuration, and it doesn't rely on a proprietary metadata service.

- Bufstream supports transactions and exactly-once semantics [0]. We see these modern Kafka features used often, especially with Kafka Streams and Connect. Per their docs, Warpstream doesn't support them yet.

Disaggregating storage and compute is a well-trodden path for infrastructure in the cloud, and it's past time for Kafka to join the party. I'm excited to see what shakes out of the next few years of innovation in this space.

[0]: https://buf.build/docs/bufstream/kafka-compatibility/conform...


Thanks! btw, y'all should definitely go on Developer Voices and talk about BufStream!


That's a good example! The topics in question are created by the Kafka Streams client library using the standard Kafka APIs, so it works just fine with Bufstream. The Kafka ecosystem takes a thick client approach to many problems, so the same answer applies to many similar Kafka-adjacent systems.

There are, of course, some internal details that Bufstream doesn't recreate. We haven't seen many cases where application logic relies directly on the double-underscore internal topics, though - especially since much of that information is also exposed via admin APIs that Bufstream _does_ implement.


In places I've seen this used, front-end developers can run any query in development environments. In production (and sometimes staging) environments, queries must be allowlisted.

This gives the front-end developers lots of flexibility when initially developing new screens and components. Once the UI is ready to ship, the backend team checks to make sure that performance is acceptable (optimizing if necessary), allowlists the new query/queries, and ships to production.
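A common way to implement that allowlist is to key it by a hash of the query text, checked only outside development. A minimal sketch in Go — the environment names and sample queries are hypothetical:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// hashQuery returns the allowlist key for a query string.
func hashQuery(query string) string {
	sum := sha256.Sum256([]byte(query))
	return hex.EncodeToString(sum[:])
}

// allowlist holds the queries vetted by the backend team,
// typically populated at build or deploy time.
var allowlist = map[string]bool{
	hashQuery("query { user { id name } }"): true,
}

// allowed reports whether a query may run in the given environment.
// Development skips the check entirely; production enforces it.
func allowed(env, query string) bool {
	if env == "development" {
		return true
	}
	return allowlist[hashQuery(query)]
}

func main() {
	fmt.Println(allowed("production", "query { user { id name } }"))  // vetted query: allowed
	fmt.Println(allowed("production", "query { secrets }"))           // unvetted: rejected
	fmt.Println(allowed("development", "query { secrets }"))          // dev: anything goes
}
```

Hashing also lets clients send just the hash instead of the full query text, which is how persisted-query schemes in the GraphQL world typically work.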


https://connectrpc.com

We just joined the CNCF, too!


I’m intrigued by the support for CBOR log output: I haven’t seen CBOR support built into any other widely-used infrastructure software before. Is HAProxy unusual in this regard, or am I just unaware of CBOR support elsewhere?


I honestly don't know; all I know is that we've had requests from users running at very high loads, because the logs are more compact and more efficient to parse. And once you have JSON output encoding, it's not much work to produce another encoding. Maybe other encodings will follow, by the way; we'll see.


Per multiple sources, including the team currently maintaining the Protobuf toolchain within Google, proto3 was largely designed by Rob Pike. Of course the Protobuf wire format is quite a bit older, but some aspects of proto3 and Go's shared semantics (like implicit zero values) do seem to have come from the same mind.


That's a much smaller claim; proto3 schemas are a fairly minor evolution of proto2 schemas, mostly removing features. My impression was always that it removed things that were expensive to support in JavaScript, impossible to serialize to idiomatic JSON, or that the team considered misfeatures nobody should ever use. That's a far cry from the original claim that Go and protobufs were co-designed "in the same meetings".

But even that limited claim is kind of hard to believe. Can you link to one of those multiple sources making that claim?


proto1 was designed by Sanjay and Jeff, and proto2 by Kenton Varda (who later designed Cap'n Proto); I can't remember who designed proto3, but I've never heard Rob Pike credited with it. He did write the very first Go proto binding package, though, along with a separate serialization package called gob and a separate RPC library.


Apparently the author recently sold the project to a company called apilayer: https://lukas.im/2020/01/30/selling-dehydrated/index.html

They plan to keep the project open source and employ Lukas to continue maintaining it.


APILayer = ZeroSSL


ZeroSSL are the only ones providing certificates for IP addresses and free year-long certificates. Kudos to them for disrupting a market almost monopolized by Let's Encrypt. I don't have high hopes, though; the big players will probably kill them as they killed other free certificate issuers. For some reason, the status quo of Let's Encrypt as the only free certificate issuer benefits the big players.


The certificate for zerossl.com is also issued by Let's Encrypt.

https://crt.sh/?q=zerossl.com


Let's Encrypt is hardly the only free cert issuer; even Google offers them: https://security.googleblog.com/2023/05/google-trust-service...


Which ‘big players’?


Jesus, not only do people seriously use this but somebody bought it? The world is insane.


Luggage is fairly expensive and easy to reuse - sizing isn’t personal, and most designs are fairly bland. I’m surprised that the bags themselves aren’t a substantial source of income.


You can get a new suitcase at Ross for $50 if you dig around at a few locations. Why buy a used one?


Probably comes down to how difficult it is to gain access, and whether that's destructive.


You can put whatever locks you want on your bags, but the TSA can and will cut anything they can't easily open.

https://www.tsa.gov/blog/2014/02/18/tsa-travel-tips-tuesday-...


Mind you, "TSA approved" locks are trivially easy to open too. You can just buy the TSA keys for most of them.


I absolutely get that. But this private company is not the TSA, either.


Right, but in order to lose your bag, it has to first pass through the TSA. I doubt a bag would make it all the way to Unclaimed Baggage with the locks intact.


The TSA has X-ray machines to assess a bag's contents.

This company can't sell bag contents without gaining entry into the bag.


I guarantee they have TSA keys


They're for sale on the internet ...


Expanding this for those not familiar with Go's history: Ken Thompson, formerly of Bell Labs, creator of B, and co-creator of Unix, was deeply involved in Go's early days. Rob Pike, also ex-Bell Labs, was likewise one of Go's principal designers.

I can't find a source now, but I believe Rob described Russ Cox as "the only programmer I've met as gifted as Ken." High praise indeed.

