COVID. People in semi-house-arrest mode, deprived of social interaction, forgot how to be decent human beings. Shortly after I left the US, I vividly recall reading the news about a 6yo shot dead in his mum's car by a road rager on the freeway, and wondering why I ever stayed in the US for so long. Dodging the Gilroy garlic festival shooting because I took over Sunday on-call for a teammate should've been the first sign.
When I can't convince someone in a technical discussion because they've dug their heels into some viewpoint from a blog post penned by a random nobody, I write my own blog post and link it back without mentioning that I'm the author. Suddenly my words carry weight.
Not sure how to feel about it tbh.
I also write to keep snapshots of my ideas and thought processes, regardless of how nonsensical they may be.
It would be part of your architecture's design trade-offs. Disregarding whatever AWS peddles, you can have cellular infrastructure (like Google) but a non-cellular systems architecture built on top of it: still deployed across different cells, but with the dependent services in the critical path residing in different cells or regions depending on resource availability and/or use case. The reroutes can occur at the global load balancer level, or in your own FE service. Or you could just drain the cell from the load balancer and let client retries land on a separate cell. Or you could have the load balancer not fail over at all, which is generally the recommendation to prevent cascading failure.
It's down to your own design, what trade-offs the business finds acceptable, and what it deems as availability.
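The drain/failover options above can be sketched as a toy cell-selection routine. This is a minimal sketch, assuming a static cell list and a drain set; `pick_cell`, the cell names, and the `failover` flag are all hypothetical, not any real load balancer API:

```python
# Toy client-side cell selection, assuming a static cell list and a
# drain set (names hypothetical).
def pick_cell(cells, drained, failover=True):
    """Return the first non-drained cell, or None.

    With failover=False, the client sticks to its home cell (first in
    the list) and fails fast when it is drained -- the conservative
    option that avoids dogpiling the surviving cells.
    """
    if not failover:
        return cells[0] if cells[0] not in drained else None
    for cell in cells:
        if cell not in drained:
            return cell
    return None


cells = ["cell-a", "cell-b", "cell-c"]
print(pick_cell(cells, drained={"cell-a"}))                  # cell-b
print(pick_cell(cells, drained={"cell-a"}, failover=False))  # None: fail fast
```

Whether the no-failover branch is "unavailable" or "working as intended" is exactly the business decision in question.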
Microsoft had been badgering me to upgrade my Windows 10, only the upgrade link it gave me didn't work when clicked. Then I recalled that I only ever needed Windows to run a single application. Installed Ubuntu with Wine and I no longer boot into Microsoft nonsense. There was an annoying 4K scaling issue with Wine, but the very latest update fixed it.
The testing practice I've seen is to have a testdata/ directory with a bunch of textprotos for different test cases. If you're using Bazel, just include the entire directory glob as a data dependency for the unit tests. The test table is then essentially just a list of appropriately named textproto files that are unmarshaled into the proto message under test.
Then again, I've also seen people do those thousand-line in-code string-literal protos, which really grind my gears.
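For the Bazel wiring, the glob-as-data-dependency might look something like this (target names and paths are hypothetical, just to show the shape):

```python
# BUILD -- hypothetical targets; the point is the testdata/ glob.
load("@io_bazel_rules_go//go:def.bzl", "go_test")

go_test(
    name = "parser_test",
    srcs = ["parser_test.go"],
    # Ship every file under testdata/ with the test binary; the test
    # table is then just the list of case-named textproto files.
    data = glob(["testdata/**"]),
    embed = [":parser"],
)
```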
Vim is the editor a senior backend engineer told me to go try for a week, back when I was still relatively junior. Like really try to remember all the shortcuts, not some half-assed attempt followed by laughing about not being able to remember <esc>:wq. If I could demonstrate fluent Vim skills at the end of the week, he'd gift me his TKL mechanical keyboard with a Vim keycap on the escape key.
It's been 9 years since that challenge, and I'm still typing on that Filco keyboard. Good memories. I've since moved on to Neovim for the LSPs, though. It's nice that something I picked up in a week lets me freely edit files on any machine.
That's pretty much it. I think the main issue nowadays is that companies think full-stack engineering means the OG stack (FE + BE + DB) plus CI/CD, infra, security compliance, and SRE.
If a team of 5-10 SWEs has to do all of that while being graded only on feature releases, k8s is going to massively suck.
I also agree that experienced platform/infra engineers tend to whine less about k8s.
Google paid SREs 67% of their hourly rate for tier 1 on-call outside business hours, regardless of whether they were paged. So 12h shifts on weekends were a full day's pay (12 × 0.67 ≈ 8 hours). Convertible to off-in-lieu, too. I had so many off days. Not sure if that's still the case after the layoffs.