Is having one key per zone worth paying money for? It's on the list of features I'd like to implement for PTRDNS because it makes sense for my own use case, but I don't know if there's enough interest to make it jump to the top of this list.
Although you get to set your own standards, "A bug was discovered after upgrading software" isn't very illuminating vis-à-vis quality. That does happen from time to time in most software.
In my experience, an AMD card on Linux is great unless you want to do something AI-related, in which case there will be random kernel panics (which, in all fairness, may one day go away; then I'll go back to AMD cards, because their software support on Linux was otherwise much better than Nvidia's). There may be some kernel upgrades worth skipping, but running an older kernel is no problem.
What technique are you using for redirecting traffic to region B when region A is offline? And what happens if I have 2 nodes in a region and one goes offline?
For high-availability deployments, we leverage Fly.io's global Anycast network and DNS-based health checks. When a machine in region A goes offline, Fly's Anycast routing automatically directs traffic to healthy machines in other regions without manual intervention.
For intra-region redundancy, we deploy 2 nodes per region in HA mode. If one node fails, traffic is seamlessly routed to the other node in the same region through Fly.io's internal load balancing. This provides N+1 redundancy within each region, ensuring service continuity even during single-node failures.
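For a concrete picture, the relevant parts of a fly.toml might look roughly like the sketch below; the app name, port, and /healthz path are placeholders, and option names should be checked against Fly.io's current docs:

    # fly.toml sketch -- app name, port, and /healthz path are placeholders
    app = "dns-api"
    primary_region = "ams"

    [http_service]
      internal_port = 8080

      # Fly's proxy only routes to machines whose checks pass,
      # so a failed node drops out of rotation automatically
      [[http_service.checks]]
        method = "GET"
        path = "/healthz"
        interval = "10s"
        timeout = "2s"
        grace_period = "5s"

Running two machines per region is then just a scaling decision (e.g. 'fly scale count 2 --region <region>').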
Depends on the setup and what your goals are. Anycast typically takes the shortest route based on network topology. This is particularly nice when you use something like Caddy (thanks to its huge plugin system, you can do a lot directly on the edge) to build your own CDN by caching at the edge, or go all in and use caddy-lua to build apps at the edge. Gluing together DNS systems (health checks, proximity routing + edge nodes) can achieve something similar, but the benefits of being "edge" largely go away as soon as you add an extra hop to a server in a different region.
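As a rough illustration of the Caddy-at-the-edge idea, a Caddyfile on each edge node could look something like this; the hostnames are placeholders, and the 'cache' directive assumes the third-party cache-handler plugin is compiled in (its exact options vary by version):

    # Caddyfile sketch for an edge node -- hostnames are placeholders
    {
        # the cache directive comes from the cache-handler plugin
        # and needs to be ordered explicitly
        order cache before rewrite
    }

    cdn.example.com {
        # serve cached responses straight from the edge when possible
        cache {
            ttl 300s
        }
        # on a miss, fall through to the origin in another region
        reverse_proxy https://origin.example.com
    }

Cache hits stay entirely at the nearest anycast node; only misses pay for the extra hop to the origin region.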
That's an excellent example of what I like to call "good IT hygiene". I too would like to know what kind of tools you use to perform the functional and integration tests, and to execute the various rollouts.
Without going too deeply into details, we use common non-cloud-native platforms such as Jenkins to configure and schedule the tests. Unit tests are often baked into Makefiles, while functional/integration tests are usually written as shell scripts, Python scripts, or (depending on what needs to happen) even Ansible playbooks. This lets us avoid cloud vendor lock-in while still using the cloud to host this infra and the deployment environments themselves.
Edit: we use Makefiles not because we are writing code in C (we are not), but because our tech culture is very familiar with using 'make' to orchestrate polyglot language builds and deployments.
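For illustration only, a stripped-down Makefile along these lines shows how 'make' can front a polyglot repo; the target names, paths, and tools below are made up for the example, not our actual setup:

    # Makefile sketch -- targets, paths, and tools are illustrative
    # (recipe lines must be indented with a tab character)
    .PHONY: build test functional deploy

    build:
    	go build ./services/...
    	npm --prefix webui run build

    test:
    	go test ./...
    	pytest tests/unit

    functional:
    	./tests/functional/run_smoke.sh
    	ansible-playbook tests/integration/site_check.yml

    deploy: build test
    	ansible-playbook deploy/rollout.yml

Jenkins then only needs to invoke 'make test' or 'make deploy', regardless of which languages are involved.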
> P.S., I swear the certification PDFs used to include this information (e.g., https://cloud.google.com/security/compliance/iso-27018?hl=en) but now these are all behind "Contact Sales" and some new Certification Manager page in the console.
This is not good; I can't think of any actual reason to hide those certificates.
For comparison, AWS makes their ISO-27001 certificate available at https://aws.amazon.com/compliance/iso-27001-faqs/ and also cites the certifying agent; most of these agents have a search page where you can find all the certificates they've issued.
I'm using Vector for my own infrastructure and at work; at the time it seemed like the best option for shipping logs to various destinations. Are there any alternatives?
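For anyone unfamiliar with it, a stripped-down vector.toml with one source fanned out to two sinks looks roughly like this (paths, endpoints, and the bucket name are placeholders; sink options may differ between Vector versions):

    # vector.toml sketch -- paths, endpoints, and bucket are placeholders
    [sources.app_logs]
    type = "file"
    include = ["/var/log/app/*.log"]

    # the same stream goes to two destinations
    [sinks.loki]
    type = "loki"
    inputs = ["app_logs"]
    endpoint = "http://loki.internal:3100"
    labels = { job = "app" }
    encoding.codec = "json"

    [sinks.archive]
    type = "aws_s3"
    inputs = ["app_logs"]
    bucket = "my-log-archive"
    region = "us-east-1"
    encoding.codec = "json"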
I'd prefer an application that automatically fills in my data in those junk ATS systems, such as Workday, that pretend to parse my LinkedIn profile or a PDF resume and inevitably make me do all the copy-pasting twice.
A couple of years ago it was so bad that I stopped applying as soon as I saw that Workday crap pop up, regardless of the company.
If the domain of easy applications is automated entries and copy paste, then Workday is indeed the desired tooling. LinkedIn Easy Apply serves the applicant, but I can't imagine any recruiter loves it.
For Workday, use a very simple resume. No columns, no bullet points (use asterisks), no tables.
There's usually an option to upload another file near the end of the form. After the parser has filled in the fields from your plain resume, delete that file and upload the nicer one.
Paddle may be an option, depending on what your SaaS will sell, but if they start pretending that your business falls into a high-risk category, they will demand 3 months of processing statements before allowing you to use their platform.