So I need to write some code to keep track of which of A or B is currently live, then something else to template out an nginx configuration that switches between the two. Then I need to figure out how to upgrade the inactive one, test it, flip the port over in nginx, and flip it back if it breaks.
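For concreteness, that script would look roughly like the sketch below. Everything in it is an assumption made up for illustration: the ports (8001/8002), the state file path, the /healthz endpoint, and the nginx include path; the actual upgrade step is left as a placeholder.

```bash
#!/usr/bin/env bash
# Minimal blue/green flip sketch. Assumptions: instance A listens on 8001,
# instance B on 8002, nginx proxies to whatever upstream.conf points at,
# and each instance answers GET /healthz when it is ready.
set -euo pipefail

STATE_FILE=/var/lib/myapp/active          # holds "A" or "B"
declare -A PORT=( [A]=8001 [B]=8002 )

active=$(cat "$STATE_FILE")               # which instance is live now
idle=$([ "$active" = A ] && echo B || echo A)

echo "Upgrading idle instance $idle on port ${PORT[$idle]}..."
# (deploy the new version of the app to the idle instance here)

# Smoke-test the idle instance before pointing any traffic at it.
curl -fsS "http://127.0.0.1:${PORT[$idle]}/healthz" >/dev/null

# Template the nginx upstream and flip traffic over, keeping a backup.
cp /etc/nginx/conf.d/upstream.conf /etc/nginx/conf.d/upstream.conf.bak
printf 'upstream app { server 127.0.0.1:%s; }\n' "${PORT[$idle]}" \
  > /etc/nginx/conf.d/upstream.conf
nginx -t && nginx -s reload

# If the site stops answering through nginx, flip straight back.
if ! curl -fsS "http://127.0.0.1/healthz" >/dev/null; then
  echo "New instance unhealthy, rolling back" >&2
  mv /etc/nginx/conf.d/upstream.conf.bak /etc/nginx/conf.d/upstream.conf
  nginx -s reload
  exit 1
fi

echo "$idle" > "$STATE_FILE"
```

Most of the grep/sed hackery ends up being about that little state file and the upstream include, not the deploy itself.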
At minimum it requires good-enough health checks so that k8s can detect that the new config doesn't work and roll it back automatically; otherwise you're looking at a "no downtime except when there's a mistake" situation.
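Concretely, the "detect" half lives in the Deployment's readiness probes, and the "roll back" half isn't something a Deployment does entirely on its own: a failed rollout generally just stalls while the old pods keep serving. A minimal sketch of closing that loop from the outside, assuming a Deployment named app described by app.yaml:

```bash
# Apply the new config, wait for the rollout to become healthy (which only
# means anything if the readiness probes are meaningful), and undo it if
# it never gets there.
kubectl apply -f app.yaml
if ! kubectl rollout status deployment/app --timeout=120s; then
  kubectl rollout undo deployment/app
fi
```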
...and to really check that the health check and everything else in your .yaml file actually works, you will probably have to spin up another instance just so you can verify your config, unless you like debugging broken configs on live. Well, of course you can always fix your mistake and go "kubectl replace -f", but that kinda goes against the requirement of "no downtime".
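There is a middle ground short of a whole second instance: kubectl can validate a manifest against the live API server and show what would change, without applying anything. It only catches schema- and typo-level mistakes, not a health check pointing at the wrong path, but it's cheap. Assuming the same app.yaml:

```bash
# Validate against the live API server without changing anything.
kubectl apply --dry-run=server -f app.yaml
# Show what would actually change if the manifest were applied.
kubectl diff -f app.yaml
```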
I grant that k8s makes it easier to spin up a new instance for testing.
I'm pretty sure I could write a shell script to do that (something like the sketch above), with a bunch of grep/sed hackery, in under an hour or two. For a single-server personal project, this is probably the simpler approach for me.
Or
kubectl apply -f app.yaml
Which is less complex?