The reason to target k8s on cloud VMs is that cloud VMs don't subdivide as easily or as cleanly on their own, and managing them is a pain. K8s is an abstraction layer for that - rather than building whole machine images for each product, you create lighter-weight Docker images (how lightweight is a point of some contention), and you only have to install your logging, monitoring, etc. once.
Your advice about bigger machines is spot on - K8s's biggest problem is how relatively heavyweight the kubelet is, with memory requirements of roughly half a gig. On a modern 128 GB server node that's a reasonable overhead, and for small companies running a few workloads on 16 GB nodes it's a cost of doing business, but if you're running 8 or 4 GB nodes, it looks pretty grim for your utilization.
You can run pods with podman and avoid the entire k8s stack, or even use minikube on a machine if you want to. Now that rootless is the default in podman[0], the workflow is even more convenient, and you can even use systemd with isolated users on the VM to provide more modularity and separation.
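For example, a minimal rootless sketch (the image and the POSTGRES_PASSWORD value are only illustrative):

    # create a pod and add containers to it, all as an unprivileged user
    podman pod create --name pgdemo-pod
    podman run -d --pod pgdemo-pod --name pgdemo-db \
      -e POSTGRES_PASSWORD=example docker.io/library/postgres:16
    podman ps --pod          # shows the containers grouped under the pod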
It really just depends on whether you feel you get value from the orchestration that full k8s offers.
Note that on k8s or podman you can get rid of most of the 'cost' of that virtualization for single-placement and/or long-lived pods by simply sharing an emptyDir or volume between pod members.
It is easy enough to test for yourself that sharing unix sockets that way performs so close to native that there is very little performance cost, and a lot of security and workflow benefit to gain.
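A rough sketch of what that looks like in the k8s-style yaml (the app image is hypothetical, and the socket path is the one the official postgres image uses by default, as far as I know):

    apiVersion: v1
    kind: Pod
    metadata:
      name: pgdemo-pod
    spec:
      volumes:
        - name: pg-sock
          emptyDir: {}                           # shared scratch dir, lives with the pod
      containers:
        - name: db
          image: docker.io/library/postgres:16
          env:
            - name: POSTGRES_PASSWORD
              value: example
          volumeMounts:
            - name: pg-sock
              mountPath: /var/run/postgresql     # postgres drops its unix socket here
        - name: app
          image: example.com/myapp:latest        # hypothetical application image
          volumeMounts:
            - name: pg-sock
              mountPath: /var/run/postgresql     # app connects over the socket, not TCP

Point the app's connection string at the socket directory instead of a host:port and the whole loopback/TCP hop disappears.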
Podman is daemonless, easily rootless, and on a Mac even lets you ssh into the local linux VM with `podman machine ssh`, so you aren't stuck with the hidden abstractions of Docker Desktop, which hides that VM from you. There is a lot of value in that.
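The Mac flow is basically (a sketch, using default machine settings):

    podman machine init      # create the small linux VM podman uses on macOS
    podman machine start
    podman machine ssh       # drop into that VM directly, nothing hidden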
Plus you can dump a k8s-like yaml to use for the above with:
    podman kube generate pgdemo-pod
So you can gain the advantages of k8s without the overhead of the cluster, and there are ways to launch those pods from systemd, even as a local user with zero sudo abilities.
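A sketch of that flow, assuming podman 4.4+ with Quadlet for the user-level systemd part (paths and unit names are just illustrative):

    # dump the pod definition next to a Quadlet unit, no sudo anywhere
    mkdir -p ~/.config/containers/systemd
    podman kube generate pgdemo-pod > ~/.config/containers/systemd/pgdemo-pod.yaml

    # ~/.config/containers/systemd/pgdemo.kube
    [Kube]
    Yaml=pgdemo-pod.yaml

    [Install]
    WantedBy=default.target

    # still as the unprivileged user:
    systemctl --user daemon-reload
    systemctl --user start pgdemo.service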
I am using it to validate that upstream containers don't phone home, by producing pcap files, and I would also typically run the above with no network on the pgsql host, so it doesn't have internet access.
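The no-network part is just (using the generated yaml from above):

    # no network at all: the database container cannot reach the internet
    podman kube play --network none pgdemo-pod.yaml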
IMHO, because k8s pods are the minimal unit of deployment, the point that in their general form they are just a collection of containers with specific shared namespaces gets missed.
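podman makes that explicit; which namespaces the pod shares is just a flag (the selection here is a typical one, shown only to make the point):

    # a 'pod' is just containers joined into a chosen set of shared namespaces
    podman pod create --name demo --share net,ipc,uts
    podman pod inspect demo | grep -i shared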
As Red Hat gave podman to the CNCF in 2024, I have shifted to it, so I haven't checked whether rancher can do the same.
The point is that you don't even need the complexity of minikube on VMs; you can use most of the workflow even for the traditional model.