Disagree. I've been running Ceph in Proxmox for years on a small cluster of three used R620 servers without any SSDs.
It’s just worked. I’ve lost two of the machines to memory failures at different points in time, and the k8s clusters sitting on top didn’t fail; even the Postgres databases running with cnpg stayed ready and available during both hardware failures.
Oh sure, it works, not denying that. My point is that performance isn't great, and with a small cluster it doesn't take much to make everything fall over because your failure domains are huge (in your case there are only three; see the sketch below).
That said, it also depends on how important your environment is; homelabs don't usually require five nines.
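To make the failure-domain point concrete, here's a rough sketch assuming a 3-node cluster with the usual replicated-pool defaults (size 3, min_size 2); the pool name is made up:

    # Hypothetical pool name. With 3 hosts and a host-level CRUSH rule,
    # every host ends up holding one copy of every PG.
    ceph osd pool get my-pool size        # -> size: 3
    ceph osd pool get my-pool min_size    # -> min_size: 2

    # Lose one node: every PG is degraded at once, and there is no fourth
    # host to re-replicate onto, so the whole cluster stays degraded.
    # Lose a second node: PGs drop below min_size and client I/O pauses.
    ceph status
    ceph health detail

That's what I mean by huge failure domains: one box is a third of the cluster, so every failure is a cluster-wide event.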
I'm a big Proxmox fan, but I dislike how easy it makes Ceph to run (or rather, how easy it makes it look). Ceph can fail in so many ways (I've seen a lot of them), and most people who set up a Ceph cluster through the UI are going to have a hard time recovering their data when things go south.
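For what it's worth, when a UI-built cluster does go sideways, the first things I'd reach for are plain Ceph CLI checks rather than anything in the Proxmox UI; just a sketch of the usual triage:

    ceph status          # overall health, mon quorum, degraded/misplaced counts
    ceph health detail   # expands each warning into the affected PGs/OSDs
    ceph osd tree        # which OSDs are down/out and on which host
    ceph pg dump_stuck   # PGs stuck inactive/unclean/stale that need attention

If you've never run those before the outage, working out what they're telling you mid-incident is where people get stuck.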