EDIT: Let me explain a bit more: once you start running a distributed system, you stop caring about OS-, filesystem- and disk-level issues, because you have redundancy at another level. And it makes all the difference. You don't worry anymore: you can always just reboot, hard-reset, or take a node out to investigate. Suddenly you realize it's not a big deal even if some node starts freezing or some process starts OOMing and crashing; you don't care, you just let them.
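To put rough numbers on the "redundancy at another level" point (a toy model, assuming independent node failures and 3-way replication; the 99% per-node uptime is made up for illustration):

    # Toy availability model: data is unreachable only when ALL replicas
    # holding it are down at the same moment (assumes independent failures).
    node_uptime = 0.99            # illustrative per-node availability
    replicas = 3                  # 3-way replication

    p_node_down = 1 - node_uptime
    p_group_down = p_node_down ** replicas

    print(f"one node down:    {p_node_down:.2%}")    # 1.00%
    print(f"whole group down: {p_group_down:.6%}")   # 0.000100%

So a node that's down 1% of the time takes the data with it only about one millionth of the time, which is why you can afford to just reboot now and investigate later (or never).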
If you don't want to care for the OS, who do you expect to keep it in check? Someone has to worry about it: you can try to abstract it away and minimize it, but it's never gone.
I see this line of thought that somehow self-hosted "clouds"/clusters look after themselves, but that's usually not the case.
You'd come close by buying QNAP or Synology hardware; they provide software updates (including the OS). The same goes for buying software solutions like Unraid. I don't know how maintainable FreeNAS is for someone who doesn't want to worry about the OS, though.
> I see this line of thought that somehow self-hosted "clouds"/clusters look after themselves, but that's usually not the case.
They don't really look after themselves, but they do handle failures at the highest possible level, which makes it unnecessary to keep each OS in check. What's critical for a single NAS box is critical for distributed storage only if all the boxes holding some replica hit the same problem at the exact same time. Otherwise you just reboot and move on, and it doesn't matter if it happens again: it causes no downtime.
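A minimal sketch of what that looks like from a client's point of view (hypothetical addresses and wire protocol, roughly how quorum-style stores behave, not Swift's actual API): with N=3 replicas and a write quorum of 2, any single box can be mid-reboot and writes still succeed.

    import socket

    REPLICAS = [("10.0.0.1", 7000), ("10.0.0.2", 7000), ("10.0.0.3", 7000)]
    WRITE_QUORUM = 2  # W: acks needed before a write counts as durable

    def send(addr, payload, timeout=0.5):
        # Try one replica; a frozen or rebooting box just looks like an error.
        try:
            with socket.create_connection(addr, timeout=timeout) as s:
                s.sendall(payload)
                return s.recv(4096)
        except OSError:
            return None  # this replica is out -- fine, unless a quorum is

    def quorum_write(key, value):
        acks = sum(send(a, f"SET {key} {value}\n".encode()) is not None
                   for a in REPLICAS)
        if acks < WRITE_QUORUM:
            raise RuntimeError("write failed: more than N - W replicas down at once")
        return acks  # 2 or 3 acks: one box can be down and nobody notices

Only when two of the three boxes fail at the same instant does the client see an error; a single OOMing process or a hard reset never surfaces at all.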
I actually speak from my own experience: I've run and maintained a distributed key-value store for many years. Although I designed and implemented it myself (and redesigned it a bunch of times), I don't see how experience with other distributed storage systems, like Swift, would be any different.