First things first: don't get me wrong, I do understand your point.
The thing is: you have a system administration problem whether you want it or not (that's a big part of what you're actually paying for when you buy Dropbox, or when you let Google feed on your data).
Now, as a hobbyist, once you start depending on services you set up and manage yourself, it's a good idea to take some time to learn additional tools so you can enjoy your hobbies more. Think of it as "leveling up" your hobby.
Now, on a lighter note, there are simpler ways to have a backup strategy, as long as you're okay with weaker guarantees.
You might skip ZFS and use simple LVM snapshots instead. Or you might not snapshot at all and just do a nightly backup via a cron job: at 3 AM you shut everything down (docker-compose down, if you're using it), rsync to another host, and start everything back up. It's way simpler but you'd only get yesterday's copy if something goes wrong.
But then again, that also safeguards you during upgrades: disable the backup job, perform the upgrade, test everything, re-enable the backup, resume operations. Worst case scenario, you rsync back yesterday's data and resume normal operation.
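The nightly plan above can be sketched as a small script; paths, hostnames, and the compose file location are all hypothetical, and it only prints the commands by default (clear RUN to actually execute them):

```shell
#!/bin/sh
# Sketch of the 3 AM backup plan: stop, copy, restart.
# APP_DIR and DEST are placeholders -- adjust them for your own setup.
set -eu

APP_DIR=/opt/nextcloud                      # where docker-compose.yml lives
DEST=backup@otherhost:/backups/nextcloud/   # rsync target on another machine
RUN=echo                                    # set RUN= (empty) to run for real

$RUN docker-compose -f "$APP_DIR/docker-compose.yml" down   # quiesce services
$RUN rsync -a --delete "$APP_DIR/" "$DEST"                  # copy everything over
$RUN docker-compose -f "$APP_DIR/docker-compose.yml" up -d  # bring it back up
```

Wired into cron with a line like `0 3 * * * /usr/local/bin/nightly-backup.sh`.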
"It's way simpler but you'd only get yesterday's copy if something goes wrong."
I have a restic backup running on that plan instead of rsync, which means I get true backups. The nice thing about it is that this can be integrated into any "docker compose" pipeline you like. I'm generally not as hot on Docker as a lot of people, but it does a nice job of containing household services in a text file that can easily be checked into source control and easily backed up, as long as the service can run in Docker.
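A minimal sketch of that restic variant, assuming an SFTP repository and a password file (both names are made up); like the rsync version, it echoes commands by default:

```shell
#!/bin/sh
# Same shutdown/backup/restart plan, but with restic instead of rsync:
# every run becomes an incremental, deduplicated, encrypted snapshot.
# Repository location and password file are placeholders.
set -eu

export RESTIC_REPOSITORY=sftp:backup@otherhost:/backups/nextcloud
export RESTIC_PASSWORD_FILE=/root/.restic-password
RUN=echo    # set RUN= (empty) to run for real

$RUN docker-compose down
$RUN restic backup /opt/nextcloud                           # take a new snapshot
$RUN restic forget --keep-daily 7 --keep-weekly 4 --prune   # retention policy
$RUN docker-compose up -d
```

Unlike the plain rsync plan, this keeps a history, so a problem discovered late can still be recovered from an older snapshot via `restic restore`.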
It's a pity that Sandstorm started before Docker was a practical option for most people. There's probably room for a Sandstorm 2.0 that "just" uses Docker and provides some grease around setting this stuff up on a system from a top-level configuration file or something. It would go from a massive project in which you have to "port" everything to something hobbyists could set up themselves. It wouldn't be as integrated, but it would work.
Wasn't Sandstorm a bit incompatible with Docker? Notably, it didn't just containerize apps; it communicated over a custom protocol to fully isolate them and limit their permissions, e.g. network/disk access was tightly controlled.
Though perhaps there was a shim layer? E.g. one over normal containers that shimmed network/disk access from the container onto Sandstorm's RPC layer?
Really cool tech regardless, but it carried a big maintenance burden. That's my fear with all these self-hosted apps: everything needs to be maintained for it to feel good to the user, and that seems like such a tall ask.
> you have a system administration problem, whether you want or not
Right. You can pay people to do things for you, or you can do them yourself, but either way the things have to be done, and they should be done by someone who is good at it and has a contract with you -- employment or otherwise.
One has to be able to start somewhere. How do you "get good at it"? You proceed in steps: you challenge yourself, reach an improvement, enjoy that improvement for a while, then challenge yourself again when you see room for more.
But just saying "nah, let somebody else do that" is not what we want here. We're hobbyists; we want to do stuff ourselves, and we enjoy it. Doing sub-optimal work is okay, we'll improve over time :)
Sharing our experiences and procedures here is part of that.
This is true to a point. But eventually, you've gotten all you can from learning and managing a new thing. You can't reasonably make it more efficient and there are no benefits to spending more time learning it. This is when it shifts from a hobbyist's exploration to a routine, mundane task that requires time and attention while offering no _new_ benefit.
For some hobbyists there's comfort in this repetition; for others, it's just a time sink with high opportunity cost.
There is a middle ground, imo. The way apps are designed massively impacts the general requirements of system administration.
What we're seeing is largely centralized applications and the work it takes to manage them. Ignore UX for a second, and imagine you wrote a database on top of a distributed system, a la IPFS, where all modifications are effectively pushed into IPFS. That suddenly boils the system administration tasks down to:
1. make sure my IPFS node is up to date
2. make sure my computer is online
And even those can be heavily mitigated with peers who follow each other.
Now, we're not there yet, and I'm not advertising a better solution. I'm simply saying that part of the administration is a heavy lift simply because of how these apps were written. I think we can do better for the home user.
Secure Scuttlebutt is a lot easier to maintain, for example. The key point is that you simply connect to the internet, publish your posts, and fetch other people's posts; in doing so, other people make backups for you, and you for them. Backing up your key seems like the highest priority... and even that could be eliminated, I imagine, in the P2P model at least. Very low maintenance.
>You can pay people to do things for you, or you can do them yourself, but either way the things have to be done
Nah. I had an elaborate home setup for a while as a hobby and the ongoing hassles (including NextCloud upgrade complexities) just led me to turning it all off and making do with simpler or no solutions.
I’ve learned my lesson about mixing hobbyist tinkering with something your family comes to expect as an everyday convenience: while you, on a random Saturday morning, might be hyped about deploying the latest self-hosted cool stuff, the other you, on some random Thursday at 10pm when everything malfunctions, is gonna hate past-you’s guts for putting you in this position.
> You can pay people to do things for you, or you can do them yourself,
or be the parent of a geek and have it done for you, with 24/7/365 support and training, and remote support for magical things like "hey! A button appeared and I pressed it and now I'm not sure I have internet anymore." Of course, said "customer" has no idea what was on the button. Etc., etc.
Maybe their hobby is not tinkering with Nextcloud and they would rather put that limited time/energy into setting up k8s clusters or developing a web app. Who knows?
The point is, with limited time one has to pick their battles, and maybe setting up zpools and a full Nextcloud docker-compose stack isn't what they want to spend their time on.
Again, I see your point, because I've been there :)
But you're missing an important point of view: do you rely on that data?
If it's a toy project, don't even bother, just ignore all my replies.
If you do rely on nextcloud and the data stored there, having a backup procedure and safeguards for the upgrade process helps a lot.
Next time you perform an upgrade, you can proceed without fear and stress, and much faster (if you run on Docker), and that frees up time to play with Kubernetes clusters and webapp development :)
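If your data lives on ZFS, the pre-upgrade safeguard can be a single snapshot (the dataset name here is made up, and the commands are echoed by default as a dry run):

```shell
#!/bin/sh
# Pre-upgrade safety net with a ZFS snapshot (hypothetical dataset name).
set -eu
DATASET=tank/nextcloud
RUN=echo    # set RUN= (empty) to run for real

$RUN zfs snapshot "$DATASET@pre-upgrade"   # cheap, near-instant checkpoint
# ...perform the upgrade and test everything...
# If the upgrade goes wrong, roll the dataset back in seconds:
$RUN zfs rollback "$DATASET@pre-upgrade"
```

The snapshot costs almost nothing to take, which is what makes the "upgrade without fear" workflow practical.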
Except it's not your call to make, or OP's call to make.
You're already getting quite a piece of software for free; demanding extended long-term support isn't really fair, especially considering that they offer a simple update procedure.
> The point is with limited time one has to pick their battles
Yeah, that's why I pay for a managed K8s instance for my toy projects but do my own sysadmin work on various self-hosted things. The former is not my hobby so I'd rather pay someone else to do it.
This is an inherent limitation of our current tech stack, and unfortunately the cheapest mitigation we have is "take a full system snapshot", a.k.a. do your sysadmin work. The alternatives (LTS releases, etc.) all cost much more money.
> It's literally just branching at one release and fixing bugs in that release for a few years
This takes engineering time, i.e. money. It may also benefit upstream branches, but again, porting patches between branches takes time, especially after a massive refactoring has happened on the latest branch.
I agree that storing data securely is a problem you have whether you want it or not, but I was mostly lamenting that Nextcloud does precious little to help you solve it, as it suffers from the same problem itself (possibly worse, because now you have a data durability problem with more moving parts).