I really wish there were an LTS release that was supported for at least 2 years (just bugfixes, no new features). I self-host my own instance, and I really just want to set it and forget it.
I don't mind doing low risk patches every few months or weeks, but I don't want to do a major version upgrade every 4-6 months.
I did my last major version upgrade only 15 months ago, and I am now 4 major versions behind, which means:
1) I upgrade from 17->18->19->20->21 and hope nothing breaks!
2) Or I start over with the latest version
I like that open source moves fast, but at some point, I just want to stop fiddling with it and let it run with minimal maintenance.
> 1) I upgrade from 17->18->19->20->21 and hope nothing breaks!
I took a similar path (started from 18, iirc) and nothing broke.
But there's a caveat: I have some safeguards in place:
1. Nextcloud has its own dataset in a ZFS zpool. I take snapshots hourly, and I took one just before upgrading (minimal sketch after this list).
2. I run Nextcloud and its own PostgreSQL via docker-compose. The docker-compose file, along with the configuration and data, lives in Nextcloud's own dataset. This means OS-level dependencies are not a problem for me, and it also means reverting the whole thing to before the upgrade is very easy: just roll back to the pre-upgrade snapshot.
3. (Unrelated) Snapshots are replicated to another location, which means I could perform the upgrade on that other site and switch the DNS over once it's done and I'm satisfied. I don't do that; for my personal use, 1-2 hours of downtime is okay.
4. I let Nextcloud perform its auto-upgrade procedures, take a snapshot after every upgrade, and at the end I perform the tasks suggested on the self-assessment page (adding indexes, changing column types etc.).
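To be concrete, a minimal sketch of that pre-upgrade ritual, assuming a hypothetical dataset named tank/nextcloud with the compose file living inside it (all names are illustrative, not gospel):

# stop the stack so the snapshot is consistent
cd /tank/nextcloud && docker-compose down
# named snapshot right before the upgrade
zfs snapshot tank/nextcloud@pre-upgrade
# pull the new images and bring everything back up
docker-compose pull && docker-compose up -d
# if it goes wrong: stop, roll back, restart
# (-r discards any newer snapshots, e.g. the hourly ones)
docker-compose down && zfs rollback -r tank/nextcloud@pre-upgrade && docker-compose up -d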
You don't have a nextcloud problem, you have a system administration problem.
That's very true. I run quite a few services on my local network for my family (wireguard, nextcloud, homeassistant, frigate, pihole, jellyfin, bitwarden, ...).
While I enjoy setting up and playing with these services, I want to think about managing them as little as possible, as I don't want to spend all my free time being a system admin.
Also, often a new release is not just a system admin task. Sure, it may not be _that_ hard to do a full backup, pull new docker images, spin them up and verify everything. The time sink comes from keeping track of all the releases of all the different projects, reading up about changes, how the upgrade process works, and so on.
On top of that, my family has become reliant on several of these services, especially nextcloud and bitwarden. The last thing they want is major changes. Long-term stability with minimal change can be a feature!
I am in exactly the same situation as you (I did not know Frigate, but I do not have cameras anyway - otherwise you listed my main systems).
I managed to reduce administration to a minimum by using watchtower to automatically upgrade my containers, mostly using the :latest tag (see the sketch after the list below).
This bit me only twice in a few years:
- with the 19-20 migration of Nextcloud, I got one big blank screen when logging in, but synchronization was still working. Turns out a new default app (something about dashboarding) was causing it. Googling and fixing took an hour.
- with one upgrade of Home Assistant, my devices were suddenly not available anymore; there was a problem with the upgrade which they fixed quickly, but I had already upgraded. Reading the docs/forum and fixing took an hour.
I can live with these two hours across two or three years.
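For reference, the watchtower part is a single container; a minimal sketch (image name and environment variables as documented by the containrrr/watchtower project; the interval is just an example value):

# watchtower drives the local docker daemon through its socket
docker run -d --name watchtower \
  --restart unless-stopped \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e WATCHTOWER_CLEANUP=true \
  -e WATCHTOWER_POLL_INTERVAL=86400 \
  containrrr/watchtower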
I back up /etc on my server with Borg and I know that, worst case, I will recover. I tested this DRP two weeks ago on bare metal (recovering to an empty VM from scratch, that is, starting from an Ubuntu ISO and ultimately fetching my encrypted backups from a friend's system - it really helped to highlight what I was missing).
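A minimal sketch of that kind of Borg setup (repository location and retention numbers are hypothetical):

# one-time: create an encrypted repository on a remote machine
borg init --encryption=repokey friend@backuphost:borg-repo
# nightly: archive /etc; {now} expands to a timestamp
borg create --stats friend@backuphost:borg-repo::etc-{now} /etc
# keep the repository size bounded
borg prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6 friend@backuphost:borg-repo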
I'm currently testing a new appliance setup with Nextcloud that uses containers as the default for everything: if a container can be moved to an empty VM as-is, nothing gets deleted, since I never touch the data itself. I would be really happy if this worked out.
Could you please elaborate a bit on the appliance?
I use a home-grade PC with Ubuntu LTS on which there is nothing except for:
- docker
- borg (backup program)
- wireguard (VPN)
- sshd
I then copy /etc/docker from backup, mount some external disks with the data (backed up or not, depending on whether I care about it), reboot, and I am done (sketch below).
My recovery took one hour, from starting the download of the ISO to being back online.
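A minimal sketch of that restore path, reusing the hypothetical Borg repository from above (the archive name, device, and mountpoint are illustrative):

# extract the saved /etc/docker from the backup (paths are repo-relative)
cd / && borg extract friend@backuphost:borg-repo::etc-2021-02-28 etc/docker
# mount the external disk holding the data
mount /dev/sdb1 /srv/data
# bring all the services back up
cd /etc/docker && docker-compose up -d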
I don't disagree, but as a hobbyist I don't really want system administration problems. Well, and I was mostly interested in Nextcloud as a possible alternative to Dropbox/Google Drive with versioning and, I hoped, backups.
However, the only proper backup solution that I could confidently say would let me recover should disaster strike was the one you just explained, i.e. putting everything in docker and snapshotting the entire filesystem. At which point I'm basically running 3 virtual file systems on top of each other just to have a better UI, which seemed a bit silly.
First things first: don't get me wrong, I do understand your point.
The thing is: you have a system administration problem, whether you want it or not (that is a big part of what you're actually paying for when you buy Dropbox, or when you let Google feed on your data).
Now, as a hobbyist, when you start depending on services you set up and manage yourself, it is a good idea to take some time to learn additional tools so you can enjoy your hobbies more.
Now, on a lighter note, there are simpler ways to have a backup strategy, as long as you are okay with weaker guarantees.
You might not use ZFS, and use simple LVM snapshots instead. Or you might skip snapshotting entirely and just do a nightly backup via a cronjob: at 3AM you shut everything down (docker-compose down if you're using it), rsync to another host, and start it back up. It's way simpler, but you'd only get yesterday's copy in case of problems.
But then again, that would still safeguard you when doing upgrades: disable the backup job, perform the upgrade, test everything, re-enable the backup, resume operations. Worst case scenario, you rsync back yesterday's data and resume normal operation.
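A minimal sketch of such a nightly job (paths and hostname are hypothetical), triggered from cron with something like 0 3 * * * /usr/local/bin/nextcloud-backup.sh:

#!/bin/sh
# shut the stack down so files and database are consistent
cd /srv/nextcloud && docker-compose down
# mirror compose file, config, and data to another host
rsync -a --delete /srv/nextcloud/ backuphost:/srv/nextcloud-backup/
# bring everything back up
docker-compose up -d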
"It's way simpler but you'd only get a yesterday's copy in case of problem."
I have a restic backup running on that plan instead of rsync, which means I get true backups. The nice thing about it is that this can be integrated into any "docker compose" pipeline that you like. I'm generally not as hot on Docker as a lot of people, but it does do a nice job of containing a household service in a text file that can be easily checked into source control and easily backed up, as long as the service can run in docker.
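For the curious, a minimal sketch of the restic variant of the same nightly job (the repository location, password file, and paths are hypothetical):

# repository and password come from the environment
export RESTIC_REPOSITORY=sftp:backuphost:/srv/restic-repo
export RESTIC_PASSWORD_FILE=/root/.restic-pass
restic init                       # one-time repository setup
cd /srv/nextcloud && docker-compose down
restic backup /srv/nextcloud      # deduplicated, versioned snapshot
restic forget --keep-daily 7 --keep-weekly 5 --prune
docker-compose up -d
# disaster recovery: restic restore latest --target /srv/nextcloud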
It's a pity that Sandstorm started before Docker was a practical option for most people. There's probably some room for a Sandstorm 2.0 that "just" uses Docker and provides some grease around setting up this stuff on a system from a top-level configuration file or something. It would go from a massive project in which you have to "port" everything to something some hobbyists could set up. It wouldn't be as integrated, but it would work.
Wasn't Sandstorm a bit incompatible with Docker? Notably, it didn't just containerize apps; it communicated over a custom protocol to fully isolate them and limit their permissions. E.g., network/disk access was tightly controlled.
Though perhaps there was a shim layer? E.g., over normal containers, it shimmed network/disk access from the container over the Sandstorm RPC buffer?
Really cool tech regardless, but it had a big tech maintenance burden. That's my fear in all these self hosted apps. Everything needs to be maintained for it to feel good to the user, and that seems like such a tall ask.
> you have a system administration problem, whether you want it or not
Right. You can pay people to do things for you, or you can do them yourself, but either way the things have to be done, and they should be done by someone who is good at it and has a contract with you -- employment or otherwise.
One has to be able to start somewhere. How do you "get good at it"? You proceed in steps: you challenge yourself, you reach an improvement, enjoy that improvement for a while, then challenge yourself again when you see room for more.
But just saying "nah, let somebody else do that" is not what we want here. We're hobbyists; we want to do stuff ourselves, and we enjoy it. Doing sub-optimal work is okay, we will improve over time :)
Sharing our experiences and procedures here is part of that.
This is true to a point. But eventually, you've gotten all you can from learning and managing a new thing. You can't reasonably make it more efficient and there are no benefits to spending more time learning it. This is when it shifts from a hobbyist's exploration to a routine, mundane task that requires time and attention while offering no _new_ benefit.
For some hobbyists there's comfort in this repetition; for others, it's just a time sink with high opportunity cost.
There is a middle ground, imo. The way apps are designed massively impacts the general requirements of system administration.
What we're seeing is largely centralized applications and the work it takes to manage them. Ignore UX for a second, and imagine you wrote a database on top of a distributed system - a la IPFS - and all modifications were effectively pushed into IPFS. This suddenly boils the system administration tasks down to:
1. make sure my IPFS node is up to date
2. make sure my computer is online
And even those can be heavily mitigated with peers who follow each other.
Now, we're not there yet; I'm not advertising a better solution. I'm simply saying that part of the administration burden is heavy simply because of how these apps were written. I think we can do better for the home user.
Secure Scuttlebutt is a lot easier to maintain, for example. The most important thing there is that you simply connect to the internet, publish your posts, and fetch other people's posts. In doing so, other people make backups for you, and you of them. Backing up your key seems like the highest priority... and even that could be eliminated, I imagine, in the P2P model at least. Very low maintenance.
>You can pay people to do things for you, or you can do them yourself, but either way the things have to be done
Nah. I had an elaborate home setup for a while as a hobby and the ongoing hassles (including NextCloud upgrade complexities) just led me to turning it all off and making do with simpler or no solutions.
I’ve learned my lesson about mixing hobbyist tinkering with something your family comes to expect as an everyday convenience - that while you on a random Saturday morning might be hyped about deploying the latest self hosted cool stuff, the other you on some random Thursday at 10pm when everything malfunctions is gonna hate past-you’s guts for putting you in this position.
> You can pay people to do things for you, or you can do them yourself,
Or be the parent of a geek and have it done for you, with 24/7/365 support and training, and remote support for some magical things like "hey! A button appeared and I pressed it and now I am not sure I have internet anymore". Of course, said "customer" has no idea what was on the button. Etc. etc.
Maybe their hobby is not tinkering with Nextcloud and they would rather put that limited time/energy into setting up k8s clusters or developing a web app. Who knows?
The point is with limited time one has to pick their battles, and maybe setting up zpools and a full Nextcloud docker-compose isn't what they want to spend their time on.
Again, I see your point, because I've been there :)
But you're missing an important point of view: do you rely on that data?
If it's a toy project, don't even bother, just ignore all my replies.
If you do rely on nextcloud and the data stored there, having a backup procedure and safeguards for the upgrade process helps a lot.
Next time you perform an upgrade, you can proceed without fear and stress, and much faster (if you run on docker), and that frees up time to play with Kubernetes clusters and webapp development :)
Except it's not your call to make, or OP's call to make.
You're already getting quite a piece of software for free; demanding extended long-term support isn't really fair, especially if you consider that they offer a simple update procedure.
> The point is with limited time one has to pick their battles
Yeah, that's why I pay for a managed K8s instance for my toy projects but do my own sysadmin work on various self-hosted things. The former is not my hobby so I'd rather pay someone else to do it.
This is an inherent limitation of our current tech stack, and unfortunately the cheapest mitigation we have is "take full system snapshots", a.k.a. do your sysadmin work. The alternatives (LTS releases etc.) all cost much more money.
> It's literally just branching at one release and fixing bugs in that release for a few years
This takes engineering time, i.e. money. It may also benefit upstream branches, but again, porting patches between branches takes time, especially after massive refactoring has happened on the latest branch.
I agree that storing data securely is a problem you have whether you want it or not, but I was mostly lamenting that Nextcloud does precious little to help you solve it, as it suffers from the same problem itself (possibly worse, because now you've got a data durability problem with more moving parts).
> At which point I'm basically running 3 virtual file systems on top of each other just to have a better UI, which seemed a bit silly.
This sounds like a system administration problem.
Why, exactly, did you jump to docker/etc instead of what everyone (including NextCloud) recommends which is basically "keep a copy of your nextcloud folder and a dump of your database"?[0]
If you're not confident you can properly recreate your nginx config, then keep a copy of that too.
At that point you're literally like four steps to restore from a blank slate:
pkg install nginx php74 php74-extensions mariadb105-server   # reinstall the stack (FreeBSD pkg; substitute your OS's package manager)
mysql -e 'CREATE DATABASE nextcloud;'                        # recreate an empty database
mysql nextcloud < backup/nextcloud.sql                       # load the dump back in
rsync -a /path/to/backup/ /                                  # put the nextcloud folder (and nginx config) back in place
It sounds like most of your pain comes from trying to optimize the long tail here (recovering from a backup) at the cost of normal operation.
(FWIW, my backup strategy is cron running a shell script that "rsync/mysqldump to second disk; rclone off-site". I've recovered from this successfully (from my local copy, no transfer times) in about a half hour.)
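That script is only a few lines; a minimal sketch of it (paths, database name, and the rclone remote are hypothetical):

#!/bin/sh
# consistent database dump without long table locks (InnoDB)
mysqldump --single-transaction nextcloud > /backup/nextcloud.sql
# mirror the nextcloud folder to the second disk
rsync -a --delete /usr/local/www/nextcloud/ /backup/nextcloud/
# push the whole backup folder off-site
rclone sync /backup remote:nextcloud-backup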
> You don't have a nextcloud problem, you have a system administration problem.
Those aren't mutually exclusive. Sure, better dev ops would make major upgrades safer and easier. But for a hobbyist self-hosting their own instance, a LTS release would be a godsend to save them hours of unpaid work.
Who said it was a challenge?
When does grunt work move beyond challenge to the point of not being worth it?
I got out of self hosting because my time is too valuable. It did teach me lots of new skills, so that was great!
However, somebody not wanting a time sink is not them avoiding a challenge.
This is the boat I'm in. And even if you do "everything right" and have snapshots before & after every update, you still need to actually debug why the update failed in the first place. So even then, LTS releases would be a greatly appreciated feature.
As someone who hosted his own as well, I agree with your sentiment exactly. I've taken down the server that I had hosting my own instance before this, and I am delaying setting up a new one simply because of what you've said here.
I imagine that those of us that want that kind of stability are encouraged to go with their hosted offering, but hopefully they'll see the value in having a slower and/or more stable release process.
For what it's worth, the upgrade process for the last few major versions went mostly without a hitch for me. I do have to give them credit for that. The only thing I continue to struggle with is the encryption design: I always end up with some odd state for some files that I cannot recover from.
I am a huge fan of Nextcloud and I couldn't agree more. My upgrade path is to just start a new instance with a fresh sync, because I was traumatized by a turbulent and uncertain upgrade on all of my instances once about two years ago. This is a product I love and choose to rely upon for my data, every day. I'm interested in the bells and whistles and I want the platform to succeed - my preference would be an LTS for my critical data, and the option to spin up newer features separately to test before adoption.
The answer to that ought to be `apt-get install nextcloud-server` and let the distro maintainers step in, really. Unfortunately because you can't skip versions on upgrade, it's not clear how to cleanly do that.
The package manager would need to have access to the code of all the intermediate versions to run the upgrades safely. That might work for some situations, but it's a hell of an overhead in general.
They also raise the PHP version requirements. To keep my NextCloud on a supported version, I had to upgrade the Linux distribution on my server (which was not EOL or anything) to get a PHP version new enough for the NextCloud releases that were still supported...
I just wanted to keep getting bug/security fixes for NextCloud.
In my experience of running my own Nextcloud instance for over 4 years, I've never had an upgrade break my instance. Caveat: I'm on the stable channel and I only update when the client prompts me to update, which is a few point releases into a new release.
That's been my experience as well. I have run Owncloud -> Nextcloud (when it was first released) since at least mid-2015, and I am on the same instance I first built.
I stay on the stable channel, and I get a notification if an app or Nextcloud itself has an upgrade. The biggest issue is that the "Security & setup warnings" page sometimes tells me I need to upgrade my database (and gives me the exact commands to do it) after an upgrade.
I will note that upgrades have taken longer over the years (it used to take 5 minutes, now it can take over 30 minutes), and I think there is an issue with the backing-up stage.
Also started with OwnCloud and moved to NextCloud. If I'm not mistaken I've been upgrading the same NextCloud install since version 11 or so. Now on 19.
Then just log into the web UI, check everything's still sane, and follow any upgrade suggestions it has (frequently, to run commands that add columns/indexes to the database; see the sketch below).
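Those suggestions usually boil down to a handful of occ commands, roughly like these (run as the web server user from the Nextcloud directory; the exact set varies by version):

sudo -u www-data php occ db:add-missing-indices
sudo -u www-data php occ db:add-missing-columns
# for large instances: widen filecache ids to bigint
sudo -u www-data php occ db:convert-filecache-bigint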
NextCloud also has a web upgrader accessible from the admin panel. It's almost certainly based on the same code.
I don't know why they go about it in such a manual way. If you don't like the web installer, there's a command line version that does everything for you (updater.phar).
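If memory serves, invoking it looks roughly like this (assuming a standard install under /var/www/nextcloud):

# run the bundled CLI updater as the web server user
sudo -u www-data php /var/www/nextcloud/updater/updater.phar
# then finish any pending migration steps
sudo -u www-data php /var/www/nextcloud/occ upgrade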
> I don't know why they go about it in such a manual way.
Because I don't generally give the code permission to modify itself. Principle of least privilege and all that.
Outside of this one specific situation (upgrades) it's not needed, and the rest of the time it's just one more layer of security in the way of various forms of exploit. (Maybe it's just trauma from dealing with the 8,000 forms of wordpress exploits back in the day, and finding half of wordpress with random code added to it to persist exploits / randomly redirect people to scam sites / etc.)
In the end it adds like 5 minutes of inconvenience to my upgrade process.
Yeah, I was assuming it was something like that, but I do notice that the "backup" step takes a long time. Once the backup is done, the rest is on the order of 4-5 minutes. But then again, I store something like 5 TB worth of files on my Nextcloud, so it could be me as well.
> I store something like 5 TB worth of files on my Nextcloud
Ah, that might be it.
IIRC there's a database entry for each file, so if you've got a lot of files the upgrade might take a while, since it also runs database migrations to adapt to the new schema.
> 1) I upgrade from 17->18->19->20->21 and hope nothing breaks!
I've done this since about version 11. And I usually only get around to upgrading every few versions so it's been like... 11->12->13->14, 14->15->16, 16->17->18->19.
I do each upgrade one by one. Upgrade, login, check system status and resolve any additional steps it suggests (e.g., adding indices/columns, etc) then jump right into the next upgrade.
I've never had one fail on me. Even doing 3-4 major versions at a time it's usually less than a half hour problem.
You're missing php-curl (or it's installed but the module is disabled). I'd double-check that you have all of NextCloud's dependencies installed[1], because php-curl is one of the required ones.
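A quick way to check, assuming a Debian-family system (package and service names differ elsewhere):

php -m | grep -i curl            # is the curl module loaded?
apt install php-curl             # install it if missing
phpenmod curl                    # enable it if installed but disabled
systemctl restart php7.4-fpm     # adjust to your PHP version / web server setup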
I agree - I wish it was more stable and a little less promiscuous. Having your instance have to access the cloud for apps and updates is sort of counter to the "control your own server" sort of mentality.
Sort of like docker - do you have to go through their root namespace for everything?