Nextcloud Hub 21 (nextcloud.com)
208 points by threatofrain on Feb 23, 2021 | 150 comments



I started running a self hosted Nextcloud instance last year, and I couldn’t be happier with it! This release sounds exciting, guess it’s time to go upgrade :)

For those looking to ‘de-Google’ their lives, and control their own data Nextcloud is one of the best options out there.


Very easy to set up and maintain with a dedicated Unraid box. Grab an old Dell enterprise server like the R210 II, put in some WD Reds with RAID + ZFS, install Unraid, and it's good to go.

I actually virtualize Unraid within ESXi so that one small 1U box can be my router/firewall as well as an Unraid machine serving home services. Best setup I've ever had, and I learned so much along the way!


This sounds interesting, might have to look into it. Running a physical home server would be awesome, but it currently sounds above my skill level as far as hardware and networking go :)

I run a cheap EC2 instance, and plug it into an S3 bucket for file storage, and my RDS MySQL database.


How do you deal with the high costs of running an RDS instance? I think almost $30 p/m is a bit high for a small single-family NC instance, when the EC2 instance running the actual service is going to be a fraction of that.


That's a good question. Yeah, maybe not the best setup for everyone. You are right, the cost of a t3.small on-demand is close to $30/month; for your NC instance alone, that's not worth it.

In my case, for purposes of hobby projects and various self hosted services, I keep both MySQL & Postgres RDS instances running in perpetuity, both t3a.micro. On demand pricing is roughly $13/month, but since I plan on keeping them running 'forever' I purchase reserved instances. For a 3 year plan, 'no-upfront', this brings the cost down to about $8.75/month. Much more palatable if you ask me :)

Also, I use them for multiple projects, so the convenience factor is worth it for me. For your NC alone, I imagine it would be good enough to just run your DB server on the same EC2 instance. I doubt the database storage would eat up much disk space.

You could, however, rip through a ton of disk space with file storage, so I feel like S3 buckets are a must, and they're cheap anyway.
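For anyone curious, pointing Nextcloud at S3 as primary storage is just a few lines in config.php; a sketch (bucket name and credentials are placeholders, and the full argument list is in the admin docs):

  'objectstore' => [
    'class' => '\\OC\\Files\\ObjectStore\\S3',
    'arguments' => [
      'bucket'     => 'my-nextcloud-bucket',  // placeholder
      'region'     => 'us-east-1',
      'key'        => 'AWS_ACCESS_KEY_ID',
      'secret'     => 'AWS_SECRET_ACCESS_KEY',
      'autocreate' => true,
    ],
  ],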


My physical home server is a NUC. Could also be a Raspberry Pi 4; little hardware skill required :)


Same here. Happy user of self hosted nextcloud through the nextcloudpi project. It's been so care free I don't remember the setup details any more :)


To echo what the other replies are saying: mine has been running on a DigitalOcean droplet since early 2019 and I only had to reboot it once.

It syncs everything, and the iOS app and web dashboard are adequate. I would recommend it (though I haven't tried anything else other than Google Drive or Dropbox, of course).


Haven’t used droplets; do you have to manage backups yourself, or is it part of the service?


Droplets are great, and I like the ease of use of Digital Ocean. But, as far as server backups go, I've never liked managing these, so I use an external data store and DB server. In my case, my instance is wired up to an Amazon S3 bucket, and an RDS database. If you set it up this way, there is no need to worry about backups of the application server.

I could nuke the app server, change hosting providers, or suffer a hardware failure or whatever, and it wouldn't matter. I can always spin up a fresh server and plug back into my external DB and data store.


You can add disk-level backups to droplets; IIRC it will keep four weekly backups, for +20% on the droplet price.


It's a paid add-on, IIRC.


I think my instance is 3-4 years old at this point and I am impressed by how little work I had to put into it over the years. I set it up using Snap and it auto updates so the whole process is quite carefree.


Are there recommendations for hosts which offer pricing comparable to Google One[1], have backups, and are trusted in the community?

[1] https://one.google.com/about/plans


Hetzner offers managed Nextcloud instances for quite cheap. It works well.

https://www.hetzner.com/storage/storage-share



I trialed setting up my own Nextcloud instance a while back. It's still very complex to get working in Docker. From memory, the card/caldav traefik rewrites are still not working. SSL was complex to set up with Collabora, and still required manual GUI steps to link into Nextcloud (my biggest pet peeve). I also remember getting things wrong a few times in the initial setup wizard, which required me to delete my whole local config.

Performance was a little slow, but that could be down to my own hardware. It was just a consumer-grade i5 CPU and a basic SSD, in Docker.

The examples they provide are good, but you can't really provide for every different config. I wanted to use traefik, so I brought the complexity on myself.

Here's where I got to before eventually stopping my trial of Nextcloud: https://gist.github.com/francis-io/935be5679b3308f5fbc3fe1bb...

My wishlist for future effort by the devs would be:

- Fully configured via env vars (and in Collabora too).

- I would rather any config or state be kept in the db. It makes backup and restore easier. Env vars could be set in the db, and on any restart, the set env vars would overwrite anything in the db. I want to have confidence that I can restore a db + files and have a working service come back up. At the moment, I don't trust Nextcloud to always come back up.

- Keep config separate from user files.

- Focus on improving speed (which it looks like they are addressing with this post).

- Focus on more app usability. I remember it being hard to use in portrait.
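To be fair, the official Docker image already takes a fair bit of the first-run configuration from env vars; a sketch (all values are placeholders):

  docker run -d \
    -e POSTGRES_HOST=db \
    -e POSTGRES_DB=nextcloud \
    -e POSTGRES_USER=nextcloud \
    -e POSTGRES_PASSWORD=secret \
    -e NEXTCLOUD_ADMIN_USER=admin \
    -e NEXTCLOUD_ADMIN_PASSWORD=secret \
    -e NEXTCLOUD_TRUSTED_DOMAINS=cloud.example.com \
    -v nextcloud:/var/www/html \
    nextcloud

That only covers initial setup, though; ongoing config still lives in config.php, which is exactly the complaint.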

Overall, the software is great and I'm looking forward to the future, but to store my personal data I will need to have a little more confidence.

(I can't seem to make a bullet point list on HN)


Wow, this could not be more different from my experience trying the same.

I ran Nextcloud in docker-compose for 2 years, with nginx doing SSL termination in front. Granted, I wasn't using the official image; I use the linuxserver.io releases for all my other services, so I used them for this, too. Nextcloud's config is all in the DB, except for database and cache connection information in a single config file. PHP's config is in a separate file and some env vars (e.g. timezone).

I've recently moved it into my home k3s cluster (yeah, I'm one of those people), which means traefik is my new reverse proxy. Works fine. I found I can get traefik to do the DAV redirects, at least with the k8s Ingress config, but I don't need to since the linuxserver image includes the redirects in its nginx configuration.


I think you might be overcomplicating this, because the Docker setup of Nextcloud is one of the easiest and most streamlined I've seen on Docker Hub. Including the proxy, all you need to give it is the DNS name, the ports you want open, and where you want the data stored. Traefik is also huge overkill for a personal server, IMO. jwilder/nginx-proxy is braindead simple and has a companion container that will automatically get you LetsEncrypt certs when you make a new container that asks for it. The only thing the default Docker install is missing is a TURN server for group voice/video calls.
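For reference, that whole setup is roughly three containers (a sketch from the images' READMEs as I remember them; hostname, email, and volume names are placeholders):

  docker run -d --name nginx-proxy -p 80:80 -p 443:443 \
    -v /var/run/docker.sock:/tmp/docker.sock:ro \
    -v certs:/etc/nginx/certs \
    -v vhost:/etc/nginx/vhost.d \
    -v html:/usr/share/nginx/html \
    jwilder/nginx-proxy

  docker run -d --volumes-from nginx-proxy \
    -v /var/run/docker.sock:/var/run/docker.sock:ro \
    jrcs/letsencrypt-nginx-proxy-companion

  docker run -d \
    -e VIRTUAL_HOST=cloud.example.com \
    -e LETSENCRYPT_HOST=cloud.example.com \
    -e LETSENCRYPT_EMAIL=admin@example.com \
    -v nextcloud:/var/www/html \
    nextcloud

The companion watches for containers with LETSENCRYPT_HOST set and provisions certs for them automatically.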


> The examples they provide are good, but you can't really provide for every different config. I wanted to use traefik, so I brought the complexity on myself.

I am with you. But. It's incredible how so many open source projects keep on delivering docker-compose files that either are not compatible with a reverse proxy or bundle a reverse proxy themselves.

It seems like the use case of having traefik/nginx as an RP which does the SSL termination for however many services you want is a fringe practice. Most of the apps/services I encountered could be blind to an RP, but I often have to work around it.

> I want to have confidence that I can restore a db + files and have a working service come back up. At the moment, I don't trust Nextcloud to always come back up.

Well. Today OVH tried to upgrade things, and it broke my VPS AND my owncloud db. Fortunately I had an SQL dump backup, but the DB was so borked I couldn't log in to it even as root inside the container, or in any other way.

I mean: don't trust the app provider to do the backup, set something up yourself.


Although I use docker for most projects, for Nextcloud I decided to go with the snap version, which was very easy to use.[1]

[1] https://github.com/nextcloud/nextcloud-snap


Thanks for sharing your experience. Given how I treat software, it sounds like I would be extremely frustrated with some things that "the average user" doesn't mind at all. Sounds like I should give it another year or two before considering Nextcloud (because hey, I assume they're working on it!)

> (I can't seem to make a bullet point list on HN)

For short points: indent with two spaces (longer ones become horrible on mobile). Or just do double newlines between the points like a normal person (;))
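E.g. two leading spaces per line render as a code-style block:

  - point one
  - point two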


Hmm, my Docker-Compose file is much less complicated:

https://www.pastery.net/zykzva/

Though I do have a 4-line Caddy config and a Postgres server on the host.


GP is using a Traefik reverse proxy, which is where the extra stuff comes from.


Plus, OP is using Collabora and caldav/carddav, which need some special consideration when reverse-proxying.


Both of those work out of the box for me on my reverse proxy. I use the built-in Collabora install though so maybe that's where the difference comes from.


I'd have to double check but I think I had to tweak some things regarding caldav (but it may have been years ago).


They don't even have a decent CLI client for file syncing. I know you can use any WebDAV client, but the GUI client seems more efficient than anything else I've tried.


I really wish there was an LTS release that was supported for at least 2 years (just bugfixes, no new features). I self host my own instance, and I really just want to set it and forget it.

I don't mind doing low risk patches every few months or weeks, but I don't want to do a major version upgrade every 4-6 months.

I did my last major version upgrade only 15 months ago, and I am now 4 major versions behind, which means either:

1) I upgrade from 17->18->19->20->21 and hope nothing breaks!

2) I start over with the latest version

I like that open source moves fast, but at some point, I just want to stop fiddling with it and let it run with minimal maintenance.


> 1) I upgrade from 17->18->19->20->21 and hope nothing breaks!

I did a similar path (started from 18 iirc) and nothing broke.

But there's a catch, because I have some safeguards in place:

1. Nextcloud has its own dataset in a ZFS zpool. I take snapshots hourly, and I took a snapshot just before upgrading

2. I run nextcloud and its own postgresql via docker-compose. the docker-compose file along with the configuration and data are stored in nextcloud's own dataset. This means that OS-level dependencies are not a problem for me. It also means that reverting the whole thing to its pre-upgrade state is very easy: just roll back to the before-upgrade snapshot (sketch below).

3. (unrelated) Snapshots are replicated to another location, which means that I could perform the upgrade on that other site and switch the DNS when it's done and I'm satisfied. I don't do that; for my personal use, 1-2 hours of downtime is okay.

4. I let nextcloud perform its auto-upgrade procedures, take a snapshot after every upgrade, and at the end I perform the tasks suggested in the self-assessment page (adding indexes, changing column types etc).
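The safety net from points 1-2 boils down to something like this (dataset name and paths are mine; a sketch, not gospel):

  zfs snapshot tank/nextcloud@pre-upgrade        # instant, cheap checkpoint
  cd /tank/nextcloud
  docker-compose pull && docker-compose up -d    # nextcloud runs its upgrade
  # if the upgrade goes sideways: stop, roll the whole dataset back, restart
  docker-compose down
  zfs rollback -r tank/nextcloud@pre-upgrade     # -r discards newer snapshots
  docker-compose up -d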

You don't have a nextcloud problem, you have a system administration problem.


That's very true. I run quite a few services on my local network for my family (wireguard, nextcloud, homeassistant, frigate, pihole, jellyfin, bitwarden, ...).

While I enjoy setting up and playing with these services, I need to think about managing them as little as possible, as I don't want to spend all my free time being a system admin.

Also, often a new release is not just a system admin task. Sure, it may not be _that_ hard to do a full backup, pull new docker images, spin them up and verify everything. The time sink comes from keeping track of all the releases of all the different projects, reading up about changes, how the upgrade process works, and so on.

On top of that, my family has become reliant on several of these services, especially nextcloud and bitwarden. The last thing they want are major changes to it. Long term stability with minimal changes can be a feature!


I am exactly in the same situation as you (I did not know frigate, but I do not have cameras either - otherwise you listed my main systems).

I managed to reduce administration to a minimum by using watchtower to automatically upgrade my containers, mostly using the :latest tag.

This bit me only twice in a few years:

- with the 19-20 migration of Nextcloud, I had one big blank screen when logging in, but the synchronization was working. Turns out it was a new default app (something about dashboarding) that was causing it. Googling and fixing took an hour.

- with one upgrade of Home Assistant, my devices were not available anymore; there was a problem with the upgrade which they fixed quickly, but I had already upgraded. Reading the docs/forum and fixing took an hour.

I can live with these two hours across two or three years.

I back up /etc on my server with Borg and I know that, worst case, I will recover. I tested this DRP two weeks ago, bare metal (recovering to an empty VM from scratch, that is, from an Ubuntu ISO, and ultimately getting my encrypted backups from a friend's system). It really helped to highlight what I was missing.
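For the curious, that kind of Borg setup is only a few commands; a sketch (repo path and retention are placeholders):

  borg init --encryption=repokey /mnt/backup/borg        # one-time repo setup
  borg create --stats /mnt/backup/borg::etc-{now} /etc   # nightly archive
  borg prune --keep-daily 7 --keep-weekly 4 /mnt/backup/borg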


I'm currently testing a new appliance setup with Nextcloud which includes the ability to use containers as a default for everything. If your container can be moved to an empty VM, nothing gets deleted, as I didn't touch it. I would be really happy if this helped.


Could you please elaborate a bit on the appliance?

I use a home-grade PC with Ubuntu LTS on which there is nothing except for:

- docker

- borg (backup program)

- wireguard (VPN)

- sshd

I then copy /etc/docker from backup, mount some external disks with the data (either backed up or not for things I do not care about), reboot and I am done.

My recovery lasted one hour from starting the download of the ISO to being back online.


I don't disagree, but as a hobbyist I don't really want system administration problems. Well, I was mostly interested in Nextcloud as a possible alternative to Dropbox/Google Drive with versioning and, I hoped, backups.

However, the only proper backup solution that I could confidently state would allow me to recover should disaster strike was the one you just explained, i.e. putting everything in Docker and snapshotting the entire filesystem. At which point I'm basically running 3 virtual file systems on top of each other just to have a better UI, which seemed a bit silly.


First things first: don't get me wrong, I do understand your point.

The thing is: you have a system administration problem, whether you want it or not (that is a big part of what you're actually paying for when you buy Dropbox or when you let Google feed on your data).

Now, as a hobbyist, when you start depending on services you set up and manage yourself, it would be a good idea to take some time to learn additional tools to enjoy your hobbies more.

Think about this as in "leveling up" your hobby.

---------------------------------------------------------------------

Now, on a lighter note: there are simpler ways to have a backup strategy, as long as you are okay with lower guarantees.

You might not use ZFS, and use simple LVM snapshots instead. You might want no snapshotting at all and just do a nightly backup via a cronjob: at 3AM you shut everything down (docker-compose down if you're using it), do an rsync to another host, and start it back up. It's way simpler but you'd only get yesterday's copy in case of a problem.

But then again, that would safeguard you when doing upgrades: disable backup, perform upgrade, test everything, re-enable backup, resume operations. Worst case scenario, you rsync back yesterday's data and resume normal operation.
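Concretely, the nightly cronjob could be as dumb as this (paths and the backup host are placeholders):

  #!/bin/sh
  # e.g. cron: 0 3 * * * /usr/local/bin/nextcloud-backup.sh
  set -e
  cd /srv/nextcloud
  docker-compose down                # stop everything for a consistent copy
  rsync -a --delete /srv/nextcloud/ backuphost:/backups/nextcloud/
  docker-compose up -d               # bring the service back up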


"It's way simpler but you'd only get a yesterday's copy in case of problem."

I have a restic backup running on that plan instead of rsync, which means I get true backups. The nice thing about it is that this can be integrated into any "docker compose" pipeline that you like. I'm generally not as hot on Docker as a lot of people, but it does do a nice job of containing household services in a text file that can be easily checked into source control and easily backed up, as long as the service can be run in Docker.

It's a pity that Sandstorm started before Docker was a practical option for most people. There's probably some room for a Sandstorm 2.0 that "just" uses Docker and provides some grease around setting up this stuff on a system from a top-level configuration file or something. It would go from a massive project in which you have to "port" everything to something some hobbyists could set up. It wouldn't be as integrated, but it would work.


Wasn't Sandstorm a bit incompatible with Docker? Notably it didn't just containerize apps, it communicated over a custom protocol to fully isolate and limit their permissions. Eg network/disk access was tightly controlled.

Though perhaps there was a shim layer? Eg over normal containers, it shimmed network/disk from the container over the Sandstorm RPC buffer?

Really cool tech regardless, but it had a big tech maintenance burden. That's my fear in all these self hosted apps. Everything needs to be maintained for it to feel good to the user, and that seems like such a tall ask.



> I have a restic backup running on that plan instead of rsync, which means I get true backups.

yeah, yeah, absolutely. rsync is the first thing that came to my mind, but any tool that does a similar/equivalent job is fine here :)


> you have a system administration problem, whether you want it or not

Right. You can pay people to do things for you, or you can do them yourself, but either way the things have to be done, and they should be done by someone who is good at it and has a contract with you -- employment or otherwise.


> should be done by someone who is good at it

I'm not 100% okay with this statement.

One has to be able to start somewhere. How do you "get good at it"? You proceed via steps. You challenge yourself, you reach an improvement, enjoy that improvement for a while, then you challenge yourself again when you see room for improvement.

But just saying "nah, let somebody else do that" is not what we want here. We're hobbyists; we want and enjoy doing stuff ourselves. Doing sub-optimal work is okay, we will improve over time :)

Sharing our experiences and procedures here is part of that.


This is true to a point. But eventually, you've gotten all you can from learning and managing a new thing. You can't reasonably make it more efficient and there are no benefits to spending more time learning it. This is when it shifts from a hobbyist's exploration to a routine, mundane task that requires time and attention while offering no _new_ benefit.

For some hobbyists there's comfort in this repetition; for others, it's just a time sink with high opportunity cost.


There is a middle ground, IMO. The way apps are designed massively impacts the general requirements of system administration.

What we're seeing is largely centralized applications and the work it takes to manage them. Ignore UX for a second, and imagine you wrote a database on top of a distributed system - a la IPFS - and all modifications were effectively pushed into IPFS. This suddenly boils the system administration tasks down to:

1. make sure my IPFS node is up to date

2. make sure my computer is online

And even those can be heavily mitigated with peers who follow each other.

Now, we're not there yet; I'm not advertising a better solution. I'm simply saying that part of the administration is a heavy lift simply because of how these apps were written. I think we can do better for the home user.

Secure Scuttlebutt is a lot easier to maintain, for example. The most important thing with that is that you simply connect to the internet and publish your posts/fetch other posts. In doing so, other people make backups for you and you of them. Backing up your key seems like the highest priority... and even that could be eliminated, I imagine, in the P2P model at least. Very low maintenance.


>You can pay people to do things for you, or you can do them yourself, but either way the things have to be done

Nah. I had an elaborate home setup for a while as a hobby and the ongoing hassles (including NextCloud upgrade complexities) just led me to turning it all off and making do with simpler or no solutions.

I’ve learned my lesson about mixing hobbyist tinkering with something your family comes to expect as an everyday convenience - that while you on a random Saturday morning might be hyped about deploying the latest self hosted cool stuff, the other you on some random Thursday at 10pm when everything malfunctions is gonna hate past-you’s guts for putting you in this position.


> You can pay people to do things for you, or you can do them yourself,

or be a parent of a geek and have it done, with 24/7/365 support and training, and remote support for some magical things like "hey! I had a button appearing and I pressed it and now I am not sure I have internet anymore". Of course said "customer" has no idea what was on the button. Etc. etc.

I am the geek and I love my parents :)


Maybe their hobby is not tinkering with Nextcloud and they would rather put that limited time/energy into setting up k8s clusters or developing a web app. Who knows? The point is with limited time one has to pick their battles, and maybe setting up zpools and a full Nextcloud docker-compose isn't what they want to spend their time on.


Again, I see your point, because I've been there :)

But you're missing an important point of view: do you rely on that data?

If it's a toy project, don't even bother, just ignore all my replies.

If you do rely on nextcloud and the data stored there, having a backup procedure and safeguards for the upgrade process helps a lot.

Next time you perform an upgrade you can proceed without fear and stress, and way faster (if you run on docker), and that frees up time to play with Kubernetes clusters and webapp development :)


Right but I think the original point was that it would be nice not to have to do that.

An LTS connected to a NAS would avoid all of that. Lol.


Except it's not your call to make, or OP's call to make.

You're already getting quite a piece of software for free; demanding extended long-term support isn't really fair, especially if you consider that they offer a simple update procedure.


There was a wish. Not a demand.

Lots of software has it, so it's not unreasonable to simply discuss something that would be nice.


> The point is with limited time one has to pick their battles

Yeah, that's why I pay for a managed K8s instance for my toy projects but do my own sysadmin work on various self-hosted things. The former is not my hobby so I'd rather pay someone else to do it.

This is an inherent limitation of our current tech stack, and unfortunately the cheapest mitigation we have is "take a full system snapshot", a.k.a. do your sysadmin work. The alternatives (LTS releases etc.) all cost much more money.


Numerous pieces of software offer LTS at no additional cost, or support it via other avenues.

It's literally just branching at one release and fixing bugs in that release for a few years, which also benefits upstream branches.

That way people may lose new features but gain stability.


> It's literally just branching at one release and fixing bugs in that release for a few years

This takes engineering time, i.e. money. It may also benefit upstream branches, but again, porting patches between branches takes time, especially after massive refactoring has happened on the latest branch.


I agree that storing data securely is a problem you have whether you want it or not, but I was mostly lamenting that Nextcloud does precious little to help you solve this problem, as it suffers from the same problem itself (possibly worse, because now you've got a data durability problem with more moving parts).


> At which point I'm basically running 3 virtual file systems on top of each other just to have a better UI, which seemed a bit silly.

This sounds like a system administration problem.

Why, exactly, did you jump to docker/etc instead of what everyone (including NextCloud) recommends, which is basically "keep a copy of your nextcloud folder and a dump of your database"?[0]

If you're not confident you can properly recreate your nginx config, then keep a copy of that too.

At that point you're literally like four steps away from restoring from a blank slate:

  pkg install nginx php74 php74-extensions mariadb105-server
  mysql -e 'CREATE DATABASE nextcloud;'
  mysql nextcloud < backup/nextcloud.sql
  rsync -a /path/to/backup/ /
It sounds like most of your pain comes from trying to optimize the long tail here (recovering from a backup) at the cost of normal operation.

(FWIW, my backup strategy is cron running a shell script that "rsync/mysqldump to second disk; rclone off-site". I've recovered from this successfully (from my local copy, no transfer times) in about a half hour.)

[0] https://docs.nextcloud.com/server/latest/admin_manual/mainte...
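That kind of script is short enough to quote in full; a sketch (paths, database name, and the rclone remote are placeholders):

  #!/bin/sh
  set -e
  mysqldump nextcloud > /mnt/second-disk/nextcloud.sql     # DB dump
  rsync -a --delete /var/www/nextcloud/ /mnt/second-disk/nextcloud/
  rclone sync /mnt/second-disk remote:nextcloud-backup     # off-site copy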


HRCloud2 has built in backup capability. https://github.com/zelon88/HRCloud2

Full disclosure, I'm the developer.


Have you thought about using something like https://www.hetzner.com/storage/storage-share

Pretty cheap, it takes away the administration burden and you are the one in control :)


Then maybe it'd be less expensive (money and time) to pay for a Nextcloud account?


> You don't have a nextcloud problem, you have a system administration problem.

Those aren't mutually exclusive. Sure, better dev ops would make major upgrades safer and easier. But for a hobbyist self-hosting their own instance, an LTS release would be a godsend to save them hours of unpaid work.


A good hobbyist should challenge themselves from time to time ;)


Who said it was a challenge? When does grunt work move beyond challenge to the point of not being worth it? I got out of self hosting because my time is too valuable. It did teach me lots of new skills, so that was great! However, somebody not wanting a time sink is not them avoiding a challenge.


This is the boat I'm in. And even if you do "everything right" and have snapshots before & after every update, you still need to actually debug why the update failed in the first place. So even then, LTS releases would be a greatly appreciated feature.


Maybe they're challenging themselves on things that interest them more...and just want a functioning Nextcloud instance?


We wouldn't tell Google engineers to mess with his Google Drive in prod... why should he sacrifice data availability and integrity?


> the docker-compose file along with the configuration and data are stored in nextcloud's own dataset.

What a great idea!


As someone who hosted his own as well, I agree with your sentiment exactly. I've taken down the server that I had hosting my own instance before this, and I am delaying setting up a new one simply because of what you've said here.

I imagine that those of us that want that kind of stability are encouraged to go with their hosted offering, but hopefully they'll see the value in having a slower and/or more stable release process.

For what it's worth, the upgrade process for the last few major versions went mostly without a hitch for me. I do have to give them credit for that. The only thing I continue to struggle with is the encryption design. I always end up with some odd state for some files that I cannot recover from.


See my sibling comment for an idea on how to set Nextcloud up for easy maintenance.

Disclaimer: I have updated several versions, but haven't upgraded to version 21 yet (it just got released).


I am a huge fan of Nextcloud and I couldn't agree more. My upgrade path is to just start a new instance with a fresh sync, because I was traumatized by a turbulent and uncertain upgrade on all of my instances once about two years ago. This is a product I love and choose to rely upon for my data, every day. I'm interested in the bells and whistles and I want the platform to succeed - my preference would be an LTS for my critical data, and the option to spin up newer features separately to test before adoption.


This has spawned a huge thread that I honestly didn't read all of, but someone else mentioned to me they use the 'Community' Snap package.

I did not set mine up with this, but it apparently requires a lot less hands on maintenance. In your case you might be interested.

https://docs.nextcloud.com/server/21/admin_manual/installati...

https://snapcraft.io/nextcloud

Apparently it auto-updates for you, but I'm not sure if it will upgrade major versions, or only security patches.


The snap does upgrade major versions, although from my experience it tends to be on a delay to ensure stability.


Makes sense. Maintenance of self hosted services can be quite annoying, but I guess that’s the price we pay for taking control from the overlords :)


The answer to that ought to be `apt-get install nextcloud-server` and let the distro maintainers step in, really. Unfortunately because you can't skip versions on upgrade, it's not clear how to cleanly do that.


That’s handled by the package manager.


The package manager would need to have access to the code of all the intermediate versions to run the upgrades safely. That might work for some situations, but it's a hell of an overhead in general.


They also raise PHP version requirements. To keep my NextCloud on a supported version, I had to upgrade the Linux distribution on my server (it was not EOL or anything) to get a PHP that the supported NextCloud versions could run on...

I just wanted to keep getting bug/security fixes for NextCloud.


If you are running Debian or Ubuntu use https://deb.sury.org/ for PHP.
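Adding it is a handful of commands on Debian (from their instructions as I recall them; double-check the current README):

  sudo apt-get install apt-transport-https lsb-release ca-certificates curl
  sudo curl -sSLo /etc/apt/trusted.gpg.d/php.gpg https://packages.sury.org/php/apt.gpg
  echo "deb https://packages.sury.org/php/ $(lsb_release -sc) main" | \
    sudo tee /etc/apt/sources.list.d/php.list
  sudo apt-get update && sudo apt-get install php7.4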


In my experience of running my own Nextcloud instance for over 4 years, I've never had an upgrade break my instance. Caveat: I'm on the stable channel and I only update when the client prompts me to update, which is a few point releases into a new release.


That's been my experience as well. I have run Owncloud -> Nextcloud (when it was first released) since at least mid-2015, and I am on the same instance I first built.

I stay on the stable channel, and I get a notification if an app or nextcloud itself has an upgrade. The biggest issue is that the "Security & setup warnings" sometimes tells me I need to upgrade my database (and gives me the exact commands to do it) after an upgrade.

I will note that the upgrade has taken longer over the years (it used to take 5 minutes, now it can take over 30 minutes), and I think there is an issue with the backing up stage.


Also started with OwnCloud and moved to NextCloud. If I'm not mistaken I've been upgrading the same NextCloud install since version 11 or so. Now on 19.

Every time it's basically:

  mv nextcloud nextcloud.r19
  tar -zxf nextcloud-r20.tgz   # the tarball unpacks into a fresh nextcloud/
  cp nextcloud.r19/config/config.php nextcloud/config/config.php
  # set permissions
  sudo -u php php nextcloud/occ upgrade
Then just log into the web UI and check everything's still sane and follow any upgrade suggestions it has (frequently to run commands to add columns/indexes to the database).

The instructions they provide for a manual upgrade have never failed for me: https://docs.nextcloud.com/server/latest/admin_manual/mainte...

As far as software that needs upgrades, NextCloud has definitely been one of the least annoying things I have to deal with.


Uhh... that sounds awful. OwnCloud just has me click a button.


NextCloud also has a web upgrader accessible from the admin panel. It's almost certainly based on the same code.

I don't know why they go about it in such a manual way. If you don't like the web installer, there's a command line version that does everything for you (upgrader.phar).


> I don't know why they go about it in such a manual way.

Because I don't generally give the code permission to modify itself. Principle of least privilege and all that.

Outside of this one specific situation (upgrades) it's not needed, and the rest of the time it's just one more layer of security in the way of various forms of exploit. (Maybe it's just trauma from the 8,000 forms of WordPress exploits back in the day, and from finding half of WordPress having random code added to it to persist exploits/randomly redirect people to scam sites/etc.)

In the end it adds like 5 minutes of inconvenience to my upgrade process.


Yep, WordPress taught me this lesson too. I now understand why we have so much tooling to lock down processes.


> I will note that the upgrade has taken longer over the years (it used to take 5 minutes, now it can take over 30 minutes)

In their defense, the software has grown a lot and does a lot more things nowadays; it's understandable that the upgrade process takes longer.


Yeah, I was assuming it was that, but I do notice that "backup" takes a long time. As soon as backup is done, it's on the order of 4-5 minutes. But then again, I store something like 5 TB worth of files on my Nextcloud, so it could be me as well.


> I store something like 5 TB worth of files on my Nextcloud

Ah, that might be it.

IIRC there's a database entry for each file, and on upgrade it also runs database migrations to adapt to the new schema, so if you've got a lot of files that might take a while.


Yeah that really wouldn't surprise me. In the end, the upgrade works, so I really haven't looked into what causes the problem.


> 1) I upgrade from 17->18->19->20->21 and hope nothing breaks!

I've done this since about version 11. And I usually only get around to upgrading every few versions so it's been like... 11->12->13->14, 14->15->16, 16->17->18->19.

I do each upgrade one by one. Upgrade, login, check system status and resolve any additional steps it suggests (e.g., adding indices/columns, etc) then jump right into the next upgrade.

I've never had one fail on me. Even doing 3-4 major versions at a time, it's usually less than a half-hour problem.
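The suggested steps are usually just occ one-liners run from the Nextcloud directory, e.g. (web server user varies by install):

  sudo -u www-data php occ db:add-missing-indices
  sudo -u www-data php occ db:convert-filecache-bigint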


Haha, thanks to your comment I noticed I'm using Nextcloud 16. I'm going to do a few upgrades now and I'll tell you how it goes.

Edit:

Migration 18->19 is now stuck on

  Step 4 is currently in process. Please reload this page later.

which is the step that downloads the zip with the new version...

Edit2:

I restarted the installation multiple times, increased the php-fpm and nginx timeouts to 660 seconds, and am still getting this error.

Not today...


The issues with timeouts can be avoided if you use the command line upgrader:

  % php /var/www/nextcloud/updater/updater.phar


Yea, it didn't work as expected:

  sudo -u nginx php updater/updater.phar
  Nextcloud Updater - version: v18.0.9-8-g27dac77
  Current version is 18.0.14.
  PHP Fatal error: Uncaught Error: Call to undefined function NC\Updater\curl_init() in phar:///home/owncloud/updater/updater.phar/lib/Updater.php:455
  Stack trace:
  #0 phar:///home/owncloud/updater/updater.phar/lib/Updater.php(119): NC\Updater\Updater->getUpdateServerResponse()
  #1 phar:///home/owncloud/updater/updater.phar/lib/UpdateCommand.php(147): NC\Updater\Updater->checkForUpdate()
  #2 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Command/Command.php(256): NC\Updater\UpdateCommand->execute()
  #3 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(820): Symfony\Component\Console\Command\Command->run()
  #4 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(187): Symfony\Component\Console\Application->doRunCommand()
  #5 phar:///home/owncloud/updater/updater.phar/vendor/symfony/console/Application.php(118): Symfony\Component\Console\Application->doRun()
  #6 phar:///home/owncloud/updater/updater.phar/updater.php(10): Symfony\Component\Console\Application->run()
  #7 /home/owncloud/updater/updater.phar(10): require('...')
  #8 {main}
    thrown in phar:///home/owncloud/updater/updater.phar/lib/Updater.php on line 455


You're missing php-curl (or it's installed but the module is disabled). I'd double-check that you have all of NextCloud's dependencies installed[1]; php-curl is one of the required ones.

[1]: https://docs.nextcloud.com/server/21/admin_manual/installati...
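On Debian/Ubuntu that would be something like:

  sudo apt install php-curl    # or phpX.Y-curl for your PHP version
  php -m | grep -i curl        # confirm the CLI PHP loads the module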


I agree - I wish it was more stable and a little less promiscuous. Having your instance access the cloud for apps and updates is sort of counter to the "control your own server" mentality.

Sort of like docker - do you have to go through their root namespace for everything?


They can offer that with a subscription.


> The High Performance Back-end for Files in Nextcloud is an optional, binary component developed in Rust. It is capable of maintaining a direct connection with desktop and web clients, providing file change and notification updates to the clients.

Petty as heck, but Nextcloud being entirely PHP (AFAIK) until now has been a huge turn-off. Moving some critical online bits to Rust is a huge indicator to me that the team is taking resource consumption & performance optimization seriously.


When it comes to self-hosting, there are 2 key components: the service software itself (ie Nextcloud), and the network plumbing to connect everything together. The networking has gotten quite complex due to NAT, HTTPS, DNS, IPv4 exhaustion, etc.

I maintain a list of software to help simplify the networking bits:

https://github.com/anderspitman/awesome-tunneling


Thanks for the reference. Spinning up individual containers has become quite easy these days, but agree networking still takes some work to get everything playing together nicely.


I'm not sure there is such a thing, but I would like to see some CRDT format being adopted as a first-class data structure inside of nextcloud. This could be built upon for things such as the Whiteboard, but also note-taking applications (Carnet, nextcloud notes...), contacts, and more.

Also, I wish nextcloud talk was using Matrix, there seems to be much duplicated effort between the two, and I am not even sure Nextcloud Talk federates.


To the people who have been using Nextcloud successfully for years: is your usage mainly PC-PC or PC-iOS synchronization? Is anyone here running PC-Android synchronization with files that change more often than once a day?

My experience with the Nextcloud Android app is that the automatic sync is quite limited (eg. https://github.com/nextcloud/android/issues/757, https://github.com/nextcloud/android/issues/19). Every change has to be manually synced by opening the app and navigating to the Sync option for each file. This is pretty much a dealbreaker for me, but it looks like a lot of people are using Nextcloud successfully. So I'm curious how your usage differs from mine - do you only use it for static unchanging files that don't need to be synchronized that often, or is the sync situation smoother on other devices?


I use it daily for syncing among multiple Linux PCs and two Android/Lineage devices. I actually like that mobile sync is manual because my usage is so heavy it would involve moving around a lot of data unnecessarily.

Setting up my wife with NC on mobile, however, reminded me of lots of ways in which I've accustomed myself to some pretty weird behaviors, like the manual syncing, or the built-in text editor that doesn't load without being online.

I love NC (I use it both for personal needs and with students in my lab) but there are definitely UX issues that present a barrier to new users.


I use it primarily for the automatic photo upload. For anything else that changes rapidly, I use a dedicated app. I've never had major issues with the core Nextcloud app, but I also don't use it for much beyond the photo upload.

DAVx5 for caldav stuff, Nextcloud Notes for notes.. These apps seem to handle the sync separately on their own.


I'm a little bit worried about the shift from a 'cloud' storage solution to groupware software... I only need the storage bits, but it seems they are focusing on the groupware thing lately...


This is my problem with it as well. I used to have a self-hosted Nextcloud instance, but my main usage was the file syncing. Nextcloud seems to be poor to decent at everything it does, but never great. So if your goal is to have a suite of mediocre appliances that do the bare minimum, Nextcloud is good. But all I wanted was a nice and quick way to sync all my files (I'm talking 500k files here) and have some sort of versioning in case I fuck up, so I moved to syncthing.


I moved from Syncthing (and Seafile) to Nextcloud because I was missing one key thing: the ability to share files (by providing a URL, or to a group (think common files with a spouse)).

Otherwise I completely agree with the sentiment.


I use Seafile, and it has the ability to share with other users on your Seafile instance and to create a public link (no account required) for uploaded files. Is that what you're speaking about? I tried Nextcloud about a year ago: I spun up a Nextcloud and a Seafile instance, and Nextcloud was much slower for uploading and downloading files.


Syncthing is awesome as a Dropbox-like service for computers. I've set up a syncthing share as a folder inside of Nextcloud, which is enabled as "External Storage." This gives me the best of both worlds: sharing between computers is rock solid, the mobile use case is a lot more reasonable, and I can share files.

I don't like syncthing on mobile because it needs to maintain its connection to sync and therefore drains the battery. Also, there isn't a way to keep less than 100% of a particular share local to the phone, which isn't usually what I want on my phone.
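If anyone wants to replicate the setup: the External Storage mount can be added in the admin GUI, or via occ; a sketch (the syncthing folder path is a placeholder):

  sudo -u www-data php occ app:enable files_external
  sudo -u www-data php occ files_external:create /Syncthing local null::null \
    -c datadir=/srv/syncthing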


It does work with Nextcloud, though. [1] is the Nextcloud logo linked from my instance, [2] is the direct link.

Or am I misunderstanding your point?

1: https://cloud.zwog.org/index.php/s/TmKoyWqxXaGAnqo

2: https://cloud.zwog.org/index.php/s/TmKoyWqxXaGAnqo/preview


Yes, it does work with Nextcloud - and this is the reason I moved to Nextcloud from Syncthing (and, previously, Seafile).

I was just commenting on your migration to Syncthing, which is a superior syncing app IMHO. It is just that when I was using it I realized that I was missing the share ability, which is available in Nextcloud; hence my (somewhat unhappy) travel the other way round, from Syncthing to Nextcloud.

I think that Nextcloud is trying to cover too many things, with half-baked apps.


I presume that is where the money is.

Either independent contributors who make money as consultants, or a foundation that gets sponsorship, or a commercial company behind the project: enterprise has the money. So inevitably, it will gravitate towards more enterprisey features.

I'm not saying that I have knowledge about what happens here with Nextcloud. But in FLOSS this has been seen often, from Drupal to LibreOffice: it moves away from 'consumers with simple needs' and towards 'heavy users'.


I feel precisely the opposite. Replacing Dropbox is fine, but replacing the majority of Google's services is waaaay more useful.


They are focusing on enterprise features, because that's where the money is.

I also wish they had a separate "light" offering with just the storage and a few basic apps. As it is, I think they are stretching their resources, and some part of their offering is going to suffer as a result (we already saw quite a few severe bugs in the past year, and some basic functionality, like file locking or caching, is still not right). Personally I'm only staying with Nextcloud because there's unfortunately no good alternative for now.


Actually, there are quite a lot of self-hosted cloud storage projects, but very few that provide the other services Google has the biggest lock-in on: calendar, contacts, notes, galleries, bookmarks, collaborative editing, etc.

So personally I'm very glad they are not just trying to be yet another cloud storage tool, but are also working on these, IMHO more important, cloud services.


The project is great, and I made a simple setup in Docker to play around with it. There is an official Docker image you can use: https://hub.docker.com/_/nextcloud.

The problem I see with similar services is that they're all trying to pack everything in. You can also install external components into your system.

What it means in practice is a huge area for security vulnerabilities, a challenge to host/upgrade it at home on weekends, and a very complex user interface (easy to mess up the privacy settings).

I'm really scared to host such systems because of all the related issues. Maybe it isn't a big deal at all.

Probably, most home use cases can be covered by a simple XMPP server (video calls, group chat, image/link sharing) plus some shared folder across the network to store files/photos.


I haven't used Nextcloud before; do you happen to know if there's an easy way to get just the file sharing?

I don't care for whiteboards or collaboration, I just want a Dropbox equivalent where I can upload files and give other people public or one-time or expiring links to download/wget.


When you set up Nextcloud, it has a wizard prompting you for "apps" to install. Can't remember what the choices are exactly, but there's a "simple" choice that is just file sharing.


Yes, you can disable the internal apps it ships with if you don't want to use them and just not install any third-party ones either. Only caveat is during one upgrade I noticed some of them had been re-enabled so now I make sure to check each time.


Probably the simplest way would be to rent S3-compatible storage at Linode/DO and use a client like https://mountainduck.io to mount it in your system.


I use Nextcloud for almost all the stuff I do in day-to-day life. I run it in Docker swarm mode on a 5-year-old PC running Debian at home, with Freemyip updating my dynamic IP address.

What do I use it for?

1. Notes (FSnotes, syncing md files)

2. KeePassXC for passwords (synced via Nextcloud)

3. Photo upload (from Amazon & Google)

4. My recordings & videos

5. Documents (moved from G Drive)

6. Bookmarks

Where would I like to see improvements? Photos: I badly want this to be usable on mobile phones.

I am happy overall with Nextcloud. The only time I screwed up is when I didn't know about the upgrade process; I tried moving from 18 to 20 and it went totally wrong.


Could you point me to resources on the upgrade? I'm at 20 now (what I originally installed) and am a bit miffed at the process.

I use docker-compose and nextcloud is much different than all my other containers.


Personally, I love NextCloud as contacts/calendar storage. I have an instance from a cloud provider, I use DAVx5 [1] to sync with my Android phone, and I set up a CalDAV account on macOS, so I can see Nextcloud calendars in Calendar.app. Sadly, NextCloud's CardDAV does not seem to work on macOS, but that's a relatively minor issue.

[1]: https://www.davx5.com/


Big kudos to DAVx5; it helped me set up calendar integration on Android.


I have it working on macOS for me.


I’m in awe of something like Debian, where entire mirrors have been served from ancient computers with reasonable performance. Perhaps there is a configuration issue, but at my work Nextcloud is one of the slowest services aside from Jira. I actually try to avoid opening Jira and Nextcloud because they're frustratingly slow to browse.

Edit: I was eager to see the link with the 10x performance number. I do hope it improves because we are in need of a service like that.


"I’m in awe of something like Debian where entire mirrors have been served on ancient computers with reasonable performance."

Static file serving is easy. If you don't even need SSL because it's all signed content, it's really easy. Linux has a syscall [1] where you can tell the kernel "ok, now, send this file through this socket without bothering userspace anymore", meaning you get full kernel-mode file transfer without even context switching. I've got static file servers serving similar types of content shipping out dozens to hundreds of megabytes per second that barely hit 3% of one CPU usage.

[1]: https://man7.org/linux/man-pages/man2/sendfile.2.html
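A minimal sketch of the idea in C (error handling trimmed; sock is assumed to be an already-connected socket):

  #include <fcntl.h>
  #include <sys/sendfile.h>
  #include <sys/stat.h>
  #include <unistd.h>

  /* Stream a whole file to a socket without copying it through userspace. */
  ssize_t serve_static(int sock, const char *path) {
      int fd = open(path, O_RDONLY);
      if (fd < 0) return -1;
      struct stat st;
      fstat(fd, &st);
      off_t off = 0;
      while (off < st.st_size) {
          /* the kernel moves file pages straight to the socket buffer */
          ssize_t n = sendfile(sock, fd, &off, st.st_size - off);
          if (n <= 0) { close(fd); return -1; }
      }
      close(fd);
      return off;
  }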


Browsing a directory of essentially static artifacts is really slow in Nextcloud. Git isn’t the best place to store binaries and assets, and we tried Nextcloud as an alternative since we are already hosting it.


Nextcloud isn't serving static files; it's serving a database hit in a PHP environment, throwing away a lot of stuff on every connection and doing all sorts of things. Presumably this newer backend does less stuff (as that is the key to performance). Debian serves static files.


I don't think there was any doubt that it was an architectural question. I think the essence of what's being asked is that when Jira and Nextcloud should be doing next to nothing (based on the inherent complexity of what's materially being done), they seem to have to do quite a lot.

> Presumably this newer backend does less stuff

Presumably not in terms of removing features, but in terms of having been refactored.


Does that mean it's now reliable when putting it in a public-facing place? An orga that shall not be named used Nextcloud for various important things and had it connected to the Internet, which for modern open source software is usually okay. But then a friend found that you can take the whole system down from a 56k modem (pre-auth), and we had to recommend the orga keep it internal, which was an issue because IIRC they also used it for file sharing with externals.

As far as I know it's very rare that someone bothers with exploiting denial of service bugs, but given how trivial (triggerable by hand) this was, it's still a bit risky.

The bug was of course reported to them, but closed as wontfix/dontcare because there were too many other ways of taking it down already. PHP was blamed, IIRC (which really isn't the culprit).


I'm really not sure why you are asking this question. Nextcloud is used by thousands of enterprise-level & small private users on public-facing servers.

Can you be more clear about what you mean by "a friend found that you can take the whole system down from a 56k modem"?

I have no idea what you mean by that. You mention denial of service. Are you claiming a Nextcloud instance can be DoS'ed by a single computer with a 56k internet connection?

Respectfully, that is quite a sensational claim/stance to take.


Yeah, I'm being a bit more vague than I'd like. I should have taken the effort of going to my PC (am on phone), where I have a password manager, to log in to the account under my real name. I don't want to connect this one too much.

Without posting the specific exploit: the issue is with the server-side sleep() in the login system. If you spawn enough threads, which you could easily do in the given time from even a 56k modem, it will for some reason crash the whole thing. Tested with a couple of friends, and all the instances had to be restarted manually; none of them (running on different web servers) withstood it. It's not clear why, as the sleep should simply run through and then unblock the threads; for some reason that's not what happens.

Again, this was reported and they don't care. If you want more info, this should be enough to reproduce it without much effort, and/or to ask them about it (not sure if they made the ticket public; the initial report was presumably private due to the pre-auth/unconditional nature).


Fair enough, no need to give up any identifying information :)

That doesn't sound good. I guess as a personal user I'm not too worried about being DoSed, but that would certainly be more of a concern for a large organization evaluating the software.

If that is the case, then I certainly have an 'eyebrow raised'.


I tried picloud, which packages Nextcloud up for the Raspberry Pi 3B+. It really wasn’t able to handle even a single user, but maybe I had something misconfigured.


If you check my earlier comments, I often praise Nextcloud and the team behind it, but this is even crazier by their own standards!


I'm still not convinced this is better than a shell account with a cgit and Prosody instance.


What would be great is to allow a client to connect to more than one Nextcloud instance.

For example, from my machine, I can connect to my Nextcloud, and also to some folders shared from my group's Nextcloud.


You can connect multiple accounts from the desktop client, if that's what you mean... If you mean Nextcloud to Nextcloud, there's also federation, but I haven't really tried that as I've never needed it.


What client are you using? I have that capability in my Android client, Linux client, and Windows client, and it works extremely well.


The Linux client can connect to multiple Nextcloud instances. It's been that way for years.


Nextcloud is awesome. I have been using it on my self-hosted cloud and it's been fantastic. Some features are better than the cloud providers'.


I wish companies would stop using emojis completely. It's just weird.



Looks to me more like a Reddit hug of death: https://www.reddit.com/r/rust/comments/lpusc7/nextcloud_is_n...


A /r/rust reddit thread is nothing compared to front page HN.


To be fair, at the time I wrote my comment the Reddit post had 400+ votes and the HN one was in the early double digits IIRC.


Anyone remember the term "slashdotted"?


Oh to be young again


No, where did this come from?


https://slashdot.org/ was quite a popular source of tech-related news back in the very late 20th and early 21st century.

https://en.wikipedia.org/wiki/Slashdot




