"I’m in awe of something like Debian where entire mirrors have been served on ancient computers with reasonable performance."
Static file serving is easy. If you don't even need SSL because it's all signed content, it's really easy. Linux has a syscall [1] where you can tell the kernel "ok, now send this file through this socket without bothering userspace anymore", meaning you get full kernel-mode file transfer without even context switching. I've got static file servers serving similar content, shipping out dozens to hundreds of megabytes per second while barely hitting 3% of one CPU.
[1]: https://man7.org/linux/man-pages/man2/sendfile.2.html
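For illustration, a minimal C sketch of that pattern (the port and filename here are made-up assumptions, and a real server would check every return value):

    /* Serve one file over one connection using sendfile(2): the kernel
     * moves the bytes from the file to the socket directly, so this
     * process never copies or even sees the file data. */
    #include <fcntl.h>
    #include <netinet/in.h>
    #include <sys/sendfile.h>
    #include <sys/socket.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        int srv = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in addr = {
            .sin_family = AF_INET,
            .sin_port = htons(8080),                 /* hypothetical port */
            .sin_addr.s_addr = htonl(INADDR_ANY),
        };
        bind(srv, (struct sockaddr *)&addr, sizeof addr);
        listen(srv, 16);

        int conn = accept(srv, NULL, NULL);
        int fd = open("artifact.deb", O_RDONLY);     /* hypothetical file */
        struct stat st;
        fstat(fd, &st);

        /* No read()/write() loop and no userspace buffer: the kernel
         * pushes the file out and advances `offset` as it goes. */
        off_t offset = 0;
        while (offset < st.st_size)
            if (sendfile(conn, fd, &offset, st.st_size - offset) < 0)
                break;

        close(fd);
        close(conn);
        close(srv);
        return 0;
    }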
Browsing a directory of essentially static artifacts is really slow in Nextcloud. Git isn't the best place to store binaries and assets, so we tried Nextcloud as an alternative since we were already hosting it.
Nextcloud isn't serving static files; it's serving a database hit in a PHP environment, throwing away a lot of state on every connection and doing all sorts of other work. Presumably this newer backend does less stuff (as that is the key to performance). Debian serves static files.
I don't think there was any doubt that it was an architectural question. I think the essence of what's being asked is why Jira and Nextcloud, which should be doing next to nothing (given the inherent complexity of what's materially being done), seem to have to do quite a lot.
> Presumably this newer backend does less stuff
Presumably not in terms of removing features, but in terms of having been refactored.