I have to say that even as a person who strongly prefers Debian, one of the greatest things about Arch is its wiki. As reference material for many common daemons and Linux-related subsystems, it's second to none.
I would recommend that everyone maintain an Arch virtual machine and know at least its basic differences (vs Debian or CentOS, etc). By doing so, you can make the most effective use of the wiki, understanding which material in it is specific to Arch vs specific to the individual daemon being documented.
Hear hear! If I were stranded on a deserted island that only had unix computers on it and could bring a single resource, Arch's wiki would be it.
Not sure if there is anything on there that's actually specific to Arch Linux, as everything you can run (or that is run) on Arch Linux can be run elsewhere. Even pacman and the articles about packaging are not specific to Arch but to pacman and its packaging. Do you have any examples of Arch-specific articles on there?
Mostly I would say the Arch-specific material relates to how Arch's package management system updates daemons, and any Arch-specific systemd configuration vs how systemd is set up on a more 'mainstream' Linux like Debian stable. Generally the Arch-specific stuff is less than 5% of the content, compared to general configuration info about the daemon.
And other fundamental system level stuff like how grub2 and booting is handled on Arch, or full disk encryption, etc.
I have also seen some instances where the Arch maintainers who package a daemon choose different installation and configuration locations in the file tree. And cases where the default out-of-the-box configuration for the daemon is different on Arch than on another Linux.
Yes, love the wiki! I'd say that's one of the reasons I (and I'm guessing others) decided to try Arch. If that's where you are getting the best info, it's a good indication of the quality of the distribution and active community.
I know Arch users at times have a reputation of being not very newbie friendly, but when you have such a great resource like the wiki, it really does answer just about anything. I've used it to set up full disk encryption, resize partitions, and fix many random quirks in all sorts of programs. Happy to run Arch as my only OS on a desktop, laptop, and Pi.
Arch is supposedly a good distribution to learn Linux under the hood, but I learned a lot about Linux and the CLI before Arch even existed (back then you had Slackware, which was all about DIY; later on Gentoo had all the rice, ...). I frankly don't understand the hype. If you want a rolling release, you can also run Debian Testing or Unstable, or any BSD-CURRENT. From a technical PoV I don't see what Arch does differently (in contrast to an OS like NixOS, QubesOS, Tails, or Kali). The Arch wiki is excellent, so perhaps that is what Arch brings to the table, the AUR being the other (it's huge).
Fragmentation, when it makes sense (such as the examples I mentioned), is OK, but all these dime-a-dozen distributions won't be missed by me. Isn't it bloody annoying enough already when you're a Debian person working at a Red Hat shop? Why would I add another factor to that? And a rolling release OS is great on your primary desktop. It isn't on a server, nor on a secondary computer on the side.
Hence, instead of Arch I recommend playing around with both Debian- and Red Hat-based OSes to get familiar with both, as those are the industry standard (not Arch). That, and a distribution tailored for a purpose, such as the ones I mentioned. The other distribution whose popularity I never understood is Manjaro. It used to be top on Distrowatch (it's #2 now). Why, though? I don't know anyone who uses it, and it has no market penetration in the server space AFAIK.
That being said, we are talking in a thread for and by Arch users, and it ain't nice to break up their party. I'll end my post by wishing everyone an enjoyable Arch Conf 2020!
> From a technical PoV I don't see what Arch does different
I can't speak for everyone, but here are some things I see:
- Arch minimizes modification of code in packages, which is nice in reducing bugs or idiosyncrasies introduced via distros.
- Pacman (and its surroundings) is far more convenient, far more robust, and generally far less of a headache to deal with than apt. Hell, even figuring out what flags apt-get install accepts is an ordeal (--help? you must be joking?), let alone the nightmare it is every time you need it to do a serious job.
Not sure if these qualify as "technical" to you, but they're important. (Unless they don't and you consider anything nontechnical to be unimportant.)
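To make the convenience point concrete, here's a rough side-by-side cheat sheet of everyday operations. The pacman flags are real; the apt/dpkg commands are the closest equivalents I know of, so treat them as approximate:

```
pacman -Syu            # sync repos + full upgrade    ~  apt update && apt upgrade
pacman -S foo          # install package foo          ~  apt install foo
pacman -Rs foo         # remove foo + unneeded deps   ~  apt autoremove foo (roughly)
pacman -Ss term        # search the repos             ~  apt search term
pacman -Qo /path/file  # which package owns a file?   ~  dpkg -S /path/file
```

The pattern behind pacman's flags (S = sync with repos, R = remove, Q = query the local db, with lowercase modifiers) is what makes it easy to remember once it clicks.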
I first started using Debian with 1.3 and even admined a substantial number (hundreds) of Debian systems in the early part of my career. I still find it very comfortable.
Up until a few years ago I'd never even looked at Arch. When I got into it, I went pretty deep and mostly used it for a year or so.
Then recently I tried going back to Debian for a while. I didn't even bother with stable, and installed a testing snapshot, because I like new software and Arch hooked me on rolling releases.
I was doing all this on a quite new Thinkpad which required some tweaks to get things working. I found myself ending up on the Arch Wiki over and over. The Debian wiki does have some good Debian specific stuff, but it's also full of really old and maybe outdated information.
And then Debian didn't have some needed audio firmware packaged, and Arch did, so I made the switch back to Arch. I'd at first resisted it, because Arch has a much higher upfront cost to get installed and going, especially with my more complex btrfs-on-LUKS2 (no LVM) setup. (I also had this working on Debian; it took a bit of doing.)
And now I'm probably going to stick with Arch (on my laptops), though I remain wistful about Debian. It was my first real Linux experience and a lot of my early career.
The things that strike me about Arch vs Debian:
They have comparable numbers of packages available, and if you count AUR, Arch has basically everything. But Arch has FAR fewer people working on it.
By my count Arch has:
57 trusted users (they do packages in the community repo) https://www.archlinux.org/people/trusted-users/
28 developers (they do the core and extra package repos, and other things) https://www.archlinux.org/people/developers/
85 people in total
Debian, by contrast, lists on the order of a thousand developers, so it has over 10x more people working on the project.
Arch is generally more up to date than Debian testing and unstable (a lot more than stable, but that's obvious) but not by a ton. Generally on Arch for popular packages you'll see a new version show up within 24-48 hours and you'll be rebooting into new kernels very often.
Latest kernel.org stable kernel is 5.9.6
Arch is on 5.9.4 https://www.archlinux.org/packages/core/x86_64/linux/
Debian is on 5.9.1 https://tracker.debian.org/pkg/linux
I don't know the specifics, but Debian is probably even more conservative with the kernel than it is with other things. That said, you can also run LTS kernels on Arch.
Arch rarely changes upstream defaults, so sometimes you have to tweak and configure things. Debian generally has sensible defaults and takes care of things for you, but does modify upstream more frequently.
I find apt, apt-get and dpkg to be a bit more intuitive to use. But under the hood Debian's package management system is a lot more complex than Arch's relatively simple PKGBUILD.
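To give a sense of how simple that is: a complete Arch package recipe is a single PKGBUILD file, which is just a bash script with a few well-known variables and functions that makepkg reads. Here's a minimal sketch; the package name, version, and contents are made up for illustration:

```shell
# Minimal PKGBUILD sketch; "hello-demo" is a hypothetical package,
# not anything in the real repos.
pkgname=hello-demo
pkgver=1.0
pkgrel=1
pkgdesc="Hypothetical one-file demo package"
arch=('any')
license=('MIT')

package() {
  # makepkg sets $pkgdir; everything installed under it ends up in the package.
  install -d "$pkgdir/usr/bin"
  printf '#!/bin/sh\necho "hello from %s"\n' "$pkgname" > "$pkgdir/usr/bin/hello-demo"
  chmod 755 "$pkgdir/usr/bin/hello-demo"
}
```

Running makepkg in a directory containing this file produces an installable .pkg.tar.zst that you install with pacman -U. A real PKGBUILD would add source=() and checksums, but the shape stays this small.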
My last point, and something I've been pondering, is the results of their different philosophies. As an outsider observing and using the results of each community's efforts, here are some things I've noticed.
Arch has a pretty extreme view on simplicity. But because of that, they have a rich, and fast moving ecosystem, with many many packages available.
Some of those choices and results:
rolling release - don't bother with releases, freezes, promoting from one branch to another, etc. This saves a TON of effort, and for desktop / personal use, works great. It's by far what I prefer. Even when I was using Debian, I just would run testing or unstable anyway, so would be rolling there too. I haven't come up with a non-insane way to make this work for production systems, but still pondering that.
no real installer - Arch just gives you a boot image that basically lets you manually partition your drive, plus some tools (pacstrap, arch-chroot) to get things set up and going. This is radically simpler, and it also saves a ton of developer effort: so much less to test and maintain.
avoid modifying upstream - Also a massive time saver. Just package up what upstream provides and constantly track that.
wiki instead of doing any real setup or defaults - this is also a huge time saver for the developers. Just let people document how to do things and provide the barest building blocks. Most Arch packages just put files in place and leave all the rest up to the user.
don't split packages - Arch generally doesn't split packages out like Debian does with the -dev packages, etc. This seems simpler and easier to deal with to me. At the cost of a bit more disk space for users.
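On the "no real installer" point above: for anyone who hasn't seen it, the core of the manual installation really is just a handful of commands run from the live ISO. A heavily abbreviated sketch, with partitioning, locale, users, and bootloader steps omitted (/dev/sdX2 is a placeholder for your root partition):

```
mount /dev/sdX2 /mnt                       # mount the future root filesystem
pacstrap /mnt base linux linux-firmware    # install the base system into it
genfstab -U /mnt >> /mnt/etc/fstab         # generate fstab from current mounts
arch-chroot /mnt                           # chroot in to finish configuration
```

That's the whole "installer": the same mount/bootstrap/chroot primitives any distro uses internally, just exposed directly.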
All that said, everything I've listed above basically offloads a ton of work to each individual user. That may be the fundamental trade-off here. Debian tries to do a lot more for its users, but it needs 10x more developers to do it.
Arch's simplicity and top performance, as well as its rolling release, high-quality packages, and community (yes, docs too!), definitely make me more productive every day.
As a happy user since 2015, thanks for such an amazing distro and experience.
I use a btrfs root volume with btrfs subvolumes instead of separate partitions. I have snapper setup with a pacman hook to take a snapshot before and after every pacman run. So worst case, if something goes horribly wrong, I can boot from an Arch install image into a previous snapshot and unbreak things.
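For reference, pacman hooks are small ini-style files dropped into /etc/pacman.d/hooks/. The snap-pac package ships ready-made snapper hooks, but a hand-written pre-transaction hook looks roughly like this (the snapper config name "root" is an assumption about your snapper setup):

```ini
[Trigger]
Operation = Install
Operation = Upgrade
Operation = Remove
Type = Package
Target = *

[Action]
Description = Creating pre-transaction snapper snapshot...
When = PreTransaction
Exec = /usr/bin/snapper --config root create --type pre --description pacman
AbortOnFail
```

A matching PostTransaction hook with --type post pairs the snapshots, so snapper can show exactly what each pacman run changed.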
I've been using Arch Linux on a VM since 2015. On that VM I host some services for personal projects: gitolite, Grafana, InfluxDB, WireGuard, and some self-baked daemons (they are also deployed as Arch Linux packages; it's not too hard, and much cleaner and easier to deal with in the long run).
I update weekly to daily, but sometimes also a month goes by (e.g., if I'm on vacation or the like), never had an issue, never had any breakage. I reboot on kernel updates, downtime is a few seconds which I can deal with.
The VM is only single core, 4GB memory, 40 GB disk space - chugging along just fine.
Personally, I'd always feel safe and good with choosing either Arch Linux, Debian or Alpine Linux as VM/CT distribution (my underlying hypervisor would be Proxmox VE, which derives from Debian).
I do it weekly and it's fine for me. Of course, I first evaluate the nature of the upgrade; you know, kernel or Arch base-related upgrades especially.
Certain third-party or AUR packages are a different case; those can usually be upgraded ASAP without friction.
I'm giving my personal experience here, and of course it depends on the case, but even the kernel or Arch-related upgrades I do weekly go smoothly.
I'm an Arch newbie (started using it in January this year). Never had any issues with rolling updates. A few times the update required manual intervention, but that was documented on the Arch website and took about 30 seconds to fix.
Historically, I've had more pains with upgrading on Ubuntu-based distros (but in all fairness I've been using them for much longer).
About 15 minutes every 2 months when my daughter updates her Arch desktop. All because of her Nvidia GTX480 (I know, she needs to inherit something newer already...) that has been deprecated from the official drivers so needs manual intervention.
It's cool to see them self-host. I'd love to see a Peertube instance for tech conferences though, mainly for content discovery and comments.
Sidenote: I barely watched any conference talks this year because the video and especially audio quality is so atrocious for many of them. A decent streaming/recording setup on a PC/laptop is surprisingly hard.
80% would have had much better quality were they shot on a recent smartphone instead.
But they're not self-hosting? They're using media.ccc.de, which is a well known community streaming/hosting platform for tech conferences around the CCC/hacker scene [1].
Unless anything that's not YouTube or Twitch is seen these days as self-hosting...
You do know what the ccc is, right? That's about as close as one can come to self hosting these days, from an organization that won't use Youtube or similar on general principles.
Seems "self-hosting" is another term that is going into widespread usage to mean something other than its original meaning.
The way I understood it (before today I guess) is that if I'm not CCC and I'm hosting something on CCC's infrastructure, then I'm not self-hosting. If I'm organization XYZ and I'm hosting something on XYZ's infrastructure, then it is self-hosting.
One would think the name makes it obvious, but maybe the term has been skewed like many other terms today.
The closest one can come to self-hosting today is hosting it yourself, not hosting it on others' infrastructure.
I get the distinction, but in terms of entity size and willingness to 'self host' their own servers, the ccc is pretty tiny and very much a labor of love, compared to any randomly chosen top 30 size commercial, for-profit video streaming platform on the Internet.
The CCC is even its own ISP: it has its own ASN and you can peer with it. They're self-hosting right down to almost the most fundamental levels of what putting content on the internet is.
Sure, I agree with everything you have written here and I'm familiar with the CCC.
But that doesn't make "Arch organization hosting content on CCC's infrastructure is self-hosting" a true statement. Self-hosting is hosting it yourself. It doesn't depend on the entity size, the willingness nor if you run your own ASN, self-hosting simply means what it says on the tin. And if you're not hosting it yourself, you're not self-hosting.
Technically yes, then they're not 'self hosting'. But say they rented a dozen powerful 1U servers on a fast pipe in colocation somewhere, and did it all themselves, would they be self hosting? Or no? Because they wouldn't own/run the datacenter itself, or the ISPs and IX point that were their upstreams. Or would it be self hosting only if they owned the bare metal versus leasing it?
Anecdotally I have seen a lot of instances where medium to large sized non-profit, open-source project related organizations become the umbrella organization containing a number of smaller community initiatives within them.
Think it's important we set the context to be around Arch Conf, where we're talking about hosting recordings from the conference.
In this case: Yes, if they do run their own servers for storing the content, they are self-hosting. No, doing your own peering is not needed to be classified as self-hosting. No, you don't need to run your own data center to be self-hosting. Unless you want to self-host the servers themselves, then yes, running your own data center is needed.
I think the most important point is "hosted" vs "self-hosted" here, where hosted is letting someone else run the servers where you put the content, while self-hosted is you running the servers where you put the content.
While this is hosted by the ccc, they also have the content on their own server, including not only the talks themselves but also the presentation as a pdf, notes, music: https://static.conf.archlinux.org/2020/archconf/
> because the video and especially audio quality is so atrocious for many of them
Of the few you watched, which had atrocious audio quality? I've watched many of them, and the video has been fine for all the recordings; the audio quality obviously depends on the speaker's setup, not the processing/encoding/quality of the video. But of the ones I've seen, the audio has been perfectly fine for understanding what's being said.
CCC has been perfecting the art of "recording, streaming and storing conference talks" for many many years, so it would be weird if it suddenly took a dive in quality. Would like to see what videos you're talking about here.
Edit: I just quickly went through all the videos on the page from the submission to check the video/audio quality. All of them have perfectly OK video/audio quality; 2 videos could have been better mixed/mastered (volume too low), but nothing that raising your speaker volume wouldn't fix.
Seeing them try to save some data on PGP keys in the pacman db while removing differential packages makes me somewhat angry. Really? Is that the plan of priorities for pacman?
Well, I wasn't a huge fan of the previous diff implementation either: making diffs of already-compressed data was a big blocker for significant bandwidth reduction. But still, 20-30% on average for a rolling distro is a significant improvement. And I bet that properly unpacking the compressed files, diffing them, and then repacking would bring another "extra" 20-40% of savings.
Who is the target audience of this conference? Of course, any maintainer or people who run arch in production but I’d assume these groups are very small.
One day I’d like to be more of a Linux hobbyist. I already use Arch but now that I have a working install my interest has died down a lot. What else am I supposed to do besides endlessly customize my window manager? I feel like I’m missing the plot here.
My next plan was either to do LFS or install another interesting distro (leaning towards Void) but I don’t really understand the mindset of the typical hobbyist here.
The goal was to have an even split of talks from Arch contributors and external community. Have a motivational conference for team members, but interesting for the wider community. Both in terms of more technical talks and some higher level talks.
>What else am I supposed to do besides endlessly customize my window manager? I feel like I’m missing the plot here.
As far as I understand, the point of Arch in the first place is that you want to set things up yourself. That's changed wildly though, with things like Manjaro and the deceased Antergos. If you want to use Arch but don't care for the installation process, installing Manjaro or EndeavourOS isn't any harder than installing Ubuntu.
They don't really "make it challenging", they just didn't bother to create a GUI or ncurses installer.
If you know a few things about Linux systems and can read instructions, it's really straightforward to install without those, so Arch maintainers are free to spend their time on other projects.
It's something of a meme that the installation is some kind of hazing ritual, but I don't think of it that way. There are plenty of other distributions with GUI installers for the people who can't or don't want to spend some time understanding how they work.
A somewhat randomly chosen daemon example from the wiki:
https://wiki.archlinux.org/index.php/Postfix