
The post doesn't mention whether they're running Linux or, if so, what flavor. I personally would be wary of running OpenZFS in production on Linux, especially ZFS on root. It has bitten me in the ass too many times on Debian, with an update breaking DKMS and rendering my system unbootable.

Also, it's very, very strange/worrying to see no mention of disk encryption anywhere in the post or the tuning guide. For a company with "encrypt" in its name, one that is responsible for the majority of trust on the internet, WTF? That should be highlighted in their benchmarking. ZFS supports native encryption, MariaDB does encryption, so how are they encrypting data at rest/in transit/in use?



Given that they're using an HSM (actually several), there's really not much that needs protection via FDE. The certs are obviously public, and the domains are in the transparency logs.

On the ZFS note: it's been rock solid for me with Ubuntu but a living nightmare with Arch. An Arch update would upgrade the kernel, but OpenZFS would semi-routinely turn out to be incompatible, leaving the system unbootable.


I had the same issue. I went from Ubuntu -> Arch -> NixOS looking for a distro with well-supported ZFS. Finally found one (the last one).

This is the magic line from my declarative configuration that ensures I never get a kernel update that is incompatible with my ZFS-on-root config:

    kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;

Been running it for 2 years now; quite happy with it, and this is probably my "final distro hop". Once you go declarative (and climb the learning curve), you're pretty much done with everything else.

Put it this way: I can switch to an entirely different window manager by changing one configuration line. And it works every time. I've tried that on other Linux distros, and it always borks something or other. I can also change my GRUB theme and boot animation via Plymouth, something I would have NEVER risked on ANY other Linux distro due to the risk of modifying boot configuration... but since it's declarative (and validated) on NixOS, I've had no issues (just tweaking). If I manage to bork something, which is rare, I just reboot into a previous generation of my OS, fix it, and try again.
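
For the skeptical: it really is about that small. Here's a rough sketch from memory, not my literal config (the GRUB theme path is made up):

    services.xserver.windowManager.i3.enable = true;  # swap the window manager by changing this one line
    boot.plymouth.enable = true;                       # declarative boot splash
    boot.loader.grub.theme = ./my-grub-theme;          # hypothetical local GRUB theme directory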


+1 to this, ZFS with NixOS is quite manageable.

One thing I'm not a fan of, though, is that the guide on the NixOS wiki was removed and replaced with a pointer to this guide in the OpenZFS docs: https://openzfs.github.io/openzfs-docs/Getting%20Started/Nix...

It doesn't have much in the way of rationale; e.g., it says you need a minimum of 1 GB of reserve space but doesn't explain what reserve space is or why you need it.

I recall the NixOS wiki used to explain the options and trade-offs in a lot more detail.

Nice tip re: latestCompatibleLinuxPackages! Had not seen that anywhere yet (probably because I haven't had any issues yet).


Consider giving FreeBSD a shot; ZFS support is excellent.


Yeah, FWIW my first production ZFS deploy was on FreeBSD over 10 years ago. It was rock solid (though low-demand).


I started with Solaris, moved to OpenIndiana, then to FreeBSD (the last over ten years ago). FreeBSD is hands-down the best. (I have a few Debian servers for other purposes but contemplate moving them to FreeBSD as well.)


I have experienced too many other benefits to NixOS (and learned enough Nix to get by along the way) to turn back now.

It now seems like the only way to do Linux sanely to me.

It’s perfect for tinkerers like me due to having more supported packages than any other distro, plus instant rollbacks.

It would be like asking a functional programmer to go back to C. “But… I’d lose all my guarantees, and spend too much time troubleshooting crap that just would never happen in [functional language/NixOS] again… No thanks”

To some things, there’s simply no turning back… and Nix is (slowly but steadily) gaining marketshare and mindshare for this reason.

If the BSDs want future relevance, they'd better steal some ideas from Nix quickly and get cracking on declarative immutability and deterministic builds.


Those things are cool... but I don't need them. I suspect many of us don't.

I like BSD's simplicity, consistency, and its native integration with ZFS. I don't know about deterministic builds, but I suspect it does that without making a big deal of it, since everything can arrive via ports.

Linux, as much as I love it, is a rat's nest of different ideas and the inconsistency between distributions makes me want to pull my hair out. I care less about how it does things, and more that I only have to learn one system to understand each component -- as opposed to the mess that is init.d vs systemd vs upstart or whatever.

man pages are good but the FreeBSD Handbook is better.


> I suspect many of us don’t

One thing I've learned after crossing the 50-year mark is that time is actually the most valuable thing we have (and, in the most egalitarian way, everyone starts out with more or less the same amount of it left). So while troubleshooting can be enjoyable to many, time spent troubleshooting any issue that would simply be impossible to encounter in an alternative system is expensive indeed.

Consider reading this very understandable paper, which I consider at least as important as the Bitcoin paper: https://edolstra.github.io/pubs/nspfssd-lisa2004-final.pdf


Underrated fact: the Nix repo has the largest number of supported packages of any Linux distro (my guess is that since everything is deterministically built more or less "from the metal", there is far less need for support and troubleshooting).


But everything else is not. There are so few packages released for the *BSDs that it's a constant game of searching, compiling dependencies, etc.


Not sure what apps you're running, but everything I've wanted to install on it has either compiled from source, built from ports, or had a package, without too much hassle. (The worst was having to type 'cmake' instead of 'make' to build llama.cpp.)

BSD has its issues, for sure, but on the question of ZFS support it is stellar.


ZFS has been very reliable for me since 2006 or so when I first started using it on SPARC hardware with the Solaris 10 beta; I assume that since they have a backup server and a primary server, they don't update and reboot them both at the same time.


You can always boot off ext4 and then just run data off OpenZFS pools. The benefits of booting off ZFS are extremely minimal compared to having your working data on ZFS.
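
On NixOS, for example, that split is a couple of declarative lines (just a sketch; the pool name "tank" is made up):

    # ext4 root, ZFS reserved for the data pool only
    fileSystems."/" = { device = "/dev/disk/by-label/root"; fsType = "ext4"; };
    boot.zfs.extraPools = [ "tank" ];  # import the data pool at boot without putting root on it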


Usually you aren't updating production servers unless it's a security patch, it fixes a problem you have, or it adds a feature you want/need. Even then, you usually have a test environment to verify the upgrade won't bork the system.


Disagree, you have to continuously update production servers (not daily, but ~weekly/monthly). The more often you do it, the more automated and risk-free it is, and the smaller the version gap for when that critical security vulnerability hits that needs to be patched ASAP.
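
On NixOS, for example, that cadence is a couple of declarative lines (a sketch; the schedule string is just an example):

    # unattended weekly upgrades, rebooting automatically if the kernel or initrd changed
    system.autoUpgrade.enable = true;
    system.autoUpgrade.dates = "weekly";
    system.autoUpgrade.allowReboot = true;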


ZFS on Linux is commonly used in HPC these days. E.g. https://computing.llnl.gov/projects/openzfs


> I personally would be wary of running OpenZFS in production on Linux

A ton of people in the enterprise have been doing this for years without issue; https://openzfs.org/wiki/Companies

> especially ZFS on root

I've been running ZFS on root on NixOS, following this excellent guide (https://openzfs.github.io/openzfs-docs/Getting%20Started/Nix...), for about 2 years. Zero issues. (Actually, I see they've updated it; I need to look at that. Also, they default to an encrypted root. I turn it off, because the slight performance hit and the extra risk of an unrecoverable drive are not worth it to me.)
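
The core of it ends up being only a handful of lines (a sketch, not the guide's exact layout; the pool/dataset names and hostId are placeholders):

    boot.supportedFilesystems = [ "zfs" ];
    networking.hostId = "8425e349";  # any unique 8-hex-digit id; the ZFS module requires one
    fileSystems."/" = { device = "rpool/root"; fsType = "zfs"; };
    fileSystems."/boot" = { device = "/dev/disk/by-label/BOOT"; fsType = "vfat"; };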

> It has bit me in the ass too many times on Debian with an update breaking DKMS and rendering my system unbootable

Well, I think you've found your problem, then. (That also might be why FreeNAS, which I also have running as my NAS, moved from FreeBSD to Debian-based Linux when it became TrueNAS SCALE.) Come over to NixOS, where you can simply reboot into any of N previous generations after an update that borks something (which almost never happens anyway, because you can actually specify, in your machine configuration, "use the latest kernel that is compatible with ZFS"). No USB boot key needed. Here's the magic line from my own declarative configuration:

    kernelPackages = config.boot.zfs.package.latestCompatibleLinuxPackages;
Aaaaand... DONE. ;)

> Also, it's very, very strange/worrying to see no mention of disk encryption anywhere in the post or the tuning guide. For a company with encrypt in the name, that is responsible for the majority of trust on the internet, WTF?

You're assuming they're not doing it, without evidence. Also, if they're already managing the security around their certs and cert generation properly, they might not need FDE. FDE is overrated IMHO, frankly, and also incurs a performance cost, as well as an extra risk cost (try recovering an encrypted drive to know what I mean). In short, religions are bad, even in technological choices; there is no single technological configuration choice that is 100% better than all possible alternative configurations.

> That should be highlighted in their benchmarking. ZFS supports native encryption, MariaDB does encryption, how are they encrypting at rest/transit/use?

Multiple layers of encryption incur an extra performance cost with almost no gain in extra security.



