I'm a fan of XFS. I've used it for over a decade for all systems that don't need ZFS.
In fact, due to the age of the headless box under my desk (it predates bootable ZFS root partitions on Linux), its root partition is still XFS, while the actual bulk storage is ZFS-managed.
I think the main issue with ZFS is that you need to learn a lot of concepts to actually understand how to do things correctly and not fuck up.
To this day I keep some personal notes as warning reminders to be careful about certain operations. I can't remember the exact terminology (I believe these are called "features" or "feature flags" in ZFS), but what happened to me not long ago was this: if I created a ZFS pool and filesystem from Ubuntu, it enabled some features that ZFS on FreeBSD didn't recognize and wouldn't let me mount it successfully.
The other way round, creating it from FreeBSD and then mounting it on Ubuntu, worked as expected. The thing with these "features" is that they can be enabled or "upgraded", and if you do so, it will again render the pool unmountable on FreeBSD. I spent a whole day on this until, for some reason, I decided to print the feature list, compare the two sides, and do some online searching.
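For anyone stuck at the same point, that comparison can be done from the command line. A rough sketch (the pool name `tank` is a placeholder; needs a real imported pool, so run it on your own systems):

```shell
# List every feature flag and its state (disabled/enabled/active) on one OS,
# do the same on the other, then diff: features that are "active" on one side
# but unknown on the other are what block the import.
zpool get all tank | grep 'feature@' > features-ubuntu.txt
# ...run the same command on FreeBSD into features-freebsd.txt, then:
diff features-ubuntu.txt features-freebsd.txt
```

A refused `zpool import` will also usually name the unsupported features in its error message, which is a faster starting point than diffing by hand.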
> if I created a ZFS pool and filesystem from Ubuntu, it enabled some features that ZFS on FreeBSD didn't recognize and wouldn't let me mount it successfully.
I mean... *shrugs* ...you could format it as XFS and not be able to mount it on FreeBSD either? Feature flags seem like a good way to solve this problem.
I was actually going to ask that... Is XFS a good "portable" filesystem across GNU/Linux and the different BSD flavors, either natively or through FUSE? The use case is USB drives. I don't mind losing visibility on Windows or macOS; I just want it to work flawlessly between these without major effort.
I could be wrong, but I didn't go with UFS because apparently there are significant implementation differences among the BSDs. I was kind of surprised to discover that.
FreeBSD and NetBSD essentially derive from the same source, 4.4BSD-Lite from March 1994. OpenBSD is a fork of NetBSD from 1995.
There's been a lot of divergence, and there's not really anything pushing toward convergence. FreeBSD modified UFS to meet its needs and desires, and the other BSDs went in other directions. There wasn't a lot of clamoring for an on-disk format for data interchange, so there's no big reason to keep the divergent UFSes compatible. Exchange data via networks, tape archives, tars written to block devices, or tarfiles on a widely recognized filesystem (MS-DOS FAT will do).
This pattern of a shared source and divergent development isn't super common. It's pretty rare commercially, and few open-source projects have forks that diverge and stay active for decades.
As a BSD user I think the divergence is a good thing. It means there are substantial differences, which lets you pick an OS that's really tailored to your use case.
In the Linux world there are mainly just userland differences between distributions. I like the BSD way. Of course, Linux sees much more commercial input, which I consider a bad thing and one reason I use FreeBSD so much. But for commercial interests it's good to have as much in common as possible, to reach more potential customers.
You would probably not encounter that issue these days since FreeBSD 13 switched out the old FreeBSD ZFS tree in favor of the same OpenZFS you'd get on Linux https://cgit.freebsd.org/src/commit/?id=9e5787d2284e
The other comments already covered this, but yes, that is arguably the correct use of feature flags: only enable the features that are supported by all your target hosts. Many of these can only be set at filesystem creation time; others can be enabled later, but it's a one-way door.
At the time you last ran it (as a sibling comment also noted), FreeBSD had not yet switched to upstream OpenZFS and still ran its own older code. To have had FreeBSD compatibility back then, you would have had to enable only the features FreeBSD supported.
Creating the pool and filesystem on FreeBSD, and never running "zpool upgrade" on Ubuntu, would be one way of doing this without having to set features manually at all.
Today this is not an issue, everyone runs the same code.
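And with everyone on the same OpenZFS, there is now a knob aimed at exactly this cross-OS scenario. A sketch, assuming OpenZFS 2.1 or newer with its bundled compatibility lists in /usr/share/zfs/compatibility.d; the pool and device names here are placeholders:

```shell
# Restrict the pool to the feature set that both Linux and FreeBSD builds of
# OpenZFS 2.0 understand; "zpool upgrade" will then refuse to enable anything
# outside that list, so an accidental upgrade can't break portability.
zpool create -o compatibility=openzfs-2.0-freebsd usbpool /dev/da0
zpool get compatibility usbpool   # verify the restriction is recorded on the pool
```

The compatibility property is stored on the pool itself, so the restriction follows the drive between machines.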
Ages ago XFS had a rather nasty behavior in case of power failure: files open for writing could come back empty (zero-filled) after restart. From what I remember this was by design. Has this changed?
What you describe was common on the SGI systems we used at work. Some setups had a configuration file that was constantly written to and read from, and that file would (most of the time) be empty after a power failure. (I don't know, by the way, why the SGI systems didn't have a power-failure emergency-shutdown mode; the power supplies kept power for several seconds. But anyway.)
However: this _never_ happened with XFS on Linux systems. Exact same software; I don't know why. But XFS has been incredibly stable, not only on my personal boxes but also for everything we have provided to customers at work. We need non-varying sustained write rates for huge amounts of data, and XFS is smooth, much better than e.g. ext4 when we tested (those tests were done years ago; we haven't retested, as XFS just works).
I stayed away from XFS because it had another bad behaviour: after a crash it would replay the log and happily continue. After a couple of crashes the filesystem became so corrupted that even the log replay failed, and fsck was useless.
I also tried running fsck after every crash, but that didn't help either (some crashes seemed to mess up the filesystem badly). In the end I stayed with JFS (which I was also testing at the time, together with ReiserFS) because it had the best balance of speed and CPU usage back then.
Dynamically allocated inodes (useful if you work with a very large number of files; ext4 can run out of inodes and refuse to create new files even with lots of free space), more stable performance under load (the spread of latency and throughput is typically lower), and reflinks. In return you lose BSD and Windows compatibility, if you ever need those, and average performance is somewhat lower (it used to be a lot lower, but XFS has caught up very close to ext4).
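A few commands illustrate the inode point and reflinks. The mount point, device, and file names are placeholders, and reflink support assumes a reasonably recent xfsprogs and coreutils:

```shell
# On ext4, IUse% at 100% means ENOSPC ("No space left on device")
# even when df shows plenty of free blocks:
df -i /data

# XFS allocates inodes on demand, so mkfs needs no -N/inode-ratio guesswork;
# reflink=1 is already the default on current xfsprogs:
mkfs.xfs -m reflink=1 /dev/sdb1

# With reflinks, a "copy" is an instant copy-on-write clone:
cp --reflink=always big.img clone.img
```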
Doubtful. Although they've both improved over the years, the conventional wisdom was that XFS's benefits were seen on very large volumes, while ext4 was more efficient for small reads/writes and metadata operations. That explains why XFS is more niche now that btrfs and ZFS are around.
RHEL is the only distro I know that defaults to XFS.
How large is "very large", though, and has that changed with time? The last time I used XFS, I would have considered 500GB to be pretty large. Nowadays that's kinda mediocre. I have a 24TB RAID5 that's still on ext4 though; I imagine that qualifies as at least "fairly large"?
The last time I had to participate in a RHEL install, the installer would do ext4 if <16TB, xfs if >16TB.
I find this cutoff unusually arbitrary, but I suspect Red Hat found unwanted behavior in some ext-related code. This was after the known issue in e2fsprogs, fixed around a decade ago, that prevented fscking filesystems larger than 16TB; the RHEL of a decade ago was either "xfs by default" or "xfs if >2TB" or similar, and the installer has clearly changed since then.
Casual Googling also says my experience with "RHEL says XFS >16TB" is out of date, and it's now "XFS >100TB". And like, look, if you're doing 100TB, use ZFS, stop fucking around and do it right.
One thing that ext4 has and XFS does not is extremely delayed writes. Ext4 can postpone writes for tens of seconds, essentially minutes.
This has various fun implications for software that cares about durable writes, like databases.
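Which is why databases, and anything else that cares about durability, call fsync(2) explicitly instead of trusting the filesystem's writeback timing. A minimal sketch using coreutils' `sync`, which since version 8.24 can fsync a single named file:

```shell
# A buffered write lands in the page cache; on its own it is not yet durable:
printf 'commit record\n' > /tmp/wal.txt

# fsync(2) that one file, forcing its data to stable storage before we move on:
sync /tmp/wal.txt
```

Real databases do the same thing via fsync()/fdatasync() on their write-ahead log before acknowledging a commit.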