ZFS on Linux 0.8.0 (github.com/zfsonlinux)
177 points by grhmc on May 23, 2019 | 43 comments


Finally, native encryption! It might still be a bit of a dance to boot - but I'd much rather have a small ext3 /boot and let zfs handle disk/volume management, encryption, and compression for the rest. Oh, and while swap on a zvol is possible, I regret setting that up on my laptop. Traditional encrypted swap makes more sense for hibernation.

In an ideal world, zfs would do it all and we'd boot straight in - but as far as I can figure out, that'll require a new bootloader project. And I'm not sure how I feel about (full) zfs support in my bootloader anyway.
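For anyone who wants to try the new native encryption, a minimal sketch (the pool/dataset names here are placeholders, and key management will vary by setup):

    # 'tank/secure' is a placeholder dataset name
    zfs create -o encryption=aes-256-gcm \
               -o keyformat=passphrase \
               tank/secure

    # after an export or reboot, load the key and mount
    zfs load-key tank/secure
    zfs mount tank/secure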


I currently have all my ZFS drives on top of LUKS for my storage disks. I don't have the disks to shuffle things around at this point, but when I need to expand, I'm sure I'll use the native encryption on new disks! This is pretty big.

On my boot volume, I run full disk encryption (luks+ext4 for everything including /boot). Grub has built-in support for LUKS version 1 (do not use LUKS version 2! Grub can't unlock those yet - I learned that the hard way :-P).
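For reference, the two pieces of that setup look roughly like this (the device path is a placeholder; check your distro's docs before relying on it):

    # /dev/sdX2 is a placeholder device; force LUKS version 1
    # so Grub can unlock it
    cryptsetup luksFormat --type luks1 /dev/sdX2

    # then tell Grub to unlock it at boot, in /etc/default/grub
    GRUB_ENABLE_CRYPTODISK=y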

If you have a signed Grub EFI loader, remove the default Secure Boot keys and add in just the CA/certs for your system, and password-protect your BIOS/setup, you have the potential for a very secure system (ignoring the Intel/AMD management engines that are difficult or impossible to disable).


> If you have a signed Grub EFI loader, remove the default Secure Boot keys and add in just the CA/certs for your system, and password-protect your BIOS/setup, you have the potential for a very secure system (ignoring the Intel/AMD management engines that are difficult or impossible to disable).

Or just dump Grub altogether and boot the kernel directly as a UEFI image. No need for a middle-man!

(Instructions vary by distro but see this for example: https://wiki.gentoo.org/wiki/EFI_stub_kernel)
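Roughly, once the kernel is built with CONFIG_EFI_STUB and copied to the EFI system partition, registering it is a single command (the disk, partition, paths, and root device below are all placeholders):

    # add a firmware boot entry that loads the kernel directly
    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Linux" --loader '\vmlinuz.efi' \
        --unicode 'root=/dev/mapper/root rw initrd=\initramfs.img'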


But then you're loading the kernel from an unencrypted FAT32 partition?


If it's signed with Secure Boot keys, it's no different from loading a signed Grub image. Grub would also need to be unencrypted to work.

As for signing kernel: https://github.com/andreyv/sbupdate
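If you'd rather do the signing step by hand than use sbupdate, it's just sbsign (the key/cert paths are placeholders for your own enrolled Secure Boot keys):

    # db.key/db.crt are placeholders for your enrolled db keypair
    sbsign --key db.key --cert db.crt \
           --output /boot/efi/vmlinuz-signed.efi /boot/vmlinuz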


You're right, it's been a while since I set up a luks+ext3/4 system - I'd forgotten about Grub's luks support. Certainly better than an unencrypted boot volume - but I'm uncertain whether I'll consider it worth the extra hassle (I mostly view FDE as a means to safeguard data on a machine that's lost or stolen while off; the bar for meaningful improvement on that is pretty high, especially with Intel backdoors in the form of the IME etc).


We at Datto [1] are all very proud of our very own Tom Caputi and all the hard work he and the ZFS team have poured into the encryption at rest feature. Well done and thank you for an amazing feature!

Also, we're hiring [2] ;-)

[1] https://www.datto.com/

[2] https://www.datto.com/careers/job-board, esp. https://www.datto.com/careers/job-board/post/1639738


I didn’t know sequential scrub was in this release. Really excited about that one.


Why is that? I’ve not read up on that particular feature.


Sequential scrub turns "let's read all the blocks in the order we see them in the metadata" into "let's separate this into a read and a dispatch phase, so we can group things we dispatch into sequential regions and make spinning disks less sad at the amount of random IO needed".

It's quite a dramatic improvement for pools on hard drives.


Oh nice. Thank you for the explanation.

Does that have an impact on how long a scrub takes? I’ve always loved the ability to do online scrubs, but they can take a long time.


Very much so. Spinning disks are bad at seeking randomly, and turning lots of random IOs into a smaller number of relatively sequential IOs is a substantial win.
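You can watch this yourself - kick off a scrub and zpool status reports the scan rate and an estimated completion time (the pool name is a placeholder):

    # 'tank' is a placeholder pool name
    zpool scrub tank       # start an online scrub
    zpool status tank      # shows scan rate and estimated completion
    zpool scrub -p tank    # pause it if it's hurting foreground IO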


This is huge: direct IO, and TRIM being sent to the underlying drives. This is a massive update. Oh, and encryption to boot!


Yes! 0.8 is a pretty amazing release, can't wait to run it on Proxmox.


Me as well :-)


Hmmm I wonder why they don't support SIMD instructions on 5.x kernels. I couldn't find any information on what might have changed that would cause that to be an issue.


Linux 5.0 made the exports that ZFS needed GPL-only: https://marc.info/?l=linux-kernel&m=154714516832389&w=2


Well, that's positively pleasant... I really appreciate the link though. Checksumming happens quite frequently; I wonder how much losing these instructions will impact performance...
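One way to check what you're actually getting on a given kernel - ZoL exposes the selected checksum implementation as a module parameter (the path assumes the zcommon module is loaded; output will vary by CPU):

    # available fletcher4 implementations, active one in brackets
    cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl
    # e.g.: [fastest] scalar superscalar sse2 ssse3 avx2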


How do the exports work such that the ZFS folks cannot use the SIMD instructions in their code? Are there any docs on how this works?


The x86-64 FPU state is quite large, so normally a user task's FPU state is only saved/restored on a task switch.

This means that if kernel code wants to use the FPU (which includes the SIMD instructions), it has to explicitly request access and return it when it's done. The functions that do that are the exports being referred to here.
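You can see this in the kernel source - around 5.0 the non-GPL __kernel_fpu_{begin,end} exports were removed, leaving only GPL-only ones (the path assumes an x86 kernel tree around that version):

    grep -n 'EXPORT_SYMBOL' arch/x86/kernel/fpu/core.c
    # on 5.0+ this shows EXPORT_SYMBOL_GPL(kernel_fpu_begin) and
    # EXPORT_SYMBOL_GPL(kernel_fpu_end), with no non-GPL variants
    # left for a CDDL module like ZFS to call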


This is big: it finally mainlines the native encryption work.


And TRIM for SSDs.

I have been running RCs for some time and both TRIM and encryption have worked fine.
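For anyone trying it: TRIM is opt-in, either as a one-off or continuous (again, the pool name is a placeholder):

    zpool trim tank              # one-off manual TRIM
    zpool set autotrim=on tank   # trim freed space as you go
    zpool status -t tank         # shows per-vdev TRIM progress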


Do you know/notice the performance difference from encryption?


I haven't measured it, but it will use AES-NI instructions when available.

I don't notice a difference in daily use.
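If you're curious whether AES-NI is actually in use, the ICP module exposes the selected implementation as parameters (names taken from 0.8-era ZoL and may differ between versions):

    # active implementation is shown in brackets
    cat /sys/module/icp/parameters/icp_aes_impl
    # e.g.: cycle [fastest] generic x86_64 aesni
    cat /sys/module/icp/parameters/icp_gcm_impl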


Sounds really good. Particularly looking forward to native encryption and TRIM.

I couldn’t see it on the list, but I believe I’ve heard it mentioned before... Right now ZoL maintains its own caches - the ARC, among other things - besides what the Linux kernel already provides, causing excess RAM usage.

Any news on that situation improving?


>Any news on that situation improving?

This situation "improving" would be a performance regression, and afaik no one is really looking to do anything about it. Managing its own caches is a feature, not a bug.

That memory is freeable if an application needs it, so there's no harm in it being used, either.


It's still not great. The Linux native cache keeps fighting with the ZFS cache, and the ZFS cache isn't quite as freeable as the native one. If you launch an application that allocates big chunks of memory quickly, it may fail due to ZFS not freeing its memory fast enough. I had this issue before limiting the max ZFS cache size. If ZFS could integrate its caching better with Linux, it would be perfect for me.
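For reference, capping the ARC looks like this (the 8 GiB figure is just an example):

    # at runtime (value in bytes)
    echo 8589934592 > /sys/module/zfs/parameters/zfs_arc_max

    # persistently, in /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_max=8589934592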


How does FreeBSD handle it?


But is it double buffering pages?


No, ZoL bypasses the usual page cache.


Is it doing that for mmap though? Two years ago it didn't double-buffer, apart from mmapped files. Here's a note about priorities: https://news.ycombinator.com/item?id=11697901

Also seems to be solved in this port: https://www.crossmeta.io/another-zfs-port-on-linux/


Given that said port predates feature flags and has no source posted, making any assertions about what it actually does versus what it claims to do seems premature.


My problem is exactly that; something is periodically dropping a 16G cache...


Excellent. I have been running RC5 for a week for compatibility with a feature in FreeBSD 12’s build of OpenZFS. So I’ll be building this release tonight.


Great work. Finally no more LUKS inside my ZVOLs :)


Any word on when zstd support will get added? Feels like I've been hearing about it for two years now.


There is a pull request for it, but it seems like it's post-0.8.0:

https://github.com/zfsonlinux/zfs/pull/8044


Have releases generally been solid? Is this okay to install on my server today, or is it like Ubuntu, where you're supposed to wait for the first point release because .0 is actually a bit of a beta?


There was a data corruption issue in recent history (https://github.com/zfsonlinux/zfs/issues/7401), but otherwise I’ve found ZoL releases to be very stable.


I’ve been running RC5 - the last release candidate before this one - for a week and found it to be stable enough. But I might have just been lucky.

Ultimately there is risk with any file system - if not a software one, then the risk of hardware failure. So the advice will always be the same: make regular backups.


Debian Buster packages when?


Since [1] has packages through -rc4 and [2] already has rc5 and the 0.8.0 final in its upstream branches, I suspect you'll be able to build a package for buster Soon(tm), though I don't personally know how creation of -backports works for testing during a freeze.

[1] - https://packages.debian.org/source/experimental/zfs-linux

[2] - https://salsa.debian.org/zfsonlinux-team/zfs


When will this be part of FreeBSD?



