Finally, native encryption! It might still be a bit of a dance to boot - but I'd much rather have a small ext3 /boot and let ZFS handle disk/volume/encryption/compression on the rest. Oh, and while swap on a zvol is possible, I regret setting that up on my laptop. Traditional encrypted swap makes more sense for hibernation.
In an ideal world, ZFS would do it all and we'd boot straight in - but as far as I can figure out, that would require a new bootloader project. And I'm not sure how I feel about (full) ZFS support in my bootloader anyway.
I currently run all my ZFS storage disks on top of LUKS. I don't have the spare disks to shuffle things around at this point, but when I need to expand, I'm sure I'll use the native encryption on the new disks! This is pretty big.
On my boot volume, I run full-disk encryption (LUKS+ext4 for everything, including /boot). Grub has built-in support for LUKS version 1 (do not use LUKS version 2! Grub can't unlock those yet - I learned that the hard way :-P).
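For anyone who wants to replicate this, the relevant bits look roughly like the following - device names and paths here are just placeholders, adjust for your own layout:

    # format the root partition as LUKS1 so Grub can unlock it
    cryptsetup luksFormat --type luks1 /dev/sda2

    # tell Grub to probe encrypted devices at boot
    echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub
    update-grub    # or: grub-mkconfig -o /boot/grub/grub.cfg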
If you have a signed Grub EFI loader, remove the default Secure Boot keys, add in just the CA/certs for your system, and password-protect your BIOS/firmware setup, you have the potential for a very secure system (ignoring the Intel/AMD management systems, which are difficult or impossible to disable).
> If you have a signed Grub EFI loader, remove the default Secure Boot keys, add in just the CA/certs for your system, and password-protect your BIOS/firmware setup, you have the potential for a very secure system (ignoring the Intel/AMD management systems, which are difficult or impossible to disable).
Or just dump Grub altogether and boot the kernel directly as a UEFI image. No need for a middleman!
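Something like this, assuming a kernel built with the EFI stub and copied (with its initramfs) onto the ESP - the device, partition, and file names are just examples:

    efibootmgr --create --disk /dev/sda --part 1 \
        --label "Linux (EFI stub)" --loader /vmlinuz-linux \
        --unicode 'root=/dev/sda2 rw initrd=\initramfs-linux.img'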
You're right - it's been a while since I set up a LUKS+ext3/4 system, and I'd forgotten about Grub's LUKS support. It's certainly better than an unencrypted boot volume, but I'm not sure I'd consider it worth the extra hassle (I mostly view FDE as a means to safeguard data on a machine that is lost or stolen while powered off; the bar for meaningful improvement on that is pretty high, especially with Intel backdoors in the form of the IME etc.).
We at Datto [1] are all very proud of our very own Tom Caputi and all the hard work he and the ZFS team have poured into the encryption at rest feature. Well done and thank you for an amazing feature!
Sequential scrub turns "let's read all the blocks in the order we see them in the metadata" into "let's separate this into a read and a dispatch phase, so we can group things we dispatch into sequential regions and make spinning disks less sad at the amount of random IO needed".
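Very roughly, the two-phase idea looks something like this (a toy C sketch of the concept only, nothing like the actual ZFS code):

    /* Phase 1 collects block pointers in whatever order the metadata
     * walk finds them; phase 2 sorts them by disk offset and merges
     * adjacent blocks so they can be issued as a few large,
     * mostly-sequential reads instead of many random ones. */
    #include <stdio.h>
    #include <stdlib.h>

    struct blk { unsigned long long offset, size; };

    static int by_offset(const void *a, const void *b)
    {
        const struct blk *x = a, *y = b;
        return (x->offset > y->offset) - (x->offset < y->offset);
    }

    static void dispatch(struct blk *blocks, size_t n)
    {
        qsort(blocks, n, sizeof(*blocks), by_offset);
        for (size_t i = 0; i < n; ) {
            size_t j = i;
            unsigned long long end = blocks[i].offset + blocks[i].size;
            /* absorb any block that starts at or before the current end */
            while (j + 1 < n && blocks[j + 1].offset <= end) {
                j++;
                if (blocks[j].offset + blocks[j].size > end)
                    end = blocks[j].offset + blocks[j].size;
            }
            printf("read [%llu, %llu)\n", blocks[i].offset, end);
            i = j + 1;
        }
    }

    int main(void)
    {
        /* blocks as discovered in metadata order (scattered on disk) */
        struct blk b[] = { {4096, 4096}, {1048576, 4096}, {0, 4096}, {8192, 4096} };
        dispatch(b, sizeof(b) / sizeof(b[0]));
        return 0;
    }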
It's quite a dramatic improvement for pools on hard drives.
Very much so. Spinning disks are bad at seeking randomly, and turning lots of random IOs into a smaller number of relatively sequential IOs is a substantial win.
Hmmm, I wonder why they don't support SIMD instructions on 5.x kernels. I couldn't find any information on what changed that would make that an issue.
Well that's positively pleasant... I really appreciate the link though. Checksumming happens quite frequently; I wonder how much losing these instructions will impact performance...
The x86-64 FPU state is quite large, so normally a user task's FPU state is only saved/restored on a task switch.
This means that if kernel code wants to use the FPU (which includes the SIMD instructions), it has to explicitly request access and give it back when it's done. The functions that do that are the exports being referred to here.
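The in-kernel pattern looks roughly like this (just a sketch, not the actual ZoL code; checksum_with_simd is a made-up name):

    #include <linux/module.h>
    #include <linux/types.h>
    #include <asm/fpu/api.h>   /* kernel_fpu_begin() / kernel_fpu_end() */

    static void checksum_with_simd(const void *buf, size_t len)
    {
        /* Save the current task's FPU state and claim the FPU for
         * kernel use; the code in between must not sleep. */
        kernel_fpu_begin();

        /* ... SSE/AVX checksum or RAID-Z parity loop would go here ... */

        /* Restore things so userspace's FPU state isn't corrupted. */
        kernel_fpu_end();
    }

Whether out-of-tree modules like ZFS can still reach the symbols they need for this is what changed in the 5.x kernels, as I understand it.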
Sounds really good. Particularly looking forward to native encryption and TRIM.
I couldn’t see it on the list, but I believe I’ve heard it mentioned before... Right now ZoL maintains its own caches (the ARC, among other things) on top of what the Linux kernel already provides, causing excess RAM usage.
This situation "improving" would be a performance regression, and AFAIK no one is really looking to do anything about it. ZFS managing its own caches is a feature, not a bug.
That memory is freeable if an application needs it, so there's no harm in it being used, either.
It's still not great. The native Linux page cache keeps fighting with the ZFS cache, and the ARC isn't quite as freeable as the native cache. If you launch an application that allocates big chunks of memory quickly, it may fail because ZFS doesn't free its memory fast enough. I had this issue until I limited the max ZFS cache size. If ZFS could integrate its caching better with Linux, it would be perfect for me.
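For anyone wondering, the knob for that is the zfs_arc_max module parameter - e.g. to cap the ARC at 4 GiB:

    # /etc/modprobe.d/zfs.conf  (applies at module load)
    options zfs zfs_arc_max=4294967296

    # or change it at runtime:
    echo 4294967296 > /sys/module/zfs/parameters/zfs_arc_max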
Given that said port predates feature flags and has no source posted, making any assertions about what it actually does, versus what it claims to do, seems premature.
Excellent. I have been running RC5 for a week for compatibility with a feature in FreeBSD 12’s build of OpenZFS. So I’ll be building this release tonight.
Have releases generally been solid? Is this okay to install on my server today, or is it like Ubuntu, where you're supposed to wait for the first point release because .0 is actually a bit of a beta?
I’ve been running RC5 - the last release candidate before this one - for a week and found it stable enough. But I might have just been lucky.
Ultimately there is risk with any file system - if not a software one, then the risk of hardware failure. So the advice will always be the same: make regular backups.
Since [1] has packages through -rc4 and [2] already has -rc5 and 0.8 final in its upstream branches, I suspect you'll be able to build a package for buster Soon(tm), though I don't personally know how the creation of -backports works for testing during a freeze.