Interestingly, Solaris had a UI component for ZFS snapshots built into Gnome's file explorer. Not sure if it's still there though. IIRC, OpenSolaris had it.
I couldn't say what Oracle's been doing, but writing this comment from OpenIndiana (Illumos), the feature is still here and works fine. It's integrated into MATE's Caja (forked from GNOME's Nautilus) and AFAICT it's the exact same feature and it still just works: you click the little clock button in the toolbar, click the snapshot you want, and browse the filesystem. Actually, looking more closely, it's a pretty thin veneer over the .zfs/snapshot virtual directory, so it should be dead simple to build into any file browser that supports extensions: all you need to do is list the directories under $FS/.zfs/snapshot, and when the user clicks one, change the path to $FS/.zfs/snapshot/$SNAPSHOT. There's a little more integration than that (some context menu items to snapshot and view old versions, an extra column in the detailed view), but that's the main part, and I suspect the whole thing could be reimplemented in a day or two.
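The navigation logic described above is small enough to sketch. The `.zfs/snapshot` layout is real ZFS behavior; the function names and the example mountpoint are mine:

```python
import os

def list_snapshots(fs_root):
    """List snapshot names for the ZFS filesystem mounted at fs_root.

    Requires the .zfs directory to be traversable (it is even when
    snapdir=hidden, as long as you name it explicitly).
    """
    return sorted(os.listdir(os.path.join(fs_root, ".zfs", "snapshot")))

def snapshot_path(fs_root, snapshot, rel_path=""):
    """The path a file browser should navigate to when a snapshot is picked."""
    return os.path.join(fs_root, ".zfs", "snapshot", snapshot, rel_path)
```

A browser extension would call `list_snapshots` to populate the clock-button menu, then jump to `snapshot_path(fs_root, chosen, current_rel_path)`.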
I have no specific knowledge but suspect that it just never cross-pollinated; illumos is a somewhat separate community from the Linux-sphere. For all I know, you can build the same code on Linux with OpenZFS and trivially switch it to BTRFS by replacing the paths.
This looks like brilliant low-hanging fruit to go after. In btrfs, IIRC, snapshots work a bit differently than they do under ZFS, in that you can only snapshot a subvolume and not a folder; but if you make every home a subvolume, it should work. All it'd need is some advance planning and some extra work during system installation.
I'm pretty sure that's how it works in ZFS too? The `zfs snapshot` command takes a filesystem as an argument, which AIUI is analogous to a btrfs subvolume. And while I personally think every home directory being its own filesystem/subvolume makes sense, I don't think you actually need that; it just affects where you splice the path. For example (using ZFS, because that's what I know), if you just stuck everything on the root filesystem, you would browse to /.zfs/snapshot/mysnap/home/myuser instead of /home/myuser/.zfs/snapshot/mysnap, and for read-only stuff there would be no difference. I grant that it does affect rw stuff: you may or may not want to let arbitrary users create and destroy snapshots of the root FS or /home.
Also, an obscure yet very helpful ZFS feature: every ZFS dataset has a hidden `.zfs` directory in its root (hidden as in not visible even via `ls -a`). This directory contains all snapshots of the dataset, mounted as directories.
For example, if your home is a separate dataset, then `/home/operator/.zfs/snapshot/` will contain all of its snapshots, automatically mounted for your convenience.
I have this enabled too, but httm seems like an improvement over this since there's lots of times where I need to restore something from a snapshot, but since I have to look it up based on time, there's a lot of guesswork to find the right snapshot that has what I need. This looks great since I think it lets me do lookups via file, so I can see all the snapshots available for a given file and I can instantly see what all my restore options are.
While I don't disagree this tool would be better, if you don't have that tool, I frequently used something like:
ls -lart /vol/.zfs/snapshot/snap-*/file/path
That lists the files sorted by modification time, so I could just easily scan through the list for the changed versions.
Exactly. You get it. If you take 100 snapshots but the file was only modified 5 times, then you see 5 file versions. No more digging through snapshots ever.
Huh, this is new to me. How does zfs handle regular directories that happen to be named .zfs?
Might this be a security issue as well? What permissions are needed to access this path? If I somehow have a webserver serving static files from a zfs dataset, might someone use this to access old or deleted files?
I imagine you'll either have issues when you enable the setting that makes the snapshot directory visible, or it'll behave like a directory you've mounted something on top of (you think you just overwrote all the files, but then you unmount the second thing and the originals are still there).
Looks like by default the `.zfs/snapshot` folder is owned by root but has 777 permissions. Inside the snapshots themselves, files have the same permissions as in the main pool.
Not sure if there's a way to configure permissions on it, but if security is a concern, you could leave it hidden with `snapdir=hidden`, then just set it to `visible` when you need to get in there. I think it's hidden by default, so if you haven't gone out of your way to turn it on, you're probably good. If you want to leave it visible all the time, you probably want AppArmor or something to manage access.
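For reference, toggling it is a one-liner per dataset (the dataset name here is just an example):

```shell
# Hide the .zfs directory from listings (the default)
zfs set snapdir=hidden tank/home
# Make it show up in directory listings when you need it
zfs set snapdir=visible tank/home
# Check the current setting
zfs get snapdir tank/home
```

Note that even with `snapdir=hidden`, `.zfs` can still be entered by naming it explicitly; the property only controls whether it appears in listings.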
Related fun fact: if you have +x but not +r on a directory, you can traverse it (access things whose names you know), but not view its contents (`ls` won't work).
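You can see this with any directory, not just `.zfs` (a throwaway demo; the paths are mine):

```shell
mkdir -p demo/secret
echo hi > demo/secret/known.txt
chmod 111 demo/secret                                # +x only: traversal allowed, listing denied
ls demo/secret 2>/dev/null || echo "listing denied"  # fails for non-root users
cat demo/secret/known.txt                            # still works if you know the name
```

This is exactly the behavior `.zfs` relies on when `snapdir=hidden`: you can't enumerate it, but you can walk into it.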
Fun fact: When Apple first built Time Machine, the goal was that ZFS would be the root filesystem. That's why the TM UX fits so nicely with ZFS. Sadly the licensing issues scared Apple off from adopting ZFS.
> Sadly the licensing issues scared Apple off from adopting ZFS.
It was not licensing. ZFS is licensed the exact same way as Dtrace, and Dtrace is part of Mac OS / macOS.
It was signing a 'support' contract with Sun: terms couldn't be agreed to. From Jeff Bonwick (co-creator of ZFS) on the zfs-users list at the time:
> Apple can currently just take the ZFS CDDL code and incorporate it
> (like they did with DTrace), but it may be that they wanted a "private
> license" from Sun (with appropriate technical support and
> indemnification), and the two entities couldn't come to mutually
> agreeable terms.
I cannot disclose details, but that is the essence of it.
It probably would have happened, and my understanding (from contacts at Apple) is that they had macos working on top of zfs internally and were gearing up to launch it.
Then Oracle bought Sun, and that's when everything fell apart. Presumably Apple deemed it too much of a legal risk to use ZFS even with the CDDL, because of how litigious Oracle is.
Sadly, it's typical of Apple to want special treatment from vendors. In a way, I'm sad that we didn't get a macOS based on ZFS. That would've been nice.
If you read further in the thread, it's hinted that Apple mainly wanted indemnification from legal action, and the NetApp saga was ongoing at the time.
Considering that Apple would be rolling it out to many users through high-margin computers, this is a reasonable concern.
I've used OpenZFS on OSX (https://github.com/openzfsonosx/openzfs#readme) and it's been better to me for cross-os drive sharing than NTFS or UFS, despite their warnings about using it on USB devices
> Sadly, it's typical of Apple to want special treatment from vendors.
If you're going to put a third-party's technology into your products (i.e., the file system that everything is built on), having extra assurances with regards to support and development is not crazy.
Apple tends to buy the companies that make the technology they base their products on, but that wasn't really going to happen with Sun. At best they'd have had to poach the entire ZFS team, and there could still be issues like patents and such (which they're allowed to use via the CDDL).
Yes, they incorporated Dtrace, but that wouldn't be a big deal to rip out if things went sideways legally-speaking.
I've used APFS on my Mac, and it worked well enough that I didn't have to think about it. Now I've been using ZFS on my daily-driver Linux PC, and I don't have to think about it either.
I actually prefer ZFS for being able to set up transparent compression. But other than that, I'd be hard-pressed to pick a favorite.
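For anyone curious, enabling it is trivial (the dataset name is illustrative):

```shell
# Enable transparent lz4 compression; only newly written blocks are
# compressed, existing data stays as-is
zfs set compression=lz4 tank/data
# See how well it's doing
zfs get compressratio tank/data
```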
I believe they wanted indemnification from any negative outcomes of the NetApp trial, and Sun wouldn't give it. Or at least I remember seeing a lot of speculation about that.
If only ZFS hadn't fallen through! Speaking of which, are there any public plans to do this with APFS, or are they just trying to push everyone into the cloud?
I wrote the Time Machine interface and was sweating like mad sitting in the third row or so during the demo. It was a lot of fun getting all the animations and UI elements sliding around smoothly.
One interesting fact about the entire presentation of all the new features is that all of the backup Macs but one had crashed by the end. If you watch closely you can see the mirrored demo machines switching in when the live demo box locks up. Whee!
That's awesome! Well done! I loved that interface (and still do). That's funny about the backup machines. I always knew they had backups, but didn't realize it was that many.
The video is grainy, but isn't that still the Time Machine UI today? And with APFS, at least, it now also integrates snapshots, not just external backups?
From what I'm reading, Time Machine has been using APFS snapshots for local snapshots since High Sierra, and since Big Sur for the primary backup (assuming target drive is APFS formatted).
I remember the WWDC that seemed to have a very “solar” graphical motif and my friends and I were all freaking out with speculation that Solaris and OS X were going to merge in some meaningful way.
That unfortunately happened in the good timeline where Scheme in the browser also happened.
I think it was the year or so after Time Machine launched that a bunch of banners up in Moscone Center had us convinced of a grand Sun / Apple tech marriage.
Re Scheme: it was the first choice of JavaScript's inventor, Brendan Eich, but the suits didn't go for it. If you want to have a good cry, look up DSSSL and imagine the alternate timeline with S-expressions instead of HTML, DSSSL instead of CSS, and Scheme instead of JavaScript.
Pretty sure there’s also a robust open source OpenStep or NeWS ecosystem instead of X11, Al Gore was a two termer, and 9/11 never happened.
I thought it was Jonathan Schwartz embarrassing Steve Jobs by announcing himself that it was coming to macOS. Which was a big deal at the time.
I always thought that was a petty reason, but I didn't really consider it beyond Steve Jobs to do that, to be honest. He really took such matters very personally.
But your explanation sounds more reasonable. I really believed this until now, though.
This is great, I've often wondered why so little focus is spent on tools like this. Feels like low hanging fruit given the great usability improvements they can bring.
There's all this talk about backups, but part of the reason people don't do them is that the tooling is so bad. (No, snapshots aren't a proper backup by themselves, but they're a great addition to one.)
> I've often wondered why so little focus is spent on tools like this. Feels like low hanging fruit given the great usability improvements they can bring.
100% agree. Definitely a QoL tool for the ZFS lover. The zsh key bindings open up uses I never even dreamed of, because it's just so quick to get to a usable interface.
btrfs makes this exact thing harder, but, fingers crossed, the btrfs devs are listening, because I think btrfs folks would love a `.btrfs` directory like the `.zfs` one.
I, at least (and I suspect a lot of desktop btrfs users), keep all my snapshots in a specific "folder" mounted at ./.snapshots. It's sort of a hack, but as long as the file structure stays mostly the same, it shouldn't be too difficult to whip up something like this for btrfs. Definitely not as good, but hey, it's btrfs! If I wanted good, I'd be on ZFS anyway (I kid, I kid)
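For anyone unfamiliar, the hack is roughly this (the subvolume paths and snapshot name are just examples):

```shell
# Create a read-only snapshot of the home subvolume under a
# conventional .snapshots directory (itself usually a subvolume,
# so it doesn't get swept up in snapshots of /home)
btrfs subvolume snapshot -r /home /home/.snapshots/home-2024-01-01
# List the subvolumes/snapshots that exist
btrfs subvolume list /home
```

Unlike `.zfs/snapshot`, nothing enforces this layout; any tool built on it has to trust that every machine follows the same convention.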
I'm keeping my eye on it. Definitely less appealing when I can't make it work in all use cases.
If you want to file an issue and can explain (I need your expertise!) how to parse the correct subvolume mount points from `mount` or `btrfs` or whatever command, so this works consistently for all cases, I'd jump right on it.
It would surprise me very much if there were a way to automatically grab the necessary info. I'll try to whip something up that just takes user config, but I doubt it'll be good enough for general consumption (or that "good enough for general consumption" is even possible on btrfs). Like I said, lol, a hack of the crudest order.
My theory, based on reading comments from ZFS devs, is that resources are somewhat limited and most people who use it are a bit technical, so these kinds of things are low priority and move slowly.
There is Timeshift for btrfs. Sadly it only works for certain very standard btrfs subvolume layouts, and if you deviate even slightly from those, it's no longer supported.
It's not super hard to hack around if you need to, but yeah, ZFS exposes exactly what's needed without any global configuration. In exchange, btrfs gets to be a little more flexible, though honestly idk how useful it is to treat snapshots as normal directories anyway. That's my only real complaint about btrfs on the desktop.
I'm using snapper (https://github.com/openSUSE/snapper) as a daemon which snapshots periodically and it's easy to search files with some fzf-like fuzzy-searching shell tools. But now checking its explanation, I must say it doesn't look polished.
Very cool! ZFS is a fantastic filesystem. Very easy to use, and effective at preventing things like bit rot, which can plague large, infrequently accessed datasets stored on cheap HDDs.
Most interesting is how snappy the thing looked, especially combined with Compiz. It seems like we lost our way somewhere between 2008 and 2012, or maybe these effects are really only impressive for a few hours. Nevertheless, I'm pretty satisfied with my desktop nowadays.
A long time ago, I set up some ZFS-in-a-box OS thing (some Solaris fork), hooked it up to Active Directory (I think?) and Samba, and Windows Explorer had a built-in context menu to restore files to older versions.
Yep - this is the shadow copy VFS module for Samba, and it works great! It doesn't actually require ZFS, just a method to create snapshots and find them. Example smb.conf:
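The config itself was elided in the comment; here's a plausible sketch using Samba's vfs_shadow_copy2 module (the share name, path, and snapshot naming format are assumptions, and `shadow:format` must match however your snapshots are actually named):

```ini
[tank]
    path = /tank/share
    vfs objects = shadow_copy2
    ; Look for snapshots in ZFS's per-dataset snapshot directory
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    ; Pattern for parsing snapshot names into timestamps
    shadow:format = zfs-auto-snap_%Y-%m-%d-%H%M
    shadow:localtime = yes
```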
The only thing with VSS is that ransomware can delete all your snapshots. Samba shares on ZFS via TrueNAS or the like, with VSS support, mitigate that attack vector, since the snapshots live server-side.
Speaking of which, are there any stats out there on what proportion of ransomware targets each platform (Linux distros, FreeBSD, Windows, macOS)?
> A long time ago, I set up some zfs-in-a-box-OS thing, and hooked to active directory (i think?) and samba, and windows explorer had a built in context menu to restore files to older versions.
I would note one advantage of being a file-level tool -- `httm` will deduplicate file versions by size and modify time, giving you only unique file versions. If you take 100 snapshots but the file was only modified 5 times, you only see 5 file versions, so no more digging through snapshots ever.
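The dedup idea is simple enough to sketch (my illustration of the behavior described, not httm's actual code):

```python
import os

def unique_versions(paths):
    """Collapse many snapshot copies of one file into unique versions.

    Two copies with the same size and modify time are treated as the
    same version, so 100 snapshot copies of a file that was modified
    5 times collapse down to 5 entries.
    """
    seen = set()
    unique = []
    for path in paths:
        st = os.stat(path)
        key = (st.st_size, st.st_mtime)
        if key not in seen:
            seen.add(key)
            unique.append(path)
    return unique
```

In practice you'd feed it the same relative path under every `.zfs/snapshot/*` directory, oldest first, so each kept entry is the first snapshot containing that version.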
The ideal file-level Time Machine-like tool for ZFS already exists and it is Windows Explorer. Not even kidding. Right click on any file or folder residing on a ZFS share and click on Properties, then click the Previous Versions tab. You can restore the file or folder from any snapshot, or just open it and look at it.
Ahh, interesting, I never thought about that. So that means you have a share mounted via NFS/SMB, and Windows is able to read the ZFS snapshots? Or are the Windows Previous Versions snapshots separate from the ZFS ones?
If you use ZFS a bunch, like I do, you will use this little tool more than you may imagine.
Also, FYI, for the trivia/history buffs -- cool bit of happenstance -- 20 years ago yesterday Jeff Bonwick filed the PSARC case for the ZFS filesystem. See: https://illumos.org/opensolaris/ARChive/PSARC/2002/240/onepa...