This is not definite. The feature still has to jump through a bunch of hoops[1][2]. It was only conditionally approved in yesterday's meeting, on the assumption that everything works by the time F16 comes around. Previous big features have been pulled at the last minute (notably systemd, which was pulled from F14 right before release[3] and only made it into F15).
As someone with a btrfs partition containing a handful of broken inodes (according to btrfsck), I'd say wait. btrfsck takes ages during each boot AND cannot fix the problems it finds. I'll switch back to ext4 when I find the time (and a compatible plug for my external drive).
Last time I checked, btrfs had a read-only fsck implementation, i.e. it could find errors but not fix them. I hope fsck is finished and stable now (the project page still says fsck is not available).
Edit: this also means Fedora doesn't use LVM by default anymore
It's the "by default" that's important. You can still use LVM to resize your partitions. But resizing volumes within the btrfs partition is done with the btrfs tools.
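For the curious, the split between the two layers looks roughly like this. A minimal Python sketch, assuming a btrfs filesystem sitting on an LVM volume; the device name /dev/vg0/root and the / mount point are placeholders, not anything Fedora ships:

    # Sketch: grow a btrfs filesystem that lives on an LVM volume.
    # Device and mount paths are hypothetical.
    import subprocess

    # Step 1: grow the underlying volume by 10G (the LVM layer).
    subprocess.run(["lvextend", "-L", "+10G", "/dev/vg0/root"], check=True)

    # Step 2: grow the filesystem to fill it, using the btrfs tools.
    subprocess.run(["btrfs", "filesystem", "resize", "max", "/"], check=True)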
The comments, so far, seem to perceive this change as negative. I myself think it is really good for overall innovation in the FOSS ecosystem to have a distro committed to testing new features.
Well, the thing is that Fedora is the testing platform for Red Hat.
Fedora is aimed at desktop users and is used to test new features; what Red Hat finds interesting, they add to their server-oriented distro, Red Hat Enterprise Linux.
So, the fact that Fedora is not server-oriented is by design.
"It will probably not be known until August or September whether Fedora 16, which is planned for the end of October, will actually use Btrfs as the standard, because testers and developers need time to gain additional experience with the upcoming alpha and beta versions."
I can't wait. Btrfs has been lingering in "almost ready" state for a while and it seems like the only thing that will kick things to the next level is the threat of it actually being used.
Filesystems lose out on a big opportunity by supporting hardlinks. If every file had only one name, files in the same directory could be stored close together, greatly improving locality.
It's a tradeoff. On the other hand, hardlinks let you easily implement backup systems that copy directory trees, hardlinking files that haven't changed, for example.
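A minimal sketch of that kind of snapshot-style backup in Python. The size+mtime change test and the three-directory layout are simplifying assumptions of mine, not how any particular tool does it:

    import os, shutil

    def snapshot(src, prev, dest):
        """Copy the tree src -> dest, hardlinking files unchanged since prev."""
        for root, dirs, files in os.walk(src):
            rel = os.path.relpath(root, src)
            os.makedirs(os.path.join(dest, rel), exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                p = os.path.join(prev, rel, name)
                d = os.path.join(dest, rel, name)
                st = os.stat(s)
                if os.path.exists(p):
                    pst = os.stat(p)
                    # crude "unchanged" test: same size and mtime as last run
                    if (st.st_size, st.st_mtime) == (pst.st_size, pst.st_mtime):
                        os.link(p, d)   # second name, same inode: no new data
                        continue
                shutil.copy2(s, d)      # changed or new: store a real copy

Unchanged files cost only a directory entry per snapshot, which is why the hardlink semantics are worth having here.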
I don't know the details of how specific filesystems are implemented, but it seems that if it's reasonable to achieve the locality you want, then it can be done for the first name a file has. Subsequent names wouldn't have good locality, but second and third links to a file are much less common. If you want other links to the file to have good locality, then simply make a copy, doubling your space requirements.
Hard links are useful, and you don't necessarily need to sacrifice locality in the common case. In the uncommon case you can still choose between good locality on one hand, and good use of space plus the sometimes-useful semantics of two names by which to read or write a file on the other.
The value of locality in this context assumes you use many of the files in the same directory at the same time, or enumerate a lot of directories and read the files. Your home directory has different access patterns than /lib or /bin.
That'd only really be true if a significant portion of files were hardlinked. My root filesystem (Debian GNU/Linux testing) has 393,410 files, of which 825 have more than one link. My /home has 0 out of 156,273 files hardlinked.
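Roughly how you'd reproduce those numbers in Python (a sketch, not the script I used; note it doesn't stop at filesystem boundaries, so point it at a single mount):

    import os, sys

    def count_links(top):
        total = multi = 0
        for root, dirs, files in os.walk(top):
            for name in files:
                try:
                    st = os.lstat(os.path.join(root, name))
                except OSError:
                    continue              # unreadable or vanished: skip it
                total += 1
                if st.st_nlink > 1:       # more than one name for this inode
                    multi += 1
        return total, multi

    total, multi = count_links(sys.argv[1] if len(sys.argv) > 1 else ".")
    print("%d of %d files have more than one link" % (multi, total))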
Having 0.2% (or less) of files hardlinked shouldn't prevent storing files in the same directory near each other.
The point is that the mere possibility of a file having more than one link kills your ability to assume that there is only one. It has nothing to do with whether or not files actually have more than one link.
Right, but this bumps into a spatial-locality analogue of Amdahl's law: if you optimise a case that shows up 1% of the time, you can only get a total gain of about 1%.
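To put numbers on that, here's the textbook Amdahl formula applied by analogy, with 1% standing in for the fraction of accesses assumed to touch hardlinked files:

    # speedup = 1 / ((1 - f) + f / s): f = affected fraction, s = local speedup
    def amdahl(f, s):
        return 1.0 / ((1.0 - f) + f / s)

    f = 0.01
    print(amdahl(f, 10.0))           # ~1.009: even a 10x win on that 1% slice
    print(amdahl(f, float("inf")))   # ~1.0101: total gain capped near 1%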
Why not? Choose a parent at random and place the file near the children of that parent. In 99% of cases it's the only parent, so you get the result you aimed for. Why would possible other names prevent that?
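A toy sketch of that heuristic in Python (all names here are mine and hypothetical; real allocators work in extents and block groups, not a dict):

    import random

    class ToyAllocator:
        """Group each file's data by one 'primary' parent directory."""
        REGION = 1024                       # toy region size, in block units

        def __init__(self):
            self.regions = {}               # parent dir -> next free block in its region
            self.next_region = 0

        def place(self, parents):
            primary = random.choice(parents)    # only one parent in ~99% of cases anyway
            if primary not in self.regions:
                self.regions[primary] = self.next_region * self.REGION
                self.next_region += 1
            block = self.regions[primary]
            self.regions[primary] += 1          # next file in this dir lands adjacent
            return block

    alloc = ToyAllocator()
    print(alloc.place(["/bin"]))             # first file in /bin
    print(alloc.place(["/bin"]))             # adjacent block: good locality
    print(alloc.place(["/bin", "/sbin"]))    # hardlinked file: near one parent's children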
Ouch... I'm conservative when it comes to filesystems; I had to revert from ext4 to ext3 on Amazon's EC2 because ext4 had more trouble with EBS. Every other time I've tried a new filesystem I've had trouble with wrecks and data corruption. Making a whizzy new filesystem the default will cause a lot of pain.
Hey, I'd really appreciate it if you could send a report of your experiences to linux-ext4@vger.kernel.org. This is the first I've heard of issues with ext4 on EBS, and I'd love to know more. (Also, please mention the kernel version you are using; one possibility is that EC2 kernels tend to lag upstream kernels, and ext4 has had a lot of bug fixes since the 2.6.32/2.6.34 kernels found in RHEL or SLES. Upstream is at 2.6.39, and we're about to release version 3.0. :-)
I agree that the change will cause a lot of pain; however, early adopters tend to have a high tolerance for pain. If you don't want to deal with the pain, or are running a server (your EC2 mention sounds like a server), you probably want to avoid cutting-edge distros like Fedora.
[1] http://fedoraproject.org/wiki/Talk:Features/F16BtrfsDefaultF...
[2] https://bugzilla.redhat.com/showdependencytree.cgi?id=689509...
[3] http://article.gmane.org/gmane.linux.redhat.fedora.devel/139...