Here's GNU coreutils rm [0] calling its remove() function [1], which itself uses fts to open, traverse, and remove each entry [2], vs. rsync's delete() [3] calling the {{robust,do}_,}unlink() functions [4] [5].
Now a little profiling could certainly help.
(damn gitweb that doesn't highlight the referenced line)

[0]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...
[1]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...
[2]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...
[3]: http://rsync.samba.org/ftp/unpacked/rsync/delete.c
[4]: http://rsync.samba.org/ftp/unpacked/rsync/util.c
[5]: http://rsync.samba.org/ftp/unpacked/rsync/syscall.c
Thanks. Was on a phone last night and wasn't able to find the sources easily. But there are still a lot of unknowns in the article for anyone trying to repro it.
So one thing that's interesting is that both rsync and rm stat every file in each directory to determine whether to use rmdir or unlink, and to perform a depth-first removal. I wonder if it would be faster to skip the stats, just call unlink on everything and check errno for EISDIR (not POSIX, but returned by Linux), then descend as needed and use rmdir on the way back up.
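Roughly what I have in mind, as an untested sketch in C (it leans on Linux returning EISDIR when unlink() hits a directory; some systems return EPERM instead):

    /* Sketch: stat-free removal. Try unlink() first; only when Linux
     * reports EISDIR do we descend, and we rmdir() on the way back up.
     * Minimal error handling; not the actual rm/rsync code. */
    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int remove_tree(const char *path) {
        if (unlink(path) == 0)
            return 0;
        if (errno != EISDIR)          /* non-Linux may give EPERM here */
            return -1;

        DIR *d = opendir(path);
        if (!d)
            return -1;

        struct dirent *e;
        char child[4096];
        while ((e = readdir(d)) != NULL) {
            if (strcmp(e->d_name, ".") == 0 || strcmp(e->d_name, "..") == 0)
                continue;
            snprintf(child, sizeof child, "%s/%s", path, e->d_name);
            remove_tree(child);       /* descend only when forced to */
        }
        closedir(d);
        return rmdir(path);           /* directory is empty on the way up */
    }

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;
        return remove_tree(argv[1]) ? 1 : 0;
    }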
FWIW, a directory with millions of files is likely to be quite large (I'm referring to the directory inode itself, which contains a mapping of filenames to inodes). Depending upon the file system, reclaiming the space used by all those millions of mappings might require creating a new directory into which to move the remaining files.
BTW, having millions of files in a single ext3 directory in the first place is probably a bad idea. Instead, layer the files into two or three directory levels.
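Something like this, say (just a sketch; the hash function and the two-level 256x256 bucket layout are arbitrary example choices):

    /* Sketch: shard files into two directory levels keyed by a name
     * hash, so no single directory ever holds millions of entries.
     * The hash (djb2) and the layout are arbitrary example choices. */
    #include <stdio.h>

    static unsigned name_hash(const char *s) {
        unsigned h = 5381;
        while (*s)
            h = h * 33 + (unsigned char)*s++;
        return h;
    }

    int main(void) {
        const char *name = "session_12345";   /* hypothetical filename */
        unsigned h = name_hash(name);
        char path[512];
        /* yields e.g. "a3/7f/session_12345": 256 * 256 buckets */
        snprintf(path, sizeof path, "%02x/%02x/%s",
                 (h >> 8) & 0xff, h & 0xff, name);
        puts(path);
        return 0;
    }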
As I remember it, HP-UX had some very poor performance characteristics once a single directory got into the thousands of files. It slowed down all read/write operations in that directory. We are talking multiple seconds to read a small 1K file, much less write one.
This has gotten much better over time. A filesystem should be a database (for larger blobs of data), so it should work, but scalability is still limited. Newer filesystems may be better.
Details will vary depending on the filesystem. Bad old filesystems are O(n^2) in the number of files in a directory; ext3fs is fine. Also, tools like find and rm often do more work on a file than strictly necessary. I'm curious why rsync would be better myself; at first blush that'd be the worst choice!
For anybody who might try to copy and paste from this article: it is actually "rsync -a --delete empty/ your_dir". The dashes are improperly encoded for copy/paste.
I was just looking at that. The author's use of 'a' in the command (for the directory name) made it confusing. Thanks for making yours clearer to read at a glance.
No mention of filesystem. As it's RHEL 5.4 I'm going to guess ext3, which uses indirect blocks instead of extents for large files (which a directory containing millions of files surely is). Would also be useful to confirm that dir_index is enabled.
Also, the storage is RAID-10, so striped and who knows what kind of caching goes on in the hardware controller.
The numbers are not that useful. It's notable that rsync:rm went from 1:12 in his old test to 1:3 in his new test, but we really don't know anything about why.
FWIW (very little), I did a similar test on a convenient OSX box (HFS+, 1000000 zero-byte files, single spindle), and rm won. rsync was next (+25%), straight C came in a little higher, then ruby (20% over rsync). Maybe BSD rm is awesome.
There is an error in the blog posting. If you look at the original output from the rsync command, you will see that the elapsed time should be 12.42 seconds and the system time should be 10.60 seconds. Elapsed time is a third that of rm -rf and system time is 70% as much.
It would certainly be nice if there were a specialized function to do this - no need for a million context switches, and the filesystem code can probably delete things more intelligently than individually removing each file, with accompanying intermediate bookkeeping.
I'm guessing it's because rsync collects a list of files first and then deletes them, rather than interleaving the two operations. That way the latency of the read head switching between the filesystem index and the inodes themselves doesn't get in the way as much.
Firstly, delete is a very expensive operation in ext4 due to clearing of the file contents bitmaps, among other things.
rsync definitely builds a list of files to delete first, so that will help.
Perhaps it also puts the files into inode-number order, which would also help, since that's related to directory hash order and the order of inodes in the inode table.
That is very odd. Hard to answer w/o looking at the code or using strace. On some filesystems, the directory entry is a linked list of names with pointers to inodes. Maybe rsync optimizes in such a way as to require traversing the list the least amount of times? I'm just thinking out loud here...
Even faster:
tmp="../.tmp${RANDOM}" && mkdir "${tmp}" && mv ./* "${tmp}" && rm -rf "${tmp}" & # or the rsync trick
As long as ../ is on the same device, that should clear the directory instantaneously. That is the point, right? Of course, if you want an rm with lower IO-wait or lower CPU use, use the rsync method, but if you want something that clears a directory as fast as possible, this is fast. Tested with
for I in `seq 1 1000000`; do echo ${I} > ./${I};done;sync
#^ much faster than "touch"
A) You're cheating by using the tmpfs filesystem!
B) Your directory names are numeric, not alphanumeric and not randomly long
C) Your computer specs are missing
Now can you please explain why you think that this is faster?
I believe that if I wrote a minimal tool in C, it would be much faster than rsync.
It would read the fs index if one exists, otherwise create a list of directories/files, then unlink them in parallel in inode order. Later, optimize ops based on the fs.
I don't think tmpfs is involved at all; it's just moving things out of the way first instead of deleting them. The actual delete runs in the background, so you get an interactive shell back and can keep working while the delete happens without blocking you. I usually do an approximate equivalent: just rename the directory itself, make a new one in its place, and remove the renamed one in the background.
I stumbled over this a few months ago, and the issue was that readdir(), used by rm on the box I was using, by default alloc'd a small buffer (the usual 4KB), and with millions of files that turned into millions of syscalls (and that's just to find out which files to delete).
A small program using getdents() with a large buffer (5MB or so) speeds it up a lot.
If you want to be kind to your hard drive, sorting the buffer by inode before running the unlink()s will access the disk semi-sequentially (fewer head jumps).
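For the curious, a rough sketch of that combination (Linux-only: getdents64 via syscall(2) with a 5MB buffer, entries sorted by inode, then unlinked; error handling kept minimal):

    /* Sketch: list a huge directory with few syscalls via getdents64,
     * sort by inode number, then unlink in that order. Linux-only;
     * struct layout is documented in getdents(2). */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct linux_dirent64 {
        uint64_t       d_ino;
        int64_t        d_off;
        unsigned short d_reclen;
        unsigned char  d_type;
        char           d_name[];
    };

    struct entry { uint64_t ino; char name[256]; };

    static int by_ino(const void *a, const void *b) {
        uint64_t x = ((const struct entry *)a)->ino;
        uint64_t y = ((const struct entry *)b)->ino;
        return (x > y) - (x < y);
    }

    int main(int argc, char **argv) {
        if (argc < 2)
            return 1;
        int fd = open(argv[1], O_RDONLY | O_DIRECTORY);
        if (fd < 0) { perror("open"); return 1; }

        size_t bufsz = 5 * 1024 * 1024;   /* 5MB: few getdents64 calls */
        char *buf = malloc(bufsz);
        struct entry *ents = NULL;
        size_t n = 0, cap = 0;

        long nread;
        while ((nread = syscall(SYS_getdents64, fd, buf, bufsz)) > 0) {
            for (long pos = 0; pos < nread; ) {
                struct linux_dirent64 *d = (void *)(buf + pos);
                if (strcmp(d->d_name, ".") && strcmp(d->d_name, "..")) {
                    if (n == cap)
                        ents = realloc(ents,
                                       (cap = cap ? cap * 2 : 4096) * sizeof *ents);
                    ents[n].ino = d->d_ino;
                    snprintf(ents[n].name, sizeof ents[n].name, "%s", d->d_name);
                    n++;
                }
                pos += d->d_reclen;
            }
        }

        qsort(ents, n, sizeof *ents, by_ino);  /* semi-sequential access */
        for (size_t i = 0; i < n; i++)
            unlinkat(fd, ents[i].name, 0);
        return 0;
    }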
perl -e 'chdir "/var/session" or die; opendir D, ".";
while ($f = readdir D) { unlink $f }'
It is very efficient memory-wise compared to the other options as well as being much faster.
It is also easy to apply filters, as you would with -mtime or such in find; just change the end statement to something like { unlink $f if -M $f > 7 } (the 7-day cutoff here is only an example).
It's not that I'd use Perl for the task, just that I don't think rsync is special. Indeed, rsync has the overhead of forking a child and communicating empty/'s bareness.
Interesting. Some time ago we had to regularly clear a directory with many files in an ext2 file system. We ended up mounting a separate small volume at that point in the VFS. When we needed to clear it, we would just make a new file system on the volume.
Yes, that's for sure the smartest way to deal with it, if you have such a specific requirement. Newer filesystems (zfs, btrfs) with their subvolumes also make that much easier, because they can dispose of subvolumes and recreate them very quickly.
If you need the same folder emptied but can accept a background process doing the deletion, you could rename the folder, create an empty one with the old name, and run something in the background to delete the renamed one.
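A minimal sketch of that trick in C (the paths are made-up examples; rename(2) only works if both live on the same filesystem):

    /* Sketch: swap the busy directory out, recreate it, and let a
     * child process do the slow delete. Paths are hypothetical. */
    #include <stdio.h>
    #include <sys/stat.h>
    #include <unistd.h>

    int main(void) {
        const char *dir = "/var/session";           /* hypothetical */
        const char *old = "/var/session.deleteme";  /* same filesystem */

        if (rename(dir, old) != 0) { perror("rename"); return 1; }
        if (mkdir(dir, 0755) != 0) { perror("mkdir"); return 1; }

        if (fork() == 0) {            /* child: delete in the background */
            execlp("rm", "rm", "-rf", old, (char *)NULL);
            _exit(127);
        }
        return 0;                     /* parent carries on immediately */
    }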