A faster way to delete millions of files in a directory (linuxnote.net)
135 points by bluetooth on May 31, 2013 | 33 comments



It's sad to see so much guesswork around here...

Here's GNU coreutils rm [0] calling its remove() function [1], which itself uses fts to open, traverse, and remove each entry [2], versus rsync's delete() [3] calling its {{robust,do}_,}unlink() functions [4][5].

Now a little profiling could certainly help.

(damn gitweb that doesn't highlight the referenced line)

[0]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...

[1]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...

[2]: http://git.savannah.gnu.org/gitweb/?p=coreutils.git;a=blob;f...

[3]: http://rsync.samba.org/ftp/unpacked/rsync/delete.c

[4]: http://rsync.samba.org/ftp/unpacked/rsync/util.c

[5]: http://rsync.samba.org/ftp/unpacked/rsync/syscall.c


Thanks. I was on a phone last night and wasn't able to find the sources easily. But there are still a lot of unknowns in the article for anyone trying to repro it.

So one thing that's interesting is that both rsync and rm stat every file in each directory to determine whether to use rmdir or unlink, and to perform a depth-first removal. I wonder if it would be faster to skip the stats: just call unlink on everything and check errno for EISDIR (not POSIX, but returned by Linux), then descend as needed and use rmdir on the way back up.
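
A rough sketch of that idea in C (an illustration of the approach described above, not rm's actual code): try unlink() on every entry, and only when Linux answers EISDIR descend into the directory and rmdir() it on the way back up.

    /* Sketch only: unlink()-first removal that avoids a stat() per entry.
     * Relies on Linux returning EISDIR when unlink() hits a directory. */
    #include <dirent.h>
    #include <errno.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void remove_tree(const char *path) {
        DIR *dir = opendir(path);
        if (!dir)
            return;

        struct dirent *e;
        char child[4096];
        while ((e = readdir(dir)) != NULL) {
            if (!strcmp(e->d_name, ".") || !strcmp(e->d_name, ".."))
                continue;
            snprintf(child, sizeof child, "%s/%s", path, e->d_name);
            if (unlink(child) == -1 && errno == EISDIR)
                remove_tree(child);      /* descend only when forced to */
        }
        closedir(dir);
        rmdir(path);                     /* empty by now, on the way back up */
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            remove_tree(argv[1]);
        return 0;
    }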


FWIW, a directory with millions of files is likely to be quite large (I'm referring to the directory inode itself, which contains a mapping of filenames to inodes). Depending upon the file system, reclaiming the space used by all those millions of mappings might require creating a new directory into which to move the remaining files.

BTW, having millions of files in an ext3 directory in the first place is probably a bad idea. Instead, layer the files into two or three directory levels. See here:

http://www.redhat.com/archives/ext3-users/2007-August/msg000...

(Git for example places its objects under 1 of 256 directories based on the first hex byte representation of the object's SHA-1.)
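
A toy illustration of that layering (a hypothetical naming scheme, not git's actual code): hash the filename and use the low byte to pick one of 256 first-level buckets.

    /* Build a bucketed path "xx/name", where xx is a two-hex-digit bucket
     * derived from an FNV-1a hash of the name (256 buckets, like git's
     * objects/ layout). Purely illustrative. */
    #include <stdint.h>
    #include <stdio.h>

    static void bucketed_path(const char *name, char *out, size_t outlen) {
        uint32_t h = 2166136261u;                  /* FNV-1a 32-bit */
        for (const char *p = name; *p; p++)
            h = (h ^ (uint8_t)*p) * 16777619u;
        snprintf(out, outlen, "%02x/%s", (unsigned)(h & 0xff), name);
    }

    int main(void) {
        char path[512];
        bucketed_path("session-000123.tif", path, sizeof path);
        printf("%s\n", path);   /* prints e.g. "ab/session-000123.tif" */
        return 0;
    }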


As I remember it, HP-UX had some very poor performance characteristics once a single directory got into the thousands of files. It slowed down all read/write operations in that directory. We're talking multiple seconds to read a small 1K file, let alone write one.


This has gotten much better over time. A filesystem should work like a database (for larger blobs of data), so it should cope, but scalability is still limited. Newer filesystems may be better.


Details will vary depending on the filesystem. Bad old filesystems are O(n^2) in the number of files in a directory; ext3fs is fine. Also, tools like find and rm often do more work per file than strictly necessary. I'm curious why rsync would be better myself; at first blush it'd seem the worst choice!

I've salvaged an unwieldy directory by using Python to directly call unlink(2). Details: http://www.somebits.com/weblog/tech/bad/giant-directories.ht...


For anybody who might try to copy and paste from the article: the command is actually "rsync -a --delete empty/ your_dir". The dashes are improperly encoded, so a direct copy/paste won't work.


I was just looking at that. The author's use of 'a' in the command (for the directory) made it confusing. Thanks for making yours clearer to read at a glance.


No mention of filesystem. As it's RHEL 5.4 I'm going to guess ext3, which uses indirect blocks instead of extents for large files (which a directory containing millions of files surely is). Would also be useful to confirm that dir_index is enabled.

Some useful background material:

http://computer-forensics.sans.org/blog/2008/12/24/understan...

http://static.usenix.org/publications/library/proceedings/al...


Also, the storage is RAID-10, so it's striped, and who knows what kind of caching goes on in the hardware controller.

The numbers are not that useful. It's notable that rsync:rm went from 1:12 in his old test to 1:3 in his new test, but we really don't know anything about why.

FWIW (very little), I did a similar test on a convenient OSX box (HFS+, 1000000 zero-byte files, single spindle), and rm won. rsync was next (+25%), straight C came in a little higher, then ruby (20% over rsync). Maybe BSD rm is awesome.


There is an error in the blog posting. If you look at the original output from the rsync command, you will see that the elapsed time should be 12.42 seconds and the system time should be 10.60 seconds. Elapsed time is a third that of rm -rf and system time is 70% as much.


It would certainly be nice if there were a specialized function to do this - no need for a million context switches, and the filesystem code can probably delete things more intelligently than individually removing each file, with accompanying intermediate bookkeeping.


rsync is an order of magnitude faster than rm -rf. Why would that be? (OK, I'm being lazy.)


I'm guessing it's because rsync collects a list of files first and then deletes them, rather than interleaving the two operations. This would mean seek latency gets in the way less, since the read head isn't constantly switching between the filesystem index and the inodes themselves.

But that's just a guess.


Firstly, delete is a very expensive operation in ext4 due to clearing of the file-contents bitmaps, among other things.

rsync definitely builds a list of files to delete first, so that will help.

Perhaps it also puts the files into inode-number order, which would also help, since that's related to the directory hash order and to the order of inodes in the inode table.


That is very odd. Hard to answer without looking at the code or using strace. On some filesystems, the directory entry is a linked list of names with pointers to inodes. Maybe rsync optimizes in such a way as to traverse the list the fewest times? I'm just thinking out loud here...


Even faster:

    mkdir ../.tmp${RANDOM} && mv ./* ../.tmp[0-9]* && rm -rf ../.tmp[0-9]* &  # or the rsync trick

As long as ../ is on the same device, that should clear the directory instantaneously. That is the point, right? Of course, if you want an rm with lower IO-wait or lower CPU use, use the rsync method, but if you want something that clears a directory as fast as possible, this is fast. Tested with:

    for I in `seq 1 1000000`; do echo ${I} > ./${I}; done; sync  # much faster than "touch"


How is that mv ./* not going to blow through the argv limit with millions of files?


A) You're cheating by using the tmpfs filesystem! B) Your directory names are numeric, not alphanumeric, and not randomly long. C) Your computer specs are missing.

Now can you please explain why you think that this is faster?

    mkdir ../.tmp${RANDOM} &&
    mv ./* ../.tmp[0-9]* &&
    rm -rf ../.tmp[0-9]* &

I believe that if I write a minimal tool in C it would be much faster than rsync.

I would read the fs index if one exists, otherwise build a list of directories/files, then unlink them in parallel in inode order. Later, optimize operations based on the fs.
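
For what it's worth, the dead-simple version of such a tool (a minimal sketch, not the parallel, inode-ordered one described above) is only a few lines with nftw(); it still stats every entry, so it mostly shows how small the baseline is.

    /* Minimal recursive delete using nftw(): depth-first walk, remove()
     * each entry. No parallelism, no inode ordering. */
    #define _XOPEN_SOURCE 500
    #include <ftw.h>
    #include <stdio.h>

    static int rm_entry(const char *path, const struct stat *sb,
                        int type, struct FTW *ftw) {
        (void)sb; (void)type; (void)ftw;
        return remove(path);     /* unlink() for files, rmdir() for directories */
    }

    int main(int argc, char **argv) {
        if (argc < 2) {
            fprintf(stderr, "usage: %s dir\n", argv[0]);
            return 1;
        }
        /* FTW_DEPTH: children before their directory; FTW_PHYS: no symlink following. */
        return nftw(argv[1], rm_entry, 64, FTW_DEPTH | FTW_PHYS) ? 1 : 0;
    }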


I don't think tmpfs is involved at all; it's just moving things out of the way first instead of deleting them in place. The actual delete runs in the background, so you get an interactive shell back and can keep working while the delete happens without blocking you. I usually do an approximate equivalent: rename the directory itself, make a new one in its place, and remove the renamed one in the background.


Also, no need for the seq(1) binary; in Bash you can use for i in {1..1000000} instead.


I stumbled over this a few months ago, and the issue was that readdir(), used by rm on the box I was using, by default allocated a small buffer (the usual 4KB), and with millions of files that turned into millions of syscalls (and that's just to find out which files to delete).

A small program using getdents() with a large buffer (5MB or so) speeds it up a lot.

If you want to be kind to your hard drive, sorting the buffer by inode before running the unlink()s lets you access the disk semi-sequentially (fewer head jumps).
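
A hedged sketch of that approach (Linux-specific; assumes an older glibc without a getdents64() wrapper, hence the raw syscall): read the directory with a 5MB buffer, sort the entries by inode number, then unlink them in that order. It assumes a flat directory of plain files, as in the article.

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    #define BUF_SIZE (5 * 1024 * 1024)   /* 5MB buffer, as suggested above */

    /* Layout of the records the kernel writes for getdents64(2). */
    struct linux_dirent64 {
        uint64_t       d_ino;
        int64_t        d_off;
        unsigned short d_reclen;
        unsigned char  d_type;
        char           d_name[];
    };

    struct entry { uint64_t ino; char name[256]; };

    static int by_inode(const void *a, const void *b) {
        uint64_t x = ((const struct entry *)a)->ino;
        uint64_t y = ((const struct entry *)b)->ino;
        return (x > y) - (x < y);
    }

    int main(int argc, char **argv) {
        int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
        if (fd < 0) { perror("open"); return 1; }

        char *buf = malloc(BUF_SIZE);
        struct entry *list = NULL;
        size_t n = 0, cap = 0;
        long nread;

        /* Each getdents64() call returns thousands of entries at once. */
        while ((nread = syscall(SYS_getdents64, fd, buf, BUF_SIZE)) > 0) {
            for (long pos = 0; pos < nread; ) {
                struct linux_dirent64 *d = (struct linux_dirent64 *)(buf + pos);
                if (strcmp(d->d_name, ".") != 0 && strcmp(d->d_name, "..") != 0) {
                    if (n == cap) {
                        cap = cap ? cap * 2 : 4096;
                        list = realloc(list, cap * sizeof *list);
                    }
                    list[n].ino = d->d_ino;
                    snprintf(list[n].name, sizeof list[n].name, "%s", d->d_name);
                    n++;
                }
                pos += d->d_reclen;
            }
        }

        /* Unlink in inode order so disk access stays roughly sequential. */
        qsort(list, n, sizeof *list, by_inode);
        for (size_t i = 0; i < n; i++)
            unlinkat(fd, list[i].name, 0);

        free(list);
        free(buf);
        close(fd);
        return 0;
    }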


This Perl beats rsync by quite a margin here.

    perl -e 'opendir D, "."; @f = grep {$_ ne "." && $_ ne ".."} readdir D;
        unlink(@f) == $#f + 1 or die'
It goes a bit quicker still if @f and the error handling are omitted.

The original article is comparing different things some of the time, e.g. find is having to stat(2) everything to test if it's a file.


I use perl for this as well.

  perl -e 'chdir "/var/session" or die; opendir D, ".";
      while ($f = readdir D) { unlink $f }'
It is very efficient memory-wise compared to the other options, as well as being much faster. It is also easy to apply filters as you would with -mtime and the like in find; just change the end statement to:

  { if (-M $f > 30) {unlink $f} }
to affect files modified more than 30 days ago.


It's not that I'd use Perl for the task, just that I don't think rsync is special. Indeed, rsync has the overhead of forking a child and communicating empty/'s bareness.


More along these same lines:

How to delete million of files on busy Linux servers ("Argument list too long")

http://pc-freak.net/blog/how-to-delete-million-of-files-on-b...


Interesting. Some time ago we had to regularly clear a directory with many files on an ext2 file system. We ended up mounting a separate small volume at that point in the VFS. When we needed to clear it, we would just make a new file system on the volume.


Yes, that's for sure the smartest way to deal with it, if you have such a specific requirement. Newer filesystems (zfs, btrfs) with their subvolumes also make that much easier, because they can dispose of subvolumes and recreate them very quickly.

    # btrfs subvolume create foobar
    Create subvolume '/btrfs/foobar'

    ### now do all kinds of atrocities in this filesystem

    # btrfs subvolume delete foobar
    Delete subvolume '/btrfs/foobar'


Brings back memories of an in-house correspondence application I once encountered - 16 million TIFFs in a single directory.

The lead dev responsible for the app was also fond of hard-coding IP addresses and wouldn't even entertain talk of doing anything differently.

I got out of there ASAP.


Another excellent resource is this Server Fault question: http://serverfault.com/questions/183821/rm-on-a-directory-wi...


If you need the same folder emptied but can accept the deletion happening in the background, you can rename the folder, create an empty one with the old name, and run something to delete the renamed one in the background.


I had to delete a few million files in bash once. 'find' didn't work. I used Perl to overcome the issues.

    opendir D, "."; while ($n = readdir D) { unlink $n }


The results are statistically insignificant.



