Excellent writeup. Computer systems are discoverable. That attitude, along with some of the basic tools (such as strace, ltrace, man pages, debuggers and a compiler) and a willingness to dive into system source code go a long way. If your tracing leads you to a library call and you don't know what's going on inside, find the source code. If it's the kernel, load up lxr (http://lxr.linux.no/).
Very interesting. I spotted two minor problems with the posted code.
Doing this:
#define BUF_SIZE 1024 * 1024 * 5
to define a numerical constant is a bit scary, since depending on how the macro is used, operator precedence can silently give you the wrong value. It's better to enclose the value in parentheses. Personally I would probably write the right hand side as (5 << 20).
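For anyone who hasn't hit this before, here's a minimal, purely hypothetical sketch of how the unparenthesized form goes wrong once the macro lands inside a larger expression (BUF_SIZE_BAD/BUF_SIZE_GOOD are made-up names for illustration):

    /* hypothetical illustration of the precedence trap */
    #define BUF_SIZE_BAD  1024 * 1024 * 5     /* expands textually, unparenthesized */
    #define BUF_SIZE_GOOD (1024 * 1024 * 5)

    /* 10 % BUF_SIZE_BAD expands to 10 % 1024 * 1024 * 5,
       which groups as (10 % 1024) * 1024 * 5 == 51200 -- not 10.
       10 % BUF_SIZE_GOOD is 10, as intended. */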
The first time the modification to skip entries with inode 0 is mentioned, the code is wrong:
I did this by adding
if (dp->d_ino == 0) printf(...);
This should use !=, not == (it's correct the second time, but this adds confusion).
On second thought (too late to edit), it's quite likely that I would not even bother with a define for this, but instead just set the size directly in the code, e.g. char buffer[5 << 20], then rely on sizeof buffer in the call where the buffer's size is needed. I prefer sizeof whenever possible.
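Roughly what I have in mind - just a sketch along the lines of the article's raw getdents(2) approach (struct layout and syscall usage per the getdents(2) man page, error handling kept minimal):

    /* sketch: no size macro; the buffer is sized in its declaration and
       sizeof is used at the call site */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/syscall.h>
    #include <unistd.h>

    struct linux_dirent {              /* layout per getdents(2) */
        unsigned long  d_ino;
        unsigned long  d_off;
        unsigned short d_reclen;
        char           d_name[];
    };

    int main(int argc, char *argv[])
    {
        static char buffer[5 << 20];   /* 5 MB, static so it stays off the stack */
        int fd = open(argc > 1 ? argv[1] : ".", O_RDONLY | O_DIRECTORY);
        if (fd < 0) { perror("open"); return 1; }

        for (;;) {
            long n = syscall(SYS_getdents, fd, buffer, sizeof buffer);
            if (n <= 0) break;
            for (long pos = 0; pos < n; ) {
                struct linux_dirent *dp = (struct linux_dirent *)(buffer + pos);
                if (dp->d_ino != 0)    /* skip deleted entries -- note != */
                    printf("%s\n", dp->d_name);
                pos += dp->d_reclen;
            }
        }
        close(fd);
        return 0;
    }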
Using (5 << 20) instead of (5 * 1024 * 1024) is premature optimization. Both are constant expressions, so a modern C compiler folds (5 * 1024 * 1024) to the exact same value at compile time (and even a runtime multiply by a constant gets turned into whatever combination of shifts and additions suits your architecture).
And I definitely prefer defined constants for two reasons. One, it's likely you'll have to declare multiple such buffers in different places. Two, if I want to tune the parameter, I'd rather do it at the top of a source file with other such defined constants than go hunting for the declaration in the code. I do agree that sizeof() is preferable when it's an option.
Correct, to me it's idiomatic. I realize it's not for everyone, so the ultimate best code is probably the way it was written ... On the other hand, sometimes it's fun to push the envelope just a tiny bit, and hope that readers will actually learn something from the code. But that's a thin edge to be balancing on.
I suspect the author is incorrect in his claim that reading in 32k chunks is responsible for the slowness. Due to read ahead and buffering, Unix-like systems tend to do reasonably well on small reads. Yes, big reads are better, but small reads are not unreasonably slow.
To test this, he should try "ls | cat". On large directories that often runs many orders of magnitude faster than plain "ls". This is because, I believe, ls on most people's systems wants by default to display information such as file modes or types via coloring or otherwise decorating the file names, and getting that information requires looking at the inode for each file.
It's all those inode lookups that slow things down, as the inodes are likely to be scattered all over the place. When you do "ls | cat", ls notices that output is not a terminal, turns off the fancy stuff, and just lists the file names. The names can be determined entirely from the directory itself, so performance is much better.
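A rough sketch of the names-only path, for contrast (nothing in this loop ever touches an inode):

    /* plain readdir() returns names straight from the directory data;
       it's the per-entry stat()/lstat() (for colors, -F, -l, sorting by
       time, ...) that forces the scattered inode reads */
    #include <dirent.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        DIR *d = opendir(argc > 1 ? argv[1] : ".");
        if (!d) { perror("opendir"); return 1; }

        struct dirent *de;
        while ((de = readdir(d)) != NULL)
            puts(de->d_name);          /* no stat() anywhere in this loop */

        closedir(d);
        return 0;
    }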
The original impetus for the post was Python's os.listdir(), which as far as I know doesn't stat(). ls just made the blog post more interesting :-).
I was surprised that the 32K reads were taking so long. It's possible since it was on a virtualized disk ("in the cloud") that something else was slowing down disk IO (like Xen).
But I can assure you that a larger read buffer performed much better in this given scenario.
This is just a hypothesis based on very little actual knowledge, but perhaps a very long scheduling interval is responsible for the slowness with smaller reads? Consider this scenario: the virtualization hypervisor is on a reasonably loaded system, and decides to block the virtual machine for every single read. Since the physical system has several other VMs on it, whenever the VM in question loses its time slice it has to wait a long time to get another one. Thus, even if the 32K read itself happens quickly, the act of reading alone causes a delay of n milliseconds. If you increase the read size, your VM still gets scheduled 1000/n times per second, but each time it gets scheduled it reads 5MB instead of 32K.
I just tried this on non-virtualized hardware with 10 million files in a single directory using the zfsonlinux native zfs implementation for linux. It took a little over 4 minutes to do a "\ls -f | wc -l" so this might very well be something to do with virtualization.
I'll try an ext3 file system just for giggles and post the results.
Edit:
Didn't have an ext3 file system with enough free inodes handy so I used an ext4 file system. It takes too long to create 10 million files so I cut the test off early. It took about 7 seconds to complete a "\ls -f | wc -l" with 6.2 million files in a single directory.
You are right that 32k buffers should be more than enough to read 5MB in a reasonable time, regardless of disk/VM architecture, but I don't think stat was the problem either. My guess is that readdir, or possibly getdents, is O(n^2) somewhere.
[just noticed it was 500M (oh wow), but same difference]
Really surprised by all the high-fiving and positive excitement going on about this article.
Setting aside the eight million files in one directory level, the whole basis for this exercise - using the filesystem as the storage layer for a k/v 'database' - is just twisted.
Systems people tend to do the simplest, reasonable thing that works, then forget about it until it doesn't work anymore. This means that sometimes you'll make design choices that look ugly, but really, they don't matter.
Absolutely. The sad reality is that, to a lot of linux users right now (and I'm talking about people that know they're running linux, not android users), it's a black box.
Software is "packages" that you install with "synaptic".
This seemed like a developer-targeted article, though. I mean, who is having trouble with even 1 million directory entries, let alone 8 million, yet is afraid of a compiler?
Both valid perspectives; it depends on your priorities. Some people want computers to be a black box, because when the black box works, it means less cognitive load to use.
Well, once the hardware started working and the graphical toolkit options expanded beyond Tcl/Tk / Perl/Tk...
Ubuntu accelerated the process. Despite being a long time Linux user and programmer, I'd rather know the machine is goin' to work when I haven't done anything to mangle the beast.
I've spent hundreds, if not thousands, of hours in the past just getting networking drivers to function. The brave new world of Linux is a good thing; the interp-only folk serve only to bolster the ecosystem, not harm it.
Throw a novice Ubuntu user into a FreeBSD system and tell them to install a port, and 9 times out of 10 they'll freak out once they see GCC output on the screen.
Nothing against Ubuntu (RedHat, SuSE, Debian, Arch, et al.), but source compilation is something they have all been letting their users avoid for a long time. The target audience is different.
That's because linux commands are generally quiet unless something goes wrong. ./configure, make and gcc can produce pages of output even when nothing is wrong.
Even when there's a compilation step it can be hidden. Homebrew on OS X does compile each package, but produces no actual compiler output. VMware Player regularly recompiles kernel modules when it detects that a kernel upgrade has invalidated them. To the user it looks like just another bullet point in the installation steps.
Nope, he does not mean open a port. Ports is the name of the packaging system FreeBSD uses, and a port is the equivalent (more or less) of an RPM or deb source package.
If your server had a network connection problem and you needed to open up a port, we're already talking about network ports, not software, so you would try to diagnose your network.
If you asked me how you could get the old game `rogue` on your system, I would tell you to go install the freebsd-games port, which has nothing to do with networking, and so it clearly means that you need to look in your ports tree.
Arch has a Ports-inspired system built around PKGBUILD files. Given the level of competence the distro expects of the user to start with, I doubt most Arch users would have a great deal of trouble adapting to FreeBSD.
I've rarely seen macports, FreeBSD ports, or NetBSD ports used as a harness to install modified versions of software. Hack jobs are almost always manual, so this purported benefit is a canard.
Are there folks out there that are afraid to compile code and modify it?
Some developers (myself included) may have a general preference for sticking with the "official" packages to avoid extra work when bringing up a new machine or migrating to a new distro version.
Those arguments will remove the need for find to stat each directory entry. Regardless, this is a nice walkthrough of low-level details that are often overlooked.
Actually 'find' will also stat each entry no matter what.
Many of the standard tools that most people would intuitively expect to be rather optimized (find, rsync, gzip) are embarrassingly inefficient under the hood and turn belly-up when confronted with data of any significant size.
That probably stems from the fact that most of the development on these tools took place during a time when 1GB harddrives were "huge" and SMP was "high end".
The only issue I'm aware of with gzip is actually in zlib, where it stored 32-bit byte counters, but those were strictly optional and it works fine with data that overflowed them. The zlib window size may be only 32k, but bzip2 doesn't do that much better with 900k and a better algorithm, so I wouldn't consider it embarrassingly inefficient.
How do you tell a file from a directory without stat()ing it? The d_type field is not portable. Since find and other tools like it need to recursively descend a directory tree, a stat() for each file to determine its type is unavoidable.
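Where d_type is available you can at least skip the stat() most of the time, but you still need the fallback; a sketch of the usual pattern (is_dir is a hypothetical helper, not what find actually does):

    /* use d_type when the kernel/filesystem fills it in, fall back to
       lstat() otherwise; d_type is a BSD/Linux extension, and some
       filesystems always report DT_UNKNOWN */
    #define _DEFAULT_SOURCE
    #include <dirent.h>
    #include <stdio.h>
    #include <sys/stat.h>

    static int is_dir(const char *parent, const struct dirent *de)
    {
    #ifdef DT_DIR
        if (de->d_type == DT_DIR)
            return 1;
        if (de->d_type != DT_UNKNOWN)
            return 0;
    #endif
        /* no (reliable) d_type, so we have to hit the inode after all */
        char path[4096];
        struct stat st;
        snprintf(path, sizeof path, "%s/%s", parent, de->d_name);
        return lstat(path, &st) == 0 && S_ISDIR(st.st_mode);
    }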
But times have changed, and development isn't dead. Why haven't they been updated? The optimizations you imply are often straightforward and well understood, not major undertakings to implement.
"But times have changed, and development isn't dead. Why haven't they been updated?"
Maybe because listing 8M files is not a common use case, and there just isn't the motivation to update otherwise perfectly working code. It's not an itchy problem.
He tried using find, but in this case the libc readdir function was the bottleneck, so find doesn't help.
Often, when ls is being slow, you can speed things up drastically by disabling the things that require a stat (colors, -F, etc.), which are often added by distros' shell configs (invoking ls with its full path is an easy way to bypass shell aliases). Also, sorting is on by default, which slows things down for obvious reasons.
When all you need to do is kill the stat overhead for small dirs on slow file systems, "echo *" is beautifully simple.
The man pages really do try to put you off using the syscalls. I only used getdents for the first time when I was writing a syscall interface for Lua: https://github.com/justincormack/ljsyscall - I've not tried it on 8M files, but you can set the buffer size, so it should work.
The native aio interface has advantages over the POSIX wrapper too, as do some of the other interfaces. On the other hand, the futex interface is not nice to use, as it requires architecture-specific assembly which seems undocumented...
In a similar fashion, here's something that reads directories as files on Windows: https://gist.github.com/1148070 - it lets me do some things faster (it's very crude, rough code, written for exploration rather than real usage).
I hit this with a log trimming script I wrote in perl a few months back. We had one directory with about 4 million files.
Just in case you're curious, I let it run and it took about 25 hours to get all of the directory entries. One with about 3 million files took 12 hours, so I'm not sure how long it would have taken if you'd let ls run of its own accord.
Only 200k files afterwards though. :D
I think I'll poke around with this after I get a coffee; I remember stracing the interpreter and getting annoyed at its 32k-or-bust behavior. But since I didn't have a time limit, I didn't much care about runtime. Thanks for the write-up!
They were never meant to have anywhere near that number of files in the first place. The article mentions that the cleaning up of old files failed - presumably they didn't bother with subdirectories because under normal circumstances the single directory approach was working fine.
My impulse would be to modify libc's readdir() to use a larger buffer size instead of writing my own C program. Would that be much stupider for some reason (besides packaging/dependencies)? Do libc clients expect 32k buffers and die if they receive something else?
I ran into this problem before. One of our engineers had a script that kept some temporary data in his home directory; it went crazy at one point and generated millions of files (no clue why).
Anyways, this was HELL on our backup solution, which ended up failing several nights in a row on that area. Luckily, since the files were largely useless, this left our options open. I think the kicker was the 'rm' command not working (same issue as listing? This was a Solaris 8 system I think). I believe we ended up moving all of the other data off of that filesystem and newfs'ing it.
Having a fan-out directory instead of putting all files at the root would have helped to avoid this problem altogether.
Here's an example. The .git/objects/ directory can grow to have up to 256 sub-directories. If an object hashes to 0a1b2c3d then it gets written to objects/0a/1b2c3d. Lookups are still fast and navigating can be done without resorting to writing an 'ls' replacement.
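Roughly the same idea in code form (fanout_path is a hypothetical helper, not git's actual implementation):

    /* turn a hex id like "0a1b2c3d" into a two-level fan-out path,
       e.g. objects/0a/1b2c3d -- same shape as git's loose-object
       layout, just for illustration */
    #include <stdio.h>

    static void fanout_path(char *out, size_t outsz,
                            const char *root, const char *hexid)
    {
        /* first two hex chars pick the subdirectory, the rest name the file */
        snprintf(out, outsz, "%s/%.2s/%s", root, hexid, hexid + 2);
    }

    int main(void)
    {
        char path[256];
        fanout_path(path, sizeof path, "objects", "0a1b2c3d");
        puts(path);    /* prints: objects/0a/1b2c3d */
        return 0;
    }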
ext3 and ext4 both support hashed (HTree) directory indexes, which I guess is what you're thinking of, but that only matters for creating/deleting/looking up a particular file, not for listing them all. The fact that the system didn't slow to a crawl creating the directory suggests that's not the problem.
The article illustrates any number of ways of doing things incorrectly.
The premise of the article is a bad precedent for stable environments: let's bend the OS so that it plays nicely with what's clearly misuse and misunderstanding of filesystems.
The only way eight million files should ever end up in a single directory level is by accident, and that's not the case in the scenario outlined here.