How statically linked programs run on Linux (thegreenplace.net)
137 points by ch0wn on Aug 13, 2012 | 23 comments



One thing many people are not aware of: statically linked programs run much faster than dynamically linked ones!

Or to be specific, they start up much faster: fork()/exec() is much slower for dynamically linked programs, while for statically linked programs it is much faster than most people think.
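
A rough sketch of how to check this for yourself (mine, not from the article): time N fork()+exec()+wait() cycles of whatever binary you pass in, then run it once against a dynamically linked build and once against a statically linked build of the same program.

    /* time_spawn.c - time repeated fork()+execv()+wait() of a given binary */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s /path/to/binary [iterations]\n", argv[0]);
            return 1;
        }
        int n = argc > 2 ? atoi(argv[2]) : 1000;

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < n; i++) {
            pid_t pid = fork();
            if (pid == 0) {
                char *args[] = { argv[1], NULL };
                execv(argv[1], args);
                _exit(127);             /* exec failed */
            }
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);

        double elapsed = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        printf("%d fork+exec cycles in %.3f s (%.1f us each)\n",
               n, elapsed, elapsed / n * 1e6);
        return 0;
    }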

There is a myth that forking is slow, and that caused people to abandon very simple, elegant and unixy solutions like CGI.

I wrote a whole web "framework" in rc shell (http://werc.cat-v.org) and the reality is that if you statically link your programs you can use shell scripts that do dozens of forks per request and still provide better performance than something like php with fcgi.

(Another great thing is that shell scripts and pipes naturally and automagically take advantage of multi-core systems, Unix once again beautifully shows how simple and beautiful concepts like fork and pipes have unforeseen benefits many decades after they were invented.)


On the contrary, fork() has scalability trouble as memory grows. Even with copy on write, copying page tables is still O(1) with respect to address space (granted, with a significant divisor). This overhead becomes apparent as programs grow to gigabyte size -- a fork which before took microseconds can begin to take milliseconds. Forking is slow in many situations.

The issue described above can be avoided by using posix_spawn(3), which on Linux uses vfork(2).
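
A minimal usage sketch (assuming /bin/true as the program to spawn, with error handling kept short):

    /* spawn_true.c - spawn /bin/true via posix_spawn(3) and wait for it */
    #include <spawn.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void)
    {
        pid_t pid;
        char *argv[] = { "/bin/true", NULL };

        int err = posix_spawn(&pid, "/bin/true", NULL, NULL, argv, environ);
        if (err != 0) {
            fprintf(stderr, "posix_spawn: %s\n", strerror(err));
            return 1;
        }

        int status;
        waitpid(pid, &status, 0);
        printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
        return 0;
    }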


Do you mean O(n)?


Sorry, yes. That's what I get for posting before coffee :)

The cost of fork() is linear with respect to memory size.
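
A sketch that makes the linear cost visible: fault in a configurable amount of memory, then time a single fork() whose child exits immediately. Numbers will obviously vary with kernel version and hardware.

    /* fork_cost.c - measure fork() latency after touching a large heap */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        size_t mb = argc > 1 ? (size_t)atoi(argv[1]) : 1024;
        char *buf = malloc(mb << 20);
        if (!buf) { perror("malloc"); return 1; }
        memset(buf, 1, mb << 20);       /* actually fault the pages in */

        struct timespec t0, t1;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        pid_t pid = fork();
        if (pid == 0)
            _exit(0);                   /* child does nothing */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        waitpid(pid, NULL, 0);

        double us = (t1.tv_sec - t0.tv_sec) * 1e6 + (t1.tv_nsec - t0.tv_nsec) / 1e3;
        printf("fork() with %zu MB resident took %.0f us\n", mb, us);
        free(buf);
        return 0;
    }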


To the contrary, using dynamic libs allows the OS to cache commonly-used libs. Almost every program uses libc. When your process resolves its dynamic dependencies as part of loading the ELF binary, the OS has the option to map in a cached copy of the library, possibly without even allocating an extra page of memory for it.

Static linking has a number of pitfalls: wasteful duplication on your hard disk, wasteful copying of the ELF binary into memory when you could tap the OS library cache for your dependencies (including extra page allocations), and the inability to upgrade a dependency of a binary without recompiling the binary.
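
One way to see that sharing in practice is to look at a process's memory map. A minimal sketch that just dumps /proc/self/maps: built dynamically you'll see libc and ld.so mapped in from shared objects, built with gcc -static you'll only see the executable itself plus heap, stack and vdso.

    /* show_maps.c - print this process's own memory map */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/self/maps", "r");
        if (!f) {
            perror("fopen /proc/self/maps");
            return 1;
        }

        char line[512];
        while (fgets(line, sizeof line, f))
            fputs(line, stdout);        /* file-backed lines show the shared libs */

        fclose(f);
        return 0;
    }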


If you're fork/exec'ing a program you've run before (which you have, if you're in an environment where this would matter), the binary is already cached in memory by the filesystem cache anyway. But you don't have the processing overhead of doing the dynamic linking and possible relocation, nor do you pay the overhead of calling functions in a shared library. If the library is relocated, you don't even save memory.

For very large programs, e.g. statically linking everything in an X11 environment, it might matter.


On the other hand, you'll never find that one of your programs has stopped working because Ulrich Drepper changed libc behavior again.


Ulrich Drepper no longer works on glibc; last I heard he was working for Goldman Sachs.


> Another great thing is that shell scripts and pipes naturally and automagically take advantage of multi-core systems

What about the sharing of state?


"What about the sharing of state?"

There isn't any shared state between components of a pipeline.


They don't naturally and automagically take advantage of that, no.


> Unix once again beautifully shows how simple and beautiful concepts like fork and pipes have unforeseen benefits many decades after they were invented

Multiprocessing and using multiple processes (instead of threads) to take advantage of them predates UNIX, by a lot.


That's not the point. Unix invented the idea of a simple system call to create a process by forking, and (more importantly) the idea of a "pipe" syntax in the shell to connect data streams between processes in a natural and intuitive way. These were usability and elegance enhancements, not performance things.
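
For concreteness, here is roughly what the shell does under the hood for "ls | wc -l" (a sketch with minimal error handling). Note that both sides run concurrently, which is where pipelines pick up extra cores for free.

    /* mini_pipe.c - one pipe(), two fork()s, dup2() onto stdin/stdout, then exec */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int fds[2];
        if (pipe(fds) < 0) { perror("pipe"); return 1; }

        if (fork() == 0) {                  /* left side: ls */
            dup2(fds[1], STDOUT_FILENO);
            close(fds[0]); close(fds[1]);
            execlp("ls", "ls", (char *)NULL);
            _exit(127);
        }
        if (fork() == 0) {                  /* right side: wc -l */
            dup2(fds[0], STDIN_FILENO);
            close(fds[0]); close(fds[1]);
            execlp("wc", "wc", "-l", (char *)NULL);
            _exit(127);
        }

        close(fds[0]); close(fds[1]);       /* parent keeps no pipe ends open */
        while (wait(NULL) > 0)
            ;                               /* reap both children */
        return 0;
    }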


If one is interested in exploring more about linking and loading, here is an excellent paper on shared libs and dynamic loading:

The Inside Story on Shared Libraries and Dynamic Loading: http://www.dabeaz.com/papers/CiSE/c5090.pdf


"Linkers and Loaders" by John R. Levine is referenced in your paper, and is a great resource: http://linker.iecc.com/

The beta site includes the book in draft form, for those with the patience to deal with the occasional typo...


This is one of my favorite books (review - http://eli.thegreenplace.net/2010/01/25/book-review-linkers-...). I must have read it more than 3 times by now, counting all the partial re-reads.


I'd recommend it too. I used to hang out on comp.compilers back when John was drafting it so, thanks to vetting the AIX bits, I got my name in the final book's acknowledgements (alongside dozens of others). Small thing, I know, but still pleases me when I think of it. :-)


Very nice article and a clear explanation, but if I may suggest something: please mention the user-space trick LD_PRELOAD so users can see what changes when they actually point to another libc version. Not 100% necessary, but still fun, and it would complement this article nicely.

One thing would be worth mentioning: do_execve is reached through its more commonly known wrapper, sys_execve.

I'm happy to see this type of article; very useful indeed. I can't wait for the follow-up.


LD_PRELOAD belongs in the follow-up, which discusses dynamically linked programs. It's a variable used by the dynamic linker, which isn't invoked for statically linked programs.
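
For the dynamically linked case, the trick itself is only a few lines. A hedged sketch (the file and program names here are placeholders): interpose libc's puts() and forward to the real one.

    /* preload_puts.c - LD_PRELOAD interposition sketch
     *
     * Build and run (typical gcc/glibc invocation; the target must itself be
     * dynamically linked):
     *   gcc -shared -fPIC -o preload_puts.so preload_puts.c -ldl
     *   LD_PRELOAD=./preload_puts.so ./some_dynamically_linked_program
     *
     * Against a statically linked binary this does nothing, because ld.so
     * never runs and therefore never sees LD_PRELOAD. */
    #define _GNU_SOURCE
    #include <dlfcn.h>
    #include <stdio.h>

    int puts(const char *s)
    {
        static int (*real_puts)(const char *) = NULL;
        if (!real_puts)
            real_puts = (int (*)(const char *))dlsym(RTLD_NEXT, "puts");

        fprintf(stderr, "[preload] puts(\"%s\")\n", s);
        return real_puts(s);            /* forward to libc's puts */
    }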


Ian Lance Taylor (author of the gold linker) wrote a wonderful 20-part essay on how linkers work: http://www.airs.com/blog/archives/38

I'd also recommend reading the rest of his blog, at least the programming section. It's very insightful.


Ian Lance Taylor is definitely a person I admire. Researching these topics invariably brings you to his posts on binutils, which are very helpful.

Moreover, he wrote "gold", which does what "ld" does but with source code that is actually comprehensible :), and faster too, or so I've heard.


Great article.

But I gave up on static linking on Linux after trying to statically link a distributed network client we built in a university project that makes heavy use of C++11 std::thread and Boost. It compiles fine, but segfaults on startup. This is a known issue[1], but I did not investigate further.

It appears to me, as a complete outsider to glibc/gcc Linux development, that static linking is discouraged on Linux[2].

1: http://gcc.gnu.org/bugzilla/show_bug.cgi?id=52590

2: http://sourceware.org/bugzilla/show_bug.cgi?id=10652


Static linking has its problems, especially when shared data and multithreading are involved. Or memory management. Dynamic linking has its problems though (DLL/DSO hell is just one). It's a tradeoff. But yes, it is also my understanding that dynamic linking is preferred. It's also what I usually do unless I have a good reason not to.



