
Forgetting about Rust for a second. Talking about dynamically linked libs in general.

Dynamic linking was an optimization which came about when memory was expensive. Memory is no longer expensive.

Is 1.3GB (or even 13GB) a lot on your hardware[1]?

According to "Do not prematurely optimize": Pretending we never had dynamic linking, and given today's hardware constraints[2], as a community would we choose to reimplement and universally adopt this optimization?

[1] Keep in mind that a single "modern" application, on average, weighs in at tens of MBs, if not GBs.

[2] I'm talking about the general case, for the majority of OS distributions, ignoring the relatively exceptional case of embedded systems, which do in fact still need it.




Main memory is cheap but slow. Having a frequently called function in a shared library vs. statically linked code could mean the difference between the code executing from the CPU's cache and executing from main memory.

Even the latest desktop processors have an L3 cache of only a few MB.


But static linking could mean that function will be inlined.


Inlining saves the time lost to jumping about, but it can cost time if it causes code replication (same as loop unrolling), because it can bloat the hot code until it no longer fits in the smallest cache.

So the arguments against inlining apply even more strongly when every program is statically linked: the same code (the standard library) will exist in memory in many places, and it will get dumped and reloaded into L2/L3 on every process swap. Nothing slower than having to wait for something to be faulted in.


> Nothing slower than having to wait for something to be faulted in.

There is something slower: when your executables are so large that you have to hit the swap drive on process swaps.


It means that function might be inlined.

And sufficiently aggressive inlining will increase the program size further. This might or might not be compensated for by the increase in instruction-pointer locality.
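
For a concrete (if simplified) illustration in Rust, where the function is a made-up stand-in rather than anything from the article: within a statically linked crate graph the compiler is free to inline a small function into its callers, while a call across a shared-library boundary has to stay an out-of-line call resolved by the dynamic linker.

    // Hypothetical example; add_one is just a stand-in function.
    // Statically linked and optimized, the compiler may inline add_one into
    // main (cross-crate this needs #[inline] or LTO). Exported from a .so,
    // the call would instead go through the PLT and stay out of line.
    #[inline]
    pub fn add_one(x: u64) -> u64 {
        x + 1
    }

    fn main() {
        // With optimizations this likely folds to a constant at compile time.
        println!("{}", add_one(41));
    }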


> Having a frequently called function in a shared library vs. statically linked code could mean the difference between the code executing from the CPU's cache and executing from main memory.

I am under the impression that when a process switch happens, the CPU caches are flushed.


No. Only the TLBs are flushed (and probably only partially). TLBs are used to associate virtual addresses with physical addresses, and memory maps are different per process.

(That's one reason why it's beneficial to schedule a process on the same CPU if possible - the data is still in the cache)


Dynamic linking offers modularity and separation of concerns.

I don't really care which point-release of zlib my program is linked with, I just want to decompress stuff. If someone finds a bug (or exploit), I am not the best person to quickly realize it and release an update -- the maintainer of zlib, and the packagers, and OS distributions, and sysadmins are in a much better position. But if it's statically linked, then developers have to be involved.

You could say that we could invent a mechanism to allow sysadmins to rebuild with patched libraries, but then we'd still need to reinvent all of the versioning and other headaches of dynamic libraries.

I think dynamic libraries are kind of like microservices. Sure, they can break stuff, but they allow higher degrees of complexity to still be manageable.
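
A small sketch of the zlib point above, assuming the libz-sys crate (my choice of bindings, not the commenter's): the program reports whichever zlib it is actually loaded against, so a patched shared library shows up at run time without the application developer shipping anything.

    // Hedged sketch: assumes libz-sys (raw zlib bindings) is in Cargo.toml.
    // zlibVersion() reports the zlib the process actually loaded, so a distro
    // update to the shared library changes this output with no rebuild.
    use std::ffi::CStr;

    fn main() {
        let version = unsafe { CStr::from_ptr(libz_sys::zlibVersion()) };
        println!("decompressing with zlib {}", version.to_string_lossy());
    }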


Memory is terribly expensive and I have to fight all of the other developers/product folks/upper management for every byte in my environment (hundreds of thousands of servers).

I have no choice but to use DSOs for our Rust code.
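
For anyone curious what that looks like in practice, a minimal sketch (the flag is a standard rustc option; the deployment details are my assumption, not the parent's):

    // Minimal sketch of dynamically linking the Rust standard library.
    // Build with:  rustc -C prefer-dynamic main.rs
    // The resulting binary depends on the toolchain's libstd-<hash>.so instead
    // of carrying its own copy, trading the usual self-contained Rust binary
    // for the memory savings the parent is after.
    fn main() {
        println!("hello from a binary that shares libstd");
    }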


Dynamic linking also lets you update libraries for things like security issues; it's not just a memory thing. Kinda agree on the space thing too (plus much less chance of things like buffer overflows...).


FWIW: I think everything has its place, and everything has tradeoffs. I can definitely see a lot of usefulness for dynamic linking. The point you raise probably being the best current reason.

... but since I'm already playing devil's advocate :)

Dynamic linking also lets you update libraries ... and cause security issues simultaneously across all applications, increasing the number of possible attack vectors for exploiting that vulnerability.


Actually, it's a wash. If all we had was static linking, people would statically link the same common libraries. So you'd have to update multiple binaries for a single vulnerability.

I've seen this in my day job at Pivotal. The buildpacks team in NYC manages both the "rootfs"[0] of Cloud Foundry containers, as well as the buildpacks that run on them.

When a vulnerability in OpenSSL drops, they have to do two things. First, they release a new rootfs with the patched OpenSSL dynamic library. At this point the Ruby, Python, PHP, Staticfile, Binary and Golang buildpacks will be up to date.

Then they have to build and release a new NodeJS buildpack[1], because NodeJS statically links to OpenSSL.

Buildpacks can be updated independently of the underlying platform. The practical upshot is that anyone who keeps the NodeJS buildpack installed has a higher administrative burden than someone who uses the other buildpacks. The odds that the rootfs and the NodeJS buildpack get updated out of sync are higher, so security is weakened.

Dynamic linking is A Good Thing For Security.

[0] https://github.com/cloudfoundry/stacks

[1] https://github.com/cloudfoundry/nodejs-buildpack


This makes the false assumption that updating dynamic libraries never introduces any new bugs.


No, it makes a trade-off. Especially on the stable channel of Debian/RHEL/SLES, an update will most of the time fix more bugs than it introduces.


This was a much more powerful reason before things like docker became common, and methodologies adapted to provide updates for docker images, which for this purpose are functionally identical to a static binary.

At least I hope "methodologies adapted"; I don't use docker images, so that's an assumption on my part, but I feel it's a fairly safe bet.


Docker images don't have a nice way of updating without "rebuilding everything". There's a tool called zypper-docker that does allow you to update images, but there's no underlying support for rebasing (updating) in Docker. I was working on something like that for a while, but it's non-trivial to make it work properly.


Hmm, I assumed it would be something along the lines of the images being fairly static, and updated as a whole, and you just apply your configs and data, possibly through mount points.


I was responding to the comment that security updates to libraries make it harder to update static binaries. Docker has revived the problem, and there isn't a way of nicely updating images without rebuilding them (which in turn means you have to do a rollout of the new images). While it's not a big deal, it causes some issues that could be avoided.


Yes, but presumably you're running far fewer docker images than you have binaries that would be affected if you statically compiled everything. For example, in a statically compiled system, I assume an update to zlib will affect a lot more packages than the docker images you are running (on a server I admin, there are 3 binaries in /bin and 374 in /usr/bin that link to zlib, which would condense down to a smaller, but still likely quite large, set of OS packages).

It's easier in a dynamically linked system, where you can just replace the library, but it's not that much better for the sysadmin: if you want to make sure you are running the new code, you need to identify any running programs that have linked to zlib and restart them, since they still have the old code resident in memory.
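
A rough sketch of that last step on Linux (the library name and the /proc walk are illustrative, not a complete audit): flag processes that still have libz mapped, i.e. candidates for a restart after the shared library has been replaced.

    // Hedged sketch, Linux-specific: list processes that still map libz.so.
    use std::fs;

    fn main() -> std::io::Result<()> {
        for entry in fs::read_dir("/proc")? {
            let entry = entry?;
            let pid = entry.file_name().to_string_lossy().into_owned();
            // Only the numeric entries in /proc are processes.
            if !pid.chars().all(|c| c.is_ascii_digit()) {
                continue;
            }
            // /proc/<pid>/maps lists every file mapped into the process.
            if let Ok(maps) = fs::read_to_string(entry.path().join("maps")) {
                if maps.contains("libz.so") {
                    println!("pid {} still has libz.so mapped", pid);
                }
            }
        }
        Ok(())
    }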


Yes, we would. Not because it's 'premature optimization', but because it's an easy optimization.

"Do not prematurely optimize" is not a software design rule, it's a time-management rule. Dynamic linking has software design impacts.


> "Do not prematurely optimize" is not a software design rule, it's a time-management rule.

No, it's both. Optimization often affects the cost of later decisions, and the reason not to prematurely optimize is that it can easily take you to a local optimum which is not very optimal at all. This is a perfect example of that, as the GP comment points out. If memory had not been as constrained as it was when this trade-off became common, it might not have become prevalent. Static binaries are faster (the degree depending on a lot of factors), while dynamic binaries are smaller on disk and in memory, provided the shared libraries are already used elsewhere. Modern OS-level optimizations for forking and threading should make those considerations negligible.


Dynamic linking wasn't invented as a premature optimization though. And if it didn't exist today, it would still not be premature to invent it, because dynamic linking does not only concern how large and fast your program is, but also how it is interacted with.

So here is my point: optimization that can affect the relevant interfaces of your software is not premature because deciding on the interfaces your software exposes is not premature.


You are choosing to focus on the "premature optimization" wording, which is fair; it was said. I'm focusing on "as a community would we choose to reimplement and universally adopt this optimization?" (emphasis mine). I think it would be implemented, but I do not know of any evidence that makes me believe it would become universally adopted given modern resources.


What do you mean by 'universally adopted?' Available on all platforms? Default linking strategy? All code is dynamically linked?


I'm not sure exactly what was originally meant. I interpreted it as how dynamic linking is currently the norm: it is used in every mainstream OS, by most of the applications that run on them, and on all the major mobile platforms. If we had to make the choice right now, without the history of dynamic linking behind us, would we still choose to use it for the majority of platforms?


FWIW: You appropriately articulated how I meant 'universally adopted'.

Nit:

"Use it for the majority of platforms [and/or for the majority of applications]?"


Is it really easy? It brings with it many versioning problems.


> Is 1.3GB (or even 13GB) a lot on your hardware

On my phone or tablet, that is most definitely a lot.


A library update would require rebuilding every application that uses the library. A change to libc would require effectively reinstalling the OS.

At that point, an extra 1.3GB would just be icing on the download.


Dynamic linking certainly has no technical advantages like lower memory/disk usage or faster processing. Its main advantage, which has been cited before, is that it forces cohesion in the Linux community.

E.g. if the author of a program finds a problem in the dynamic library he or she is using, the problem is forced to be solved upstream, benefiting all users of the library. If static linking were the norm instead, it is much more likely that the author would just solve the problem for him or herself, and the solution would never reach the wider community.

In the best of worlds, we would have static linking everywhere but the "social contract" of dynamic linking would be enforced just as strongly.



