> then you can fix it centrally if you use shared libraries, as opposed to finding every application that uses the library and updating each separately.
This comes up a lot, but how often do you end up in a scenario where there's a critical security hole and you _can't_ patch it because one program somewhere is incompatible with the new version? Maybe even a program that isn't security critical.
Plus what you mentioned about testing. If you update each application individually, you can do it piecemeal. You can test your critical security software and roll out a new version immediately, then test the less critical software afterwards. In some systems you can also do partial testing now and full testing later, because you're confident you can roll back just one or two pieces of software if they break.
It's the same amount of work, but you don't have to wait for literally the entire system to be stable before you start rolling out patches.
I don't think shared libraries are 100% a bad idea; there are instances where they make sense, and centralized interfaces/dependencies have some advantages, particularly for very core, obviously shared, extremely stable interfaces like the ones Linus is talking about here. But these aren't strict advantages, and in many instances I suspect that central fixes can actually be downsides. You don't want to be waiting on your entire system to get tested before you roll out a security fix for a dependency in your web server.
> This comes up a lot, but how often do you end up in a scenario where there's a critical security hole and you _can't_ patch it because one program somewhere is incompatible with the new version? Maybe even a program that isn't security critical?
Talking about user use cases: every time I play a game on Steam. At the very least there are GnuTLS versioning problems. That's why Steam packages its own library system containing multiple versions of the same library -- which renders all of the benefits of shared libraries completely null.
One day, game developers will package static binaries, the compiler will be able to rip out all of the functions they don't use, and I won't have 5-20 copies of the same library just sitting around on my system -- worse if you have multiple Steam installs on the same /home partition because you're a distro hopper.
There's a big difference between production installs and personal or hobbyist systems. Think about it: there are lots of businesses running COBOL on mainframes, machine shops running on Windows XP, and big companies still running Java 8 and Python 2. When you have a system that is basically frozen, you end up in a catastrophic situation where to upgrade X you need to upgrade Y, which requires upgrading Z, and so on. You'd be surprised what even big-name companies are running in their datacenters: stuff that has to work, is expensive to upgrade, and by virtue of being expensive to upgrade ends up not being upgraded, to the point where any upgrade becomes a breaking change. And at the rate technology changes, even a five-year-old working system quickly becomes hopelessly out of date. Now imagine a 30-year-old system in some telco.
These are such different use cases that I think completely different standards and processes as well as build systems are going to become the norm for big critical infrastructure versus what is running on your favorite laptop.
Well, not really. The compiler is able to optimize the contents of the library and integrate it with the program. Some functions will just be inlined, which means those functions won't exist in the same form after other optimizations are applied (for example, the square root function has specific object code, but after inlining the compiler can use the call-site context to shrink and transform it further).
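A minimal C sketch of the idea, assuming GCC/Clang with optimization enabled (the `fast_sqrt` helper is a hypothetical stand-in for a library routine):

```c
/* Hypothetical stand-in for a library routine whose body is visible to the
 * compiler at the call site (header-only, or exposed via LTO).
 * Build with something like: gcc -O2 example.c */
#include <stdio.h>

static inline double fast_sqrt(double x) {
    return __builtin_sqrt(x);  /* stand-in for a real sqrt implementation */
}

double hypotenuse(double a, double b) {
    /* Once fast_sqrt is inlined here, the compiler sees the whole context:
     * it can lower this to a single sqrt instruction, and a call with
     * constant arguments can be folded away entirely at compile time. */
    return fast_sqrt(a * a + b * b);
}

int main(void) {
    printf("%f\n", hypotenuse(3.0, 4.0));  /* likely folded to 5.0 at -O2 */
    return 0;
}
```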
Yes, but LTO doesn't apply across shared object libraries. Suppose I write a video game that uses DirectX for graphics, but I don't use the DirectX Raytracing feature at all. Because of DLL hell, I'm going to be shipping my own version of the DirectX libraries, ones that I know my video game is compatible with. Those are going to be complete DirectX libraries, including the Raytracing component, even though I don't use it at all in my game. No amount of LTO can remove it, because theoretically that same library could be used by other programs.
On the other hand, if I am static linking, then there are no other programs that could use the static library. (Or, rather, if they do, they have their own embedded copy.) The LTO is free to remove any functions that I don't need, reducing the total amount that I need to ship.
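Roughly what that looks like in practice; a sketch assuming a GCC-style toolchain, with hypothetical file and function names:

```c
/* mylib.c -- stands in for the vendored library: */
int used_feature(int x)     { return x * 2; }
int unused_raytracer(int x) { return x * x * x; }  /* never called by the game */

/* game.c -- the application: */
int used_feature(int x);
int main(void) { return used_feature(21) == 42 ? 0 : 1; }

/* Static build with link-time optimization:
 *   gcc -O2 -flto -c mylib.c game.c
 *   gcc -O2 -flto -o game game.o mylib.o
 * Nothing outside this link can reference the static copy, so LTO is free
 * to drop unused_raytracer from the final binary. A shared libmylib.so has
 * to keep it, because some other program might still call it. */
```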
Good point (and shows that I am not a video game developer). I had tried to pick DirectX as something that would follow fao_'s example of game developers. The point still holds in the general case, though as you pointed out, not in the case of DirectX in particular.
Even without LTO, the linker will discard object files that aren't used (on Linux a static library is just an ar archive of object files). It's just a different level of granularity.
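A quick sketch of that granularity (hypothetical names, GNU binutils assumed):

```c
/* foo.c */ int foo(void) { return 1; }
/* bar.c */ int bar(void) { return 2; }   /* never referenced by the app */
/* app.c */ int foo(void); int main(void) { return foo(); }

/* Build:
 *   gcc -c foo.c bar.c
 *   ar rcs libstuff.a foo.o bar.o
 *   gcc -o app app.c libstuff.a
 * The linker only extracts foo.o from the archive, because only foo() is
 * referenced; bar.o never ends up in the binary. The unit being discarded
 * is the whole object file rather than the individual function (unless you
 * also build with -ffunction-sections and link with -Wl,--gc-sections). */
```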
I doubt that you did, since OpenGL implementations are hardware-specific. Perhaps you mean utility libraries built on top of OpenGL such as GLEW or GLUT.
For some libraries (OpenGL, Vulkan, ALSA, ...) the shared library is the lowest stable cross-hardware interface there is, so statically linking the library makes no sense.
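A small sketch of why that interface stays dynamic: the stable name is resolved at runtime and dispatches to whatever driver the machine actually has. This assumes a typical Linux setup where libvulkan.so.1 is the Vulkan loader; treat the specifics as illustrative:

```c
/* Resolve a hardware-facing API at runtime instead of baking in a copy.
 * Build with: gcc demo.c -ldl */
#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    /* libvulkan.so.1 is the Vulkan loader; the implementation it dispatches
     * to depends on the driver installed on this particular machine. */
    void *loader = dlopen("libvulkan.so.1", RTLD_NOW);
    if (!loader) {
        fprintf(stderr, "no Vulkan loader found: %s\n", dlerror());
        return 1;
    }
    /* vkGetInstanceProcAddr is the entry point applications query; the rest
     * of the API is looked up through it at runtime. */
    void *entry = dlsym(loader, "vkGetInstanceProcAddr");
    printf("vkGetInstanceProcAddr at %p\n", entry);
    dlclose(loader);
    return 0;
}
```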
> This comes up a lot, but how often do you end up in a scenario where there's a critical security hole and you _can't_ patch it because one program somewhere is incompatible with the new version? Maybe even a program that isn't security critical.
That's not the point. The point is having to find and patch multiple copies of a library in case of a vulnerability, instead of just one.
Giving up the policy of enforcing shared libraries would just make the work of security teams much harder.