> Think about it: If you were to force absolutely everything to be statically linked, your KDE app would have to include the ENTIRE KDE library suite, as well as the QT libraries it's based on, as well as the X window libraries those are based on, etc etc.
No, because we've had working LTO for a long time: every KDE app would contain only the exact code it needs, down to the individual member function. Sure, all KDE apps would have a copy of QObject / QString / QWhatever, but most C++ code nowadays is in headers anyway.
> You'd quickly end up with a calculator app that's hundreds of megabytes.
The DAW I'm working on (340 kloc), which can be built statically linked against Qt, LLVM, libclang, ffmpeg and a few others, is a grand total of 130 megabytes in that case.
It's actually heavier when distributed as a set of dynamically linked components (unless shipped by Linux distros, of course; most of that size is useless LLVM code that I haven't found out how to disable yet):
counting only the dependencies I have to ship, it would be 176 megabytes, plus 17 megabytes for the actual software. I'd argue that in that case static linking actually takes less disk space overall, because most people don't have LLVM installed on their machines and don't need it.
More code is needed than you might think; the transitive usage graph is surprisingly dense. FYI, I'm one of the original Qt developers, and I remember what it was like to try to slim down Qt for embedded use.
Java is like that too. The smallest possible program (a single line that just exits) pulls in things like java.time.chrono.JapaneseChronology. Because argv is a String array, and building the String array requires calling a function or two that can throw exceptions, which requires the Throwable class, which has a static initialiser that requires... the chain is long, but at the end something has a SomethingChronology member, so AbstractChronology.initCache() is called, and it mentions JapaneseChronology.
A friend of mine tells a story about how he accidentally FUBARed the tests, and then discovered that running one unit test gave 50% line coverage in Rails.
We have big libraries nowadays, and we use them...
This is one of the things I like most about HN: you read a random reply to a comment and discover it's from one of the original authors of Qt. Thanks for the nice work!
In my experience, regular LTO is by far the biggest bottleneck in the build process for large projects (think Chromium-sized). It's partly why so much is being invested in parallel linking and in techniques like ThinLTO, which all aim to reduce link times: LTO linking often takes around 60-70% of the total build time, despite heavy use of C++ (although with no exceptions or RTTI).
Unless you have build servers capable of rebuilding all of Qt, WebKit, etc. and performing an LTO link (which pulls in all build artifacts as bitcode archives/objects) in a reasonable amount of time (a big reason build labs exist: it takes a long time), LTO is not likely to be suitable. It's an extremely expensive optimization that essentially defers all real compilation to the link step, at which point the linker calls back into libLLVM/libLTO and has them do all the heavy lifting.
At the very least you need a workstation-grade machine to do that kind of thing on a regular basis; you really can't expect everyone to have that. And there's a reason libLLVM.so is usually dynamically linked: it cuts a massive amount of build time, which is especially useful while developing, and it's a middle ground between building all the LLVM and Clang libraries as shared objects and waiting for the various LLVM modules to be statically linked into every LLVM toolchain binary (which tends to make the toolchain much, much bigger). The build cache with a shared libLLVM.so for Clang/LLVM/LLD builds is around 6-7 GB (Asserts/Tests builds); statically linking the LLVM modules blows that up to 20 GB. God forbid you actually do a full debug build with full debug information on top of that.
That's a terrible argument against dynamic linking. That's not to say static linking is bad; in fact it's recently been making a comeback for exactly that reason, LTO and next-generation optimizers. But saying LTO makes static linking viable for everyone, including consumers, is somewhat far-fetched.
> At the very least you need a workstation grade machine to be able to do that kind of stuff on regular basis, you really can't expect everyone to have that. And there's a reason libLLVM.so is usually dynamically linked, it cuts a massive amount of time spent on builds, which is especially useful while developing
Of course I'm not arguing for doing LTO while developing; it seemed clear to me that the context of the whole thread is what gets released to users.