
Oh my, each app is going to be 20 mb bigger! This mattered 30 years ago, but now I would say we have a huge problem for end users with all of these "Package managers" and "dependency managers" getting tangled up because there are 5 versions of Perl needing 3 versions of Python and so on... I would be a much happier Linux user if I was able to drag and drop an exe. 100mb be damned



> Oh my, each app is going to be 20 mb bigger! This mattered 30 years ago

CPU caches aren't that big, so it still matters today, at least for desktop applications.


This seems easier in both Windows and OS X. On both, a native application gets its own directory, and will first look in there for any shared libraries it needs. It gives you a nice middle ground between static linking everything and dynamic linking everything that still avoids the "we have to choose between pervasive system-wide dependency hell and Dockerizing everything" situation that seems to exist on Linux.


True for Windows, but not true for macOS. See DYLD_FALLBACK_LIBRARY_PATH in https://www.unix.com/man-page/osx/1/dyld/ -- the dynamic linker will look for the library at the proposed absolute or macro-expanded path (specified in the load command), and failing that look for the leaf name under a few fallback paths:

$(HOME)/lib:/usr/local/lib:/lib:/usr/lib

You'll notice that those fallback paths do not include any location relative to the application. It's actually pretty difficult to get the object code to look up libraries relative to its own directory. Explanation here:

https://birchlabs.co.uk/blog/alex/juicysfplugin/synth/cpp/20...


That's not quite correct.

The program is built with LD_RUNPATH_SEARCH_PATHS, which bakes a list of search paths into the binary. This can include relative paths as well as @loader_path and @executable_path, e.g. a plugin bundled with an app can specify @executable_path/../../Frameworks to reference the parent bundle's Frameworks directory. Libraries can also add search paths to help locate their dependencies; @loader_path expands relative to the library that actually has the dependency.

Any linked library whose DYLD name starts with @rpath/ will be searched for in those locations.

At build time the linker reads each library's DYLD name and bakes that name into the final binary. Making sure all of your non-system dependencies use @rpath names relative to your app bundle is what makes it a "relocatable" application.
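
A minimal command-line sketch of that flow (libfoo and the Frameworks layout here are hypothetical names, not anything from the thread):

    # give the library an @rpath-relative install name (its "DYLD name")
    clang -dynamiclib foo.c -o libfoo.dylib \
        -install_name @rpath/libfoo.dylib

    # bake a runtime search path into the executable so @rpath/
    # resolves relative to the app bundle
    clang main.c -o main -L. -lfoo \
        -Wl,-rpath,@executable_path/../Frameworks

    # verify: otool -L prints the baked install name,
    # otool -l shows the LC_RPATH load command
    otool -L main
    otool -l main | grep -A2 LC_RPATH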


Out of curiosity, what stopped you from setting the -install_name flag when you built the libraries you are bundling with your juicysfplugin plugin? That's the standard mechanism for building app-relative libraries and wasn't listed at all under "Alternatives to manually rewriting dynamic links"


@rpath is pretty standard. It works on both macOS and Linux, although many of the details are different.
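
On the Linux side the analogue is a runpath containing $ORIGIN, which the loader expands to the directory holding the binary itself. A hedged sketch (libfoo is again a hypothetical name):

    # bake a runpath relative to the executable's own location;
    # single quotes keep the shell from expanding $ORIGIN
    cc main.c -o main -Llib -lfoo -Wl,-rpath,'$ORIGIN/lib'

    # verify the baked entry
    readelf -d main | grep -i 'rpath\|runpath'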


"It's actually pretty difficult to get the object code to lookup libraries relative to its own directory."

One could just use a wrapper script that sets DYLD_FALLBACK_LIBRARY_PATH relative to the binary's directory and then runs it.
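
A sketch of such a wrapper, assuming the real binary sits next to the script as myapp.bin with its libraries in a lib/ subdirectory (both names hypothetical):

    #!/bin/sh
    # resolve the directory this script lives in
    HERE="$(cd "$(dirname "$0")" && pwd)"
    # point dyld's fallback search at the bundled libraries
    export DYLD_FALLBACK_LIBRARY_PATH="$HERE/lib"
    exec "$HERE/myapp.bin" "$@"

One caveat: SIP and the hardened runtime strip DYLD_* variables in some configurations, so this doesn't work for every binary.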


That's what AppImage is for, but if every single thing in /bin/ was 100mb you would have an OS much larger than Windows.


You're thinking of "application" as a single binary, which is unnecessary. You wrap up all of coreutils into a single environment and launch the shell.


If you want the space saving characteristics of shared libraries but your OS simply doesn't support that, you could probably do the busybox trick of rolling a bunch of binaries into one and changing which codepath gets executed based on which name the monolith binary was invoked with. Of course there are obvious downsides to this approach.
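
A toy illustration of that dispatch-on-invocation-name idea, using a shell script plus symlinks in place of a compiled monolith:

    #!/bin/sh
    # multicall: pick a codepath based on the name we were invoked as
    case "$(basename "$0")" in
        true)  exit 0 ;;
        false) exit 1 ;;
        *)     echo "unknown applet: $(basename "$0")" >&2; exit 127 ;;
    esac

    # installed once, exposed under many names:
    #   ln -s multicall true
    #   ln -s multicall false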


Doubt that, looking at my list of applications - I have 124. So, 100mb * 124 = 12.4 gb and I have half a terabyte of storage... Windows 10 requires 16 gb of storage. Heck most phones have 64 gb these days.

We need to stop living in the past - this very issue has probably contributed to the scourge of Electron crap being thrown at our faces: since it doesn't suffer from dependency hell, they just throw a bunch of JavaScript into an encapsulated instance of Chrome and call it a day.


I have 971 binaries in /usr/bin alone. If they were 100mb each, I'd be looking at 94GB of space on a 250GB laptop ssd. 94GB that I'd have to re-download every time there's a security patch to a common library (e.g. libc). I'll keep living in the past and use shared libraries until download speeds and disk space increase by a couple orders of magnitude.


1. most files in /usr/bin are much smaller than 100mb when statically linked; 100mb is for e.g. gui applications

2. Even in an absurd world where each coreutils executable required 100mb of libraries, a busybox-like delivery would already shave ~10GB off of that. Other improvements can be made: binary deltas for security updates, performing the final link step at package install time, probably others.

3. libc updates have introduced security issues; shared library updates in general break things. I can take a statically linked executable from 1998 and run it today.

Lastly, this is totally unrelated to the question because 971 statically linked command line applications will be well under 1GB, but a 250GB drive? The last time I had a drive that small was a 100GB drive in a PowerBook G4. Linux (and p9 from TFA) are OSes used primarily by developers (at least until the mythical year of Linux on the desktop). Springing $200 for a 512GB SSD upgrade seems well worth it if you are being paid a developer's salary anywhere in the western world.


Too late to edit, but gnu coreutils statically linked is 8.8MB total for 105 executables versus 5.5MB for Ubuntu's dynamically linked version.

The largest executable is ptx at 272kb vs 72kb for the Ubuntu binary.

For the smallest, false is 48k statically linked vs 32k for the Ubuntu binary.

If all 970 executables in /usr/bin average out to 100kb of extra space, that's less than 100MB overhead.

[edit]

Stripping the binaries decreases the total size to about 7MB; byte sizes are 34312 vs 30824 for a stripped false binary, and 251752 vs 71928 for ptx.

For download times, a tar.xz is a good measurement and it's 819k for the 105 statically linked files or 1015k for the full installation of coreutils including the info files and manpages.

[edit2]

Some proponents of static linking talk about performance. I think it's a negligible win, but as I have it handy I thought I'd measure:

10000 runs of a dynamically linked "false":

    real    0m4.650s
    user    0m3.602s
    sys     0m1.391s

10000 runs of a statically linked "false":

    real    0m3.025s
    user    0m2.047s
    sys     0m1.287s
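
(For anyone wanting to reproduce this kind of measurement, one plausible harness, assuming a ./false binary in the current directory, is:

    time sh -c 'for i in $(seq 1 10000); do ./false; done'

The loop and shell overhead land in both runs, so only the relative difference is meaningful.)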


>For the smallest, false is 48k statically linked vs 32k for the Ubuntu binary.

Lol, you have to do some kind of stripping or GC sections or whatnot for this to be a fair comparison. A proper version of false is 508 bytes on my machine.


The help message for gnu coreutils false is 613 bytes...


Think of the time savings though of not having to apt-get/configure/make/google-search/repeat for 971 binaries! :P


Clearly most of those 971 binaries wouldn't be that large, as they only use a subset of the shared libraries' functionality, and everything else could be left out.
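
Static toolchains can already approximate this. Archive (.a) linking pulls in only the object files a binary references, and section-level garbage collection refines that to individual functions; a hedged sketch with GNU flags:

    # compile each function/data item into its own section,
    # then let the linker discard whatever is unreferenced
    cc -Os -static -ffunction-sections -fdata-sections \
        main.c -o main -Wl,--gc-sections
    strip main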



