
Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the Mac first, which was onerous because of all the work involved in abstracting away system-specific APIs, but once that was done, I thought Linux would be a lesser task, and we hired a great Linux guy to help with that.

Turns out, while getting it running on Linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, suffers from breakages from time to time, and alienates users not on your list of supported distros. The second approach, using LSB, was worse, as it specifies ancient libraries and doesn't handle things like OpenGL properly.

End result: management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.




> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?

I build on a distro with an old enough glibc following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8, which is equivalent to centos:8 and good enough for Debian stable and anything more recent than that; last year I was still on centos:7), use dlopen as much as possible instead of "normal" linking, and then it works on the more recent ones without issues.
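
A minimal sketch of the dlopen side of that, assuming a hypothetical optional dependency called libfoo (the library and function names are made up for illustration):

  /* Load an optional dependency at runtime instead of linking against it,
     so a missing or incompatible library degrades gracefully rather than
     preventing the binary from starting. Build with: cc app.c -ldl */
  #include <dlfcn.h>
  #include <stdio.h>

  typedef int (*foo_init_fn)(void);

  int main(void)
  {
      /* dlopen creates no DT_NEEDED entry, so the dynamic linker won't
         refuse to start the program when libfoo is absent. */
      void *handle = dlopen("libfoo.so.1", RTLD_NOW | RTLD_LOCAL);
      if (!handle) {
          fprintf(stderr, "optional feature disabled: %s\n", dlerror());
          return 0;
      }

      foo_init_fn foo_init = (foo_init_fn)dlsym(handle, "foo_init");
      if (foo_init)
          foo_init();

      dlclose(handle);
      return 0;
  }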


I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.
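
As a sketch of what "defensive" can look like in practice: check the environment up front and fail with a clear message instead of crashing obscurely later. (The 3.10 kernel floor below is only an illustrative assumption, not what any particular product required.)

  /* Verify at startup that the running kernel is at least the version the
     binary was tested against. uname(2) is standard POSIX. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <sys/utsname.h>

  int main(void)
  {
      struct utsname u;
      if (uname(&u) != 0)
          return EXIT_FAILURE;

      int major = 0, minor = 0;
      sscanf(u.release, "%d.%d", &major, &minor); /* e.g. "5.15.0-91-generic" */
      if (major < 3 || (major == 3 && minor < 10)) {
          fprintf(stderr, "kernel >= 3.10 required, running %s\n", u.release);
          return EXIT_FAILURE;
      }
      printf("kernel %s is new enough\n", u.release);
      return EXIT_SUCCESS;
  }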


That's the trick. AppImage has a pretty good list of other best practices too: https://docs.appimage.org/reference/best-practices.html (applies even if you don't use AppImages).


If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:

https://github.com/wheybags/glibc_version_header

It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
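
The underlying trick those headers automate looks roughly like this (a sketch, not the project's actual headers; memcpy is the classic example because it gained a new symbol version in glibc 2.14):

  /* Force the linker to bind memcpy to the old versioned symbol even when
     building on a newer glibc, keeping the binary loadable on older
     systems. GLIBC_2.2.5 is the x86-64 baseline. Build with
     -fno-builtin-memcpy so the call isn't inlined away. */
  #include <string.h>

  __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

  int main(void)
  {
      char dst[16];
      memcpy(dst, "hello", 6); /* resolves to memcpy@GLIBC_2.2.5 */
      return 0;
  }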


glibc 2.34 has a hard break where you cannot compile with 2.34 and have it work with older glibc versions even if you use those version headers. It will always link __libc_start_main@GLIBC_2.34 (it's some kind of new security hardening measure, see https://sourceware.org/bugzilla/show_bug.cgi?id=23323).

Since additionally you also need to build all your dependencies with this same trick, including say libstdc++, it's really easiest to take GP's advice and build in a container with the old library versions. And nothing beats being able to actually test it on the old system.


> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?

When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.

(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that, and I don't think I saw this issue.)


FWIW, these days Valve tries to solve the same problems with their Steam Runtime[0][1]. It still doesn't seem easy, but it looks like an almost workable solution.

[0] https://github.com/ValveSoftware/steam-runtime

[1] https://archive.fosdem.org/2020/schedule/event/containers_st...


A multi-billion-dollar company with massive investments in Linux only managing an almost workable solution means everyone else is screwed.


Nope. Valve has to deal with whatever binaries clueless developers uploaded over the years, which they can't update, whereas you only need to learn how to make your one binary portable. Entirely different issues.


Well, or we could remember the idea of Linux… "IP reasons" shouldn't be an obstacle in the first place… lol


Exactly. I'm so tired of this excuse. They need to fix their own intellectual property problems, not bend the entire Linux ecosystem to their world view.


I feel this is the chief reason why the fabled "Year of the Linux Desktop" will never be a thing.

Microsoft expects Windows to be a means to an end: You run Windows to use your computer.

Linux neckbeards expect Linux to be the end to a means: You use your computer to run Linux.

If Linux is fragmented so badly that it is completely incompatible with the way software development and support work in the real world, the problem is Linux, because Windows, Mac/iOS, and Android (incidentally a flavor of Linux) can all deal with it.

Of course, if you're not interested at all in mainstream desktop Linux adoption and are content hacking away at FOSS code while grumbling about the evils of capitalism and proprietary code, then more power to you.


You can absolutely choose Linux for strictly practical reasons - if you do serverside or embedded programming choosing anything else is counterproductive. But for desktop programming Linux is a struggle.


And yet, successful software is developed and run on Linux.


I've been doing productive distributed computing for about 25 years, most of them not spent on GNU/Linux.

Nowadays, with cloud platforms making the underlying OS transparent, even less so.


You run GNU/Linux because you believe in free software. It's not surprising that you run into problems when developing nonfree software for it.


No, you run GNU/Linux for the freedom. Everyone else* runs Linux because it’s cheap, it’s fast, and it works with all their GoRust ElasticDockernetes gizmos.


We want things on our terms, not theirs. They're the ones who need to learn to do things our way, not the other way around.

Are you seriously telling me these billion dollar corporations can't manage to get in contact with some popular distribution's maintainers and work something out?


Why on earth would they ever do that? Nobody in the entire world, to several approximations, would ever know or care. I doubt any of those companies are interested in writing off the wasted dev cost for ideological purity.


>Are you seriously telling me these billion dollar corporations can't manage to get in contact with some popular distribution's maintainers and work something out?

Nope, because those Linux distros aren't making them any worthwhile money while sending over far too many worthless end-user complaints.

If Linux neckbeards truly want to realize the Year of the Linux Desktop, they have to accept how the rest of the world at large works and play by those rules. It's how Android obtained mainstream success despite being Linux, and it's something any other Linux distro can do if they ditched the neckbeard pride.

Or to put it another way: The vast majority of computer users don't care about free-as-in-freedom or open-as-in-auditable source code. The only thing users care about is getting shit done. All other operating systems, including Android, understand and respect this. It's only Linux that chooses to be either willingly naive or in denial.


>If Linux neckbeards truly want to realize the Year of the Linux Desktop, they have to accept how the rest of the world at large works and play by those rules.

Firstly, name-calling does not help in getting your point across.

The opposite argument could easily be made. What would be the point of the "year of the Linux desktop" if Linux is not substantially different from other OSes in the way it treats its users? That's why nobody is celebrating the "era of the Linux palmtop" with Android.

Linux makes different trade-offs from those made by the commercial OSes. The diversity is valuable. That's not to say there isn't room for improvement, but I would be pretty bummed if Linux lost what makes it different.


Linux is more than likely suited for a different sector of the computing market than the desktop, considering its endless failures to break through (Android aside) and the unchanging and fundamentally incompatible-with-desktop ideologies held by the neckbeards who really run the whole show.

The point I want to convey is not so much that Linux should change (though as a desktop user I certainly wouldn't mind), but that anyone within Linux who complains about How The World Is Wrong(tm) needs to wake up from their freedom-infused obsession and smell reality. The rest of the desktop world functions fine, so if it's only Linux that Just Can't(tm) then the problem is Linux.


> Linux is more than likely suited for a different sector of the computing market than the desktop, considering its endless failures to break through

Linux has suited me very well as a desktop for 10+ years. (I'm aware it is probably not suitable for every person or use case.)

To maintain the health of the Linux desktop(s), we do need to be open to new people and ideas from outside. But Linux is, and should remain, different from proprietary OSes. Otherwise, what is the point?


Being different for difference's sake doesn't mean much if it's not useful.

It's like saying a certain screwdriver must be made with a smooth ball point to be different from other screwdrivers. Never mind that nobody can figure out a use for such a screwdriver and everyone happily (or begrudgingly) goes back to using flathead, Phillips, and Torx drivers.

Most people don't care about free-as-in-freedom software or open-as-in-auditable source code; they just want to run Office and Photoshop and maybe play some snazzy games. A tool must first and foremost be useful in order to achieve mainstream appeal, and Linux has consistently failed to do so because it is flat out not useful for most people.

If achieving the "Year of the Linux Desktop" is a real goal of the Linux community at large, some fundamental changes in ideology must happen:

* Acceptance or at least tolerance of proprietary source code (e.g. Nvidia drivers). Most users don't care what philosophy of code they're running; they just want their computer to work and be useful.

* More emphasis on GUIs and a refined user experience. Neckbeards might only want the CLI and consider anything else beneath their notice, but most users want a good GUI like in any other widely accepted operating system.

* Accept that there can be such a thing as too much choice. Developers don't want to look after their code on five dozen flavors of Linux; some consolidation and stabilization of distros and runtime environments are a hard requirement for beating the chicken-or-egg problem.

* Less hostility to new users and outsiders. The elitism within the Linux community at large is stupid. Being a CLI wizard doesn't give anyone a higher horse to ride on, and it's not going to attract new users anyway.

* Less marketing emphasis on liberal and FOSS ideologies. They're all fine concepts to have, but most users care about free as in free beer, not free as in freedom. Cater to what users want in order to win users over.

Linux can be many great things, but its ideologies and philosophies hold it back in a wider world that values usability and practicality more than freedoms and openness.

If "Year of the Linux Desktop" is not a goal, then Linux can continue as it currently is. The enterprise world will be Linux's stronghold for the foreseeable future as before, and the desktop world will continue moving on with Windows/Mac/iOS/Android like always for better or worse. But Linux can't then complain the desktop world keeps disregarding them, because priorities are different and Linux chose to be incompatible.


Desktop Linux is useful to lots of people. It is not a ball-pointed screwdriver.

I'm just one Linux user. But my impression is that "the year of the Linux desktop" is a joke even in the Linux community. I don't really care about world domination for Linux (although that's almost what we have on the server side).

I want a healthy enough market share for software availability to be good. For me, it is pretty good. I had to spin up a Windows VM for the first time in about 5 years last weekend to set up my doorbell with the manufacturer's utility. Otherwise there are more than enough games that run well on Linux to occupy the time I have (which isn't a lot). All the basics are covered and everything else is on the web. Dev tooling is second to none.

I want a useful, powerful, general purpose computing environment where I can do what I want without unnecessary obstacles, whether or not what I want is aligned with the business model of a vendor. The philosophy is abstract, but in my view it has a real impact.

No software is "free as in beer" to create. I really don't know, but I wonder if marketing it as such would be a bad idea. Free software projects often need contributors as much as users. Companies with bigger marketing budgets want their stuff to be valued.

Otherwise, I agree with most of what you said about onboarding new users and less technical people, as well as fragmentation and ABI stability. Some of it is a necessary trade-off: if people have the freedom to make changes to their software, you're going to end up with more diversity compared to ecosystems where that's not allowed.

Nvidia has announced the open-sourcing of their driver for Linux compatibility. Perhaps the GPL die-hards were right to hold firm on that...

The term "neckbeard" is an unkind stereotype. Please reconsider your use of it.


I know there are many reasonable and respectable people in the Linux community, and a lot of my friends are among them, but they are sadly not the ones at the forefront of Linux development and marketing at large.

I have no plans on reconsidering calling certain parts of the Linux community neckbeards. If they want to force their ideologies down other peoples' throats ("you fix your intellectual property problems", "proprietary code is evil", "use the terminal", etc.), I'll call them as I see fit. Respect is earned, as they say, and they certainly aren't respecting the users or the world at large.

That aside, it objectively can't be denied that Linux doesn't satisfy most users' needs. If it did, we wouldn't be having this discussion, nor would Windows have 80-90% of the desktop market. You're fortunate to be someone whose needs desktop Linux can sufficiently meet, but not everyone is like that.

Windows/Mac/iOS/Android, for all their faults, can satisfy the needs and desires of almost everyone; it's something Linux critically fails at and needs to improve on regardless of worldly aspirations.

Nvidia open sourcing large portions of their driver was a welcome turn of events, though like you I'm not sure if credit for it should be given to the anti-proprietary diehards... :V


Back to the insults again.

Linux is working fine for those who want to use it as such. No neckbeard owes the 'fuckstain' rest of the world a goddamn thing.


Actually, the only part of Linux that Android has is the kernel; none of the userspace APIs are exposed to app developers.

Any access to Linux subsystems or syscalls on Android works as a matter of luck on specific devices, or in areas where Google isn't yet enforcing Android's userspace security rules.


> It's how Android obtained mainstream success despite being Linux, and it's something any other Linux distro can do if they ditched the neckbeard pride.

They can do it, but it's not guaranteed (and in fact unlikely) to lead to mainstream success. Android had a multi-billion-dollar corporation behind it, and likely needed that to succeed.

Another Linux distro that freezes its ABI and supports commercial software distribution likely would be irrelevant until it got hundreds of millions of users, and getting there is a big challenge.


FSF: you have to ship your code as GPL if you want to interface properly with the Linux kernel

Software companies: OK then we won't ship our software for Linux

Linux users: (surprised Pikachu)

Like, what exactly is the "excuse" here? The FSF set a deliberately onerous license on the assumption that you will either join (because you need to use the GPL codebase) or do it yourself... and companies either do it themselves, or don't release software for the platform. Or they do and it breaks.

Same story as with ZFS basically. The license that is necessary for commercial games/software to run in the necessary ways (that require interaction with the kernel) is incompatible with the license that FSF has chosen. And you can't build an anticheat without poking at the kernel, otherwise it's trivial to hide from it.

Anti-cheat is fundamentally a problem of controlling the code that can run on an end-user's system - looking at memory or packets to scrape out useful data that the game is not presenting to the user - and that's functionally incompatible with a free software system in the first place. And attempting to do so requires interacting with the kernel, and if you're not in the kernel tree then you're chasing the kernel ABI. And anti-cheat rootkits will never be in the kernel tree anyway, period.

Where is the excuse? These are just incompatible products at every level, both conceptually and legally/licensing. That's by design, that's what GPL is intended to do as a copyleft license.

People seem to have this weird idealistic view of GPL, that it's about "protecting user freedoms" and that just makes all the problems of the world go away and everybody happily falls into line, but the mechanism by which it works is by causing problems and removing interoperability for developers of third-party software with incompatible licenses. If you don't do GPL, you can't play with the GPL codebase, and if your kernel is GPL and you need to do kernel things, then as they say - "wow, sucks to be you". But that's working as intended: GPL is a license which is intended to cause breakage and hinder developer freedoms, strategically, in the interests of greater end-user freedom in the long term.

If you just want open-source, the linux kernel should have been MIT/BSD licensed and it wouldn't have been a problem. But GPL isn't about open-source, it's about pushing back on copyright licensing as an ideology.


FWIW, one of the things that's been happening in FreeBSD for the past couple of years is linuxkpi: essentially an implementation of various APIs that the Linux kernel provides, implemented as wrappers around FreeBSD's native kernel APIs. This is being used for graphics drivers; for Intel and ATI, FreeBSD uses the drivers from Linux kernel... 5.10, I believe. Linuxkpi, while not ideal, makes maintaining them practical, compared to gazillions of patch collisions on every merge from upstream if they were ported in a traditional way. The same is happening with WiFi drivers, and it's quite obvious that it will become more common.

Which is to say:

FreeBSD users: *wave*


Stop damning the GPLv2 over "derived works". Given that the main author, the project leads, and the supporting developers and stakeholders on the Linux kernel project have already defined in extreme detail what they expect the phrase "derived work" to mean, given past court cases, what is and isn't expected to be allowed under the license and its scope is now a resolved issue. Unless you plan to litigate this in a territory in bad faith.

The ZFS situation is a completely different kettle of fish (as is past bad behaviour by Nvidia/VMware) and doesn't need to be dragged in via a license discussion on an ABI thread.

The kernel keeps an evolving ABI, as expected. This is both a pro and a con for Linux, but frankly it's so rare to hit an incompatibility at the kernel layer that it's as good as stable. More things typically break elsewhere, many times over, before this is an issue; the same is true on macOS or Win1x, for that matter...

As for anti-cheat, what that's even got to do with Linux's API situation is beyond me.


.NET 5+ is my choice as an SME facing this challenge. I run the same code across every device and only support what MS supports. These days you could likely redo this with a webview and WASM... let the webview handle the graphics abstraction for you!


Hybrid Blazor, then.


Was static linking not enough?

I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

It is possible, though, that whatever you statically link doesn't work with the running kernel. And there are a lot of variants out there; every distribution has its own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)
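
Where full static linking is an option, it does sidestep the glibc-vs-musl mismatch entirely, because nothing is resolved at load time. A minimal sketch (musl-gcc is musl's wrapper around the system compiler):

  /* hello.c: build with: musl-gcc -static -o hello hello.c
     `file hello` should then report "statically linked", and the result
     runs the same on a glibc host and in a FROM alpine (or scratch)
     container. */
  #include <stdio.h>

  int main(void)
  {
      puts("hello from a fully static binary");
      return 0;
  }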


> I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

Could it work with gcompat? Alpine has it in the community repo.

https://git.adelielinux.org/adelie/gcompat


gcompat is roughly the "yeah the plugs look the same so just stick the 120 V device into the 240 V socket" approach to libc compatibility.


That's running it directly on musl. gcompat is more like a passive adapter, which works except you need to know whether the device you're plugging in actually supports 240 V; most stuff does nowadays, but when it doesn't, it explodes horribly.


Static linking against musl only makes sense for relatively simple command-line tools. As soon as 'system DLLs' like X11 or GL are involved, it's back to 'DLLs all the way down'.


> Was static linking not enough?

It is a GPL violation when non-GPL software does it.


glibc is LGPL, not GPL, so it wouldn't be a violation as long as you provided its source code and the ability for the user to replace it, for example by providing compiled object files for relinking. And musl is MIT, so no problems there either.


How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).


Firefox maintains a Flatpak package on Flathub. Flatpak uses runtimes to provide a base layer of libraries that are the same regardless of which distro you use.

https://beta.flathub.org/apps/details/org.mozilla.firefox


With difficulty, and not that well. For example, Firefox binaries require GTK built with support for X, despite only actually using Wayland at runtime if so configured. The reason people generally don't complain about it is that if you have this sort of weird configuration, you can usually compile Firefox yourself, or have it compiled by your distro. With binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.


I'm happy to blame the OS vendors for not creating a usable base environment; I think that's one of the core tenets of an OS, and not providing it is a problem. It may be easier and may push an ideological agenda, but I don't think it's the right thing to do.


Firefox has a binary they ship in a zip, which is broken, but they also officially ship a Flatpak, which is excellent.


Not sure what you mean; I've been using the Firefox zip for over a decade now with zero problems.


The irony of this comment/response pair being in this thread is delightful.


> The second approach, using LSB, was worse, as it specifies ancient libraries and doesn't handle things like OpenGL properly.

That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.


So this sort of bleeds into the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.


I guess this is another instance of the fact that Windows and macOS are operating systems, while "Linux" is a kernel powering multiple different operating systems.


It is important to note that this comment is from a time before snaps, Flatpaks, and AppImages.


Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plugin went away. So I download the zip, open Inkscape, open the plugin manager, go to add the new plugin via the file manager… and the opened file manager is unable to see my home directory (weird, because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow, though, I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!

Recently I downloaded an AppImage for digiKam. It immediately crashed when trying to open it, because, I believe, its glibc requirement did not match my system's version (a recent stable Ubuntu).

Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.

These new “solutions” have their own problems.


Gnome's insistence on using web pages for local configuration settings is the dumbest shit ever. It's built on top of a cross-platform GUI library, but instead of leveraging that they came up with a janky system using a browser extension where you're never 100% sure you're safe from an exploit.


Gnome and making bad choices, name a better combo


OSX and brainwashed plebs.


If snaps or flatpaks are the only future for Linux desktop software distribution then I'm switching to windows+wsl


Snaps are universally hated by the Linux community as they have many problems, but what's wrong with Flatpak?


Instead of fixing the fundamental problem, flatpaks/snaps bring a whole new layer of crap and hacks. They try to solve a real problem, but the way they do it is like reanimating dead body cells via cancer. And that's not even the worst part.

They can eventually lead to a "Microsoft Linux Store". Canonical pushes snaps for a reason, and they are partnered with MS. Flatpaks essentially follow the same route and can be embraced, extended, and extinguished, and we'll be left with bare kernel+glibc distros and the rest available for $20 in the Steam/MS/you-name-it store.


Yeah. Now people just statically link the dynamic libraries.


>The first approach is a lot of work, and suffers from breakages from time to time

Are there any distros that treat their public APIs as an unbreakable contract with developers like what MS does?


Red Hat claims, or at least claimed, that for EL. I think it's limited to within minor releases though, with major releases being where API breaks happen.

That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.

The most annoying thing to me was that RHEL 6 was still under support and had an ancient kernel that precluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.

Oftentimes you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python, and before you know it you're bringing in your own OpenSSL.

And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.

This is a place where I wish some BSD had won out. With all the chunks of the base userspace + kernel each moving in their own direction, it's impossible to get out of this place. Then add in every permutation of those pieces from the distros.

Multiple kernel versions * multiple libc implementations * multiple inits * …

I’d never try to make binary-only software for Linux. Dealing with packaging OSS is bad enough.


> No one in their right mind would run RHEL on a desktop.

*glances nervously at my corporate-issued ThinkPad running RHEL 8*


> No one in their right mind would run RHEL on a desktop.

I worked somewhere where we ran CentOS on the desktop. That seemed to work pretty well. I don't see why RHEL would be any worse, apart from being more expensive.


I ran CentOS on the desktop for many years. It was a very nice, solid setup that I could rely on updating without sweating about an upgrade breaking something. I've recently switched to Fedora in light of the recent CentOS 8 shenanigans, but CentOS 7 was wonderful at the time.


What did CentOS 8 do?


It ceased to exist. Red Hat stopped supporting the version that was equivalent to RHEL 8 and kept CentOS Stream for developers targeting RHEL 8 to use.

This came with some changes to open up the developer license program for RHEL so that it could be used for small scale production workloads.

The big problem was that the latter was only hinted at by the time the CentOS EOL was announced and didn't get spelled out in precise language for a few more months, which led to a lot of very angry sysadmins who had been using CentOS in production frantically searching for a new platform in the meantime.


I believe GP is referring to the early termination of CentOS Linux 8 at the end of 2021, rather than its matching RHEL 8 to 2029. Red Hat reallocated resources to CentOS Stream for 8+, which EOLs at the end of the respective RHEL major release's Full Support phase (the first 5/5.5 years).

As a result, new rebuild distributions spun up to fill in CentOS Linux's role in the ecosystem as bug-for-bug clones.

CentOS Linux 7 is still maintained, and will be until the mid-2024 EOL of RHEL 7.


I ran RHEL (IIRC it was RHEL 6) on my desktop at Amazon from 2013 to 2015, as did all SDEs.


> No one in their right mind would run RHEL on a desktop.

Err... yes, we do? It's a development base I know isn't going to change for a long time, and I can always flatpak whatever applications I need. Hell, now that RHEL 9 includes PipeWire, I put it on my DAW/DJ laptop.


No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/options to let clients opt in to bug fixes, or auto-detect known popular apps and apply special bug-fix behaviors just for them.

Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.


Most Apple-deprecated APIs stick around rather than "just stop working in newer versions of macOS." Binary compatibility is very well-maintained over the long term.


Red Hat definitely has kABI and ABI guarantees.


Are these Linux app distribution problems solved by using Flatpak?


Most of them are, yes. AppImage also solves this, but doesn't have as robust an update/package management system.


AppImage is basically a fancy zip file. It's still completely up to you to make sure the thing you put in the zip file will actually run on other people's system.


Yeah, their Linux guy obviously didn't know what he was doing.


Google Earth was released in 2005 (its predecessor, Keyhole EarthViewer, in 2001), and Flatpak in 2015. That's roughly a decade in which this wasn't an option.


Flatpak only came out in 2015.


In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?

AFAIK the LGPL license even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs themselves if that's what they want.

[1]: (or dlopening libs you bundle with your executable)
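
A sketch of the dlopen-a-bundled-lib variant from [1], assuming a hypothetical layout where the executable ships next to a lib/ directory (libbundled.so is a made-up name; /proc/self/exe is Linux-specific; link with -ldl on older glibc):

  #include <dlfcn.h>
  #include <libgen.h>
  #include <limits.h>
  #include <stdio.h>
  #include <unistd.h>

  int main(void)
  {
      /* Find our own location, then load the bundled copy by absolute
         path so the system's library search path is never consulted. */
      char exe[PATH_MAX];
      ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
      if (n < 0)
          return 1;
      exe[n] = '\0';

      char path[PATH_MAX];
      snprintf(path, sizeof(path), "%s/lib/libbundled.so", dirname(exe));

      void *handle = dlopen(path, RTLD_NOW);
      if (!handle)
          fprintf(stderr, "failed to load bundled lib: %s\n", dlerror());
      return handle ? 0 : 1;
  }

For normally linked libraries, linking with -Wl,-rpath,'$ORIGIN/lib' achieves the same relocatable layout without the manual path work.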


Ricers wanna rice! Can we spin the globe so fast that it breaks apart?

Would the hate mail also have included 'internal' users, say from Chromium OS?


Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.

A large-surface-area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on Linux, with a small API and potential for community, where more open availability got bogged down in this seemingly false choice of "which distributions would we support?"


> Due to IP reasons, this can't ship as code, so we need to ship binaries.

Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.


Could you have used something like this:

https://justine.lol/cosmopolitan/index.html


I'd assume not without violating causality?


I took the question to be whether having something like that available at the time would have solved any of their problems with distributing Google Earth for Linux.


I don't think Cosmopolitan as it currently exists would do it, because of the difficulty of making graphics cross-platform [0]. Maybe Cosmopolitan circa 2032?

[0]: https://github.com/jart/cosmopolitan/issues/35#issuecomment-....


Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base system stuff like libc and OpenGL, which provides pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.


And then you have many devs complaining about why MS doesn't want to invest time in MAUI for Linux. This is why.


One possible idea: https://appimage.org


Wait. Google Earth has always been available for Linux? https://www.google.com/earth/versions/


They probably mean the old desktop one that has been re-branded to "Google Earth Pro". The UI looks a decade old but it's still useful for doing more advanced things like taking measurements.


Yup. That's the one. If it works for you, great, but it crashes on symbol issues for many people.


FWIW, I use Google Earth Pro on Fedora quite often, and I'm deeply appreciative of the work it took to make that such a simple and enjoyable experience.

I hate that the vocal and tiny minority of linux users who are never satisfied are the ones that most people hear from.


Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.



