Win32 Is the Only Stable ABI on Linux? (hiler.eu)
456 points by pantalaimon on Aug 15, 2022 | 517 comments



Many moons ago, one of the things I did was to port the Windows version of Google Earth to both Mac and Linux. I did the Mac first, which was onerous because of all the work involved in abstracting away system-specific APIs, but once that was done, I thought Linux would be a lesser task, and we hired a great Linux guy to help with that.

Turns out, while getting it running on linux was totally doable, getting it distributed was a completely different story. Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base? The first approach is a lot of work, and suffers from breakages from time to time, and you alienate users not on your list of supported distros. The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

End result: management canned the Linux version because too much ongoing support work was required, and no matter what you did, you got hate mail from Gentoo users.


> Due to IP reasons, this can't ship as code, so we need to ship binaries. How do you do that?

I build on a distro with an old enough glibc following this table: https://gist.github.com/wagenet/35adca1a032cec2999d47b6c40aa... (right now rockylinux:8, which is equivalent to centos:8 and good enough for Debian stable and anything more recent than that; last year I was still on centos:7), use dlopen as much as possible instead of "normal" linking, and then it works on the more recent ones without issues.
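For what it's worth, here's a minimal sketch of the "dlopen instead of normal linking" part of that approach. The library and symbol names (libvulkan.so.1, vkGetInstanceProcAddr) are only illustrative; link with -ldl on older glibc:

    /* Resolve an optional library at runtime instead of linking against it,
     * so the binary still loads on systems where the library is missing. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *lib = dlopen("libvulkan.so.1", RTLD_NOW | RTLD_LOCAL);
        if (!lib) {
            fprintf(stderr, "optional lib not found: %s\n", dlerror());
            return 0;                       /* fall back gracefully */
        }
        /* Look the entry point up by name rather than importing it. */
        void *(*get_proc)(void *, const char *) =
            (void *(*)(void *, const char *))dlsym(lib, "vkGetInstanceProcAddr");
        if (get_proc)
            printf("resolved vkGetInstanceProcAddr\n");
        dlclose(lib);
        return 0;
    }

The win is that a missing or too-new library becomes a handled runtime condition instead of a loader failure at startup.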


I worked on a product that shipped as a closed source binary .so (across four OSes and two architectures) for almost seven years, and that's exactly what we did too — build on the oldest libc and kernel any of your supported distros (or OS versions) support, statically link as much as you can, and be defensive about _any_ runtime dependencies you have.


That's the trick. AppImage has a pretty good list of other best practices too: https://docs.appimage.org/reference/best-practices.html (applies even if you don't use AppImages).


If what you're doing works for you, great, but in case it stops working at some point (or if for some reason you need to build on a current-gen distro version), you could also consider using this:

https://github.com/wheybags/glibc_version_header

It's a set of autogenerated headers that use symbol aliasing to allow you to build against your current version of glibc, but link to the proper older versioned symbols such that it will run on whatever oldest version of glibc you select.
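Under the hood this is the .symver aliasing trick, which those generated headers just automate. A minimal hand-written sketch of the same idea (GLIBC_2.2.5 is the x86-64 baseline version; compile with -fno-builtin so the call isn't inlined away):

    /* Force the linker to bind memcpy to the old versioned symbol instead
     * of the newer memcpy@GLIBC_2.14, so a binary built on a current
     * distro still loads on distros with an older glibc. */
    #include <stdio.h>
    #include <string.h>

    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[6];
        memcpy(dst, "hello", 6);   /* resolves to memcpy@GLIBC_2.2.5 */
        puts(dst);
        return 0;
    }

Doing this by hand for every affected symbol is exactly the tedium the header collection above is meant to remove.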


glibc 2.34 has a hard break where you cannot compile with 2.34 and have it work with older glibc versions even if you use those version headers. It will always link __libc_start_main@GLIBC_2.34 (it's some kind of new security hardening measure, see https://sourceware.org/bugzilla/show_bug.cgi?id=23323).

Since additionally you also need to build all your dependencies with this same trick, including say libstdc++, it's really easiest to take GP's advice and build in a container with the old library versions. And nothing beats being able to actually test it on the old system.


> We need to ship binaries. How do you do that? Do you maintain a few separate versions for a few popular distributions? Do you target the Linux Standard Base?

When I worked on mod_pagespeed we went with the first approach, building an RPM and a DEB. As long as we built on the oldest still-supported CentOS and Ubuntu LTS, 32-bit and 64-bit, we found that our packages worked reliably on all RPM- and DEB-based distros. Building four packages was annoying, but we automated it.

(We also distributed source, so it may be that it didn't work for some people and they instead built from source. But people would usually ask questions on https://groups.google.com/g/mod-pagespeed-discuss before resorting to that, and I don't think I saw this issue.)


FWIW, these days Valve tries to solve the same problems with their Steam Runtime[0][1]. It still doesn't seem easy, but it looks like an almost workable solution.

[0] https://github.com/ValveSoftware/steam-runtime

[1] https://archive.fosdem.org/2020/schedule/event/containers_st...


A multi billion dollar company with massive investments in Linux making an almost workable solution means everyone else is screwed


Nope. Valve has to deal with whatever binaries clueless developers uploaded over the years, which they can't update, whereas you only need to learn how to make your one binary portable. Entirely different issues.


Well, or we could remember the idea of Linux… "IP reasons" shouldn't be an obstacle in the first place… lol


Exactly. I'm so tired of this excuse. They need to fix their own intellectual property problems, not bend the entire Linux ecosystem to their world view.


I feel this is the chief reason why the fabled "Year of the Linux Desktop" will never be a thing.

Microsoft expects Windows to be a means to an end: You run Windows to use your computer.

Linux neckbeards expect Linux to be the end to a means: You use your computer to run Linux.

If Linux is fragmented so bad that it is completely incompatible with the way software development and support work in the real world, the problem is Linux because Windows, Mac/iOS, and Android (incidentally a flavor of Linux) can all deal with it.

Of course, if you're not interested at all in mainstream desktop Linux adoption and are content hacking away at FOSS code while grumbling about the evils of capitalism and proprietary code, then more power to you.


You can absolutely choose Linux for strictly practical reasons - if you do serverside or embedded programming choosing anything else is counterproductive. But for desktop programming Linux is a struggle.


And yet, successful software is developed and run on Linux.


I've been doing productive distributed computing for about 25 years, most of them not spent on GNU/Linux.

Nowadays, with cloud platforms making the underlying OS transparent, even less.


You run GNU/Linux because you believe in free software. It's not surprising that you run into problems when developing nonfree software for it.


No, you run GNU/Linux for the freedom. Everyone else* runs Linux because it’s cheap, it’s fast, and it works with all their GoRust ElasticDockernetes gizmos.


We want things on our terms, not theirs. They're the ones who need to learn to do things our way, not the other way around.

Are you seriously telling me these billion dollar corporations can't manage to get in contact with some popular distribution's maintainers and work something out?


Why on earth would they ever do that? Nobody in the entire world, to several approximations, would ever know or care. I doubt any of those companies are interested in writing off the wasted dev cost for ideological purity.


>Are you seriously telling me these billion dollar corporations can't manage to get in contact with some popular distribution's maintainers and work something out?

Nope, because those Linux distros aren't making them any worthwhile money while sending over far too many worthless end-user complaints.

If Linux neckbeards truly want to realize the Year of the Linux Desktop, they have to accept how the rest of the world at large works and play by those rules. It's how Android obtained mainstream success despite being Linux, and it's something any other Linux distro can do if they ditched the neckbeard pride.

Or to put it another way: The vast majority of computer users don't care about free-as-in-freedom or open-as-in-auditable source code. The only thing users care about is getting shit done. All other operating systems, including Android, understand and respect this. It's only Linux that chooses to be either willingly naive or in denial.


>If Linux neckbeards truly want to realize the Year of the Linux Desktop, they have to accept how the rest of the world at large works and play by those rules.

Firstly, name-calling does not help in getting your point across.

The opposite argument could easily be made. What would be the point of the "year of the Linux desktop" if Linux is not substantially different from other OSes in the way it treats its users? That's why nobody is celebrating the "era of the Linux palmtop" with Android.

Linux makes different trade-offs from those made by the commercial OSes. The diversity is valuable. That's not to say there isn't room for improvement, but I would be pretty bummed if Linux lost what makes it different.


Linux is more than likely suited for a different sector of the computing market than the desktop, considering its endless failures to break through (Android aside) and the unchanging and fundamentally incompatible-with-desktop ideologies held by the neckbeards who really run the whole show.

The point I want to convey is not so much that Linux should change (though as a desktop user I certainly wouldn't mind), but that anyone within Linux who complains about How The World Is Wrong(tm) needs to wake up from their freedom-infused obsession and smell reality. The rest of the desktop world functions fine, so if it's only Linux that Just Can't(tm) then the problem is Linux.


> Linux is more than likely suited for a different sector of the computing market than the desktop, considering its endless failures to break through

Linux has suited me very well as a desktop for 10+ years. (I'm aware it is probably not suitable for every person or use case.)

To maintain the health of the Linux desktop(s), we do need to be open to new people and ideas from outside. But Linux is, and should remain, different from proprietary OSes. Otherwise, what is the point?


Being different for difference's sake doesn't mean much if it's not useful.

It's like saying a certain screwdriver must be made with a smooth ball point to be different from other screwdrivers. Never mind that nobody can figure out a use for such a screwdriver and everyone happily (or begrudgingly) goes back to using flathead, Phillips, and Torx drivers.

Most people don't care about free-as-in-freedom software or open-as-in-auditable source code; they just want to run Office and Photoshop and maybe play some snazzy games. A tool must first and foremost be useful in order to achieve mainstream appeal, and Linux has consistently failed to do so because it is flat out not useful for most people.

If achieving the "Year of the Linux Desktop" is a real goal of the Linux community at large, some fundamental changes in ideology must happen:

* Acceptance or at least tolerance of proprietary source code (eg: Nvidia drivers). Most users don't care what philosophy of code they're running, they just want their computer to work and be useful.

* More emphasis on GUIs and a refined user experience. Neckbeards might only want the CLI and consider anything else below their ability to care, but most users want a good GUI like in any other widely accepted operating system.

* Accept that there can be such a thing as too much choice. Developers don't want to look after their code on five dozen flavors of Linux, some consolidation and stabilization of distros and runtime environments are a hard requirement to beating the chicken-or-egg problem.

* Less hostility to new users and outsiders. The elitism within the Linux community at large is stupid. Being a CLI wizard doesn't give anyone a higher horse to ride on, and it's not going to attract new users anyway.

* Less marketing emphasis on liberal and FOSS ideologies. They're all fine concepts to have, but most users care about free as in free beer, not free as in freedom. Cater to what users want in order to win users over.

Linux can be many great things, but its ideologies and philosophies hold it back in a wider world that values usability and practicality more than freedoms and openness.

If "Year of the Linux Desktop" is not a goal, then Linux can continue as it currently is. The enterprise world will be Linux's stronghold for the foreseeable future as before, and the desktop world will continue moving on with Windows/Mac/iOS/Android like always for better or worse. But Linux can't then complain the desktop world keeps disregarding them, because priorities are different and Linux chose to be incompatible.


Desktop linux is useful to lots of people. It is not a ball pointed screwdriver.

I'm just one linux user. But my impression is that the "the year of the Linux desktop" is a joke even in the Linux community. I don't really care about world domination for Linux (although that's almost what we have on the server side).

I want a healthy enough market share for software availability to be good. For me, it is pretty good. I had to spin up a Windows VM for the first time in about 5 years last weekend to set up my doorbell with the manufacturer's utility. Otherwise there are more than enough games that run well on Linux to occupy the time I have (which isn't a lot). All the basics are covered and everything else is on the web. Dev tooling is second to none.

I want a useful, powerful, general purpose computing environment where I can do what I want without unnecessary obstacles, whether or not what I want is aligned with the business model of a vendor. The philosophy is abstract, but in my view it has a real impact.

No software is "free as in beer" to create. I really don't know, but I wonder if marketing it as such would be a bad idea. Free software projects often need contributors as much as users. Companies with bigger marketing budgets want their stuff to be valued.

Otherwise, I agree with most of what you said about onboarding new users and less technical people, as well as fragmentation and ABI stability. Some of it is a necessary trade off - if people have the freedom to make changes to their software, you're going to end up with more diversity compared to ecosystems where that's not allowed.

Nvidia has announced the open-sourcing of their driver for Linux compatibility. Perhaps the GPL die-hards were right to hold firm on that...

The term "neckbeard" is an unkind stereotype. Please reconsider your use of it.


I know there are many reasonable and respectable people in the Linux community, a lot of my friends are as such, but they are sadly not the guys at the forefront of Linux development and marketing at large.

I have no plans on reconsidering calling certain parts of the Linux community neckbeards. If they want to force their ideologies down other peoples' throats ("you fix your intellectual property problems", "proprietary code is evil", "use the terminal", etc.), I'll call them how I see fit. Respect is earned, as they say; and they certainly aren't respecting the users nor the world at large.

That aside, objectively it can't be denied that Linux doesn't satisfy most users' needs. If it did we wouldn't be having this discussion nor would Windows have 80~90% of the desktop market. You're fortunate to be someone that desktop Linux can sufficiently provide for, but not everyone is like that.

Windows/Mac/iOS/Android, for all their faults, can satisfy the needs and desires of almost everyone; it's something Linux critically fails at and needs to improve on regardless of worldly aspirations.

Nvidia open sourcing large portions of their driver was a welcome turn of events, though like you I'm not sure if credit for it should be given to the anti-proprietary diehards... :V


Back to the insults again.

Linux is working fine for those who want to use it as such. No neckbeard owes the 'fuckstain' rest of the world a god damn thing.


Actually, the only Linux thing Android has is the kernel; nothing in its userspace APIs exposes it to app developers.

Any access to Linux subsystems or syscalls on Android works as a matter of luck on specific devices, or is an area where Google isn't yet enforcing Android's userspace security rules.


> It's how Android obtained mainstream success despite being Linux, and it's something any other Linux distro can do if they ditched the neckbeard pride.

They can do it, but it's not guaranteed to lead to mainstream success, and is in fact unlikely to. Android had a multi-billion-dollar corporation behind it, and likely needed that to succeed.

Another Linux distro that freezes its ABI and supports commercial software distribution likely would be irrelevant until it got hundreds of millions of users, and getting there is a big challenge.


FSF: you have to ship your code as GPL if you want to interface properly with the Linux kernel

Software companies: OK then we won't ship our software for Linux

linux users: (surprised pikachu)

like, what exactly is the "excuse" here? FSF set a deliberately onerous license on the assumption that you will either join (because you need to use the GPL codebase) or do it yourself... and companies either do it themselves, or don't release software for the platform. Or they do and it breaks.

Same story as with ZFS basically. The license that is necessary for commercial games/software to run in the necessary ways (that require interaction with the kernel) is incompatible with the license that FSF has chosen. And you can't build an anticheat without poking at the kernel, otherwise it's trivial to hide from it.

Anti-cheat is fundamentally a problem of controlling the code that can run on an end-user's system - looking at memory or packets to scrape out useful data that the game is not presenting to the user - and that's functionally incompatible with a free software system in the first place. And attempting to do so requires interacting with the kernel, and if you're not in the kernel tree then you're chasing the kernel ABI. And anti-cheat rootkits will never be in the kernel tree anyway, period.

Where is the excuse? These are just incompatible products at every level, both conceptually and legally/licensing. That's by design, that's what GPL is intended to do as a copyleft license.

People seem to have this weird idealistic view of GPL, that it's about "protecting user freedoms" and that just makes all the problems of the world go away and everybody happily falls into line, but the mechanism by which it works is by causing problems and removing interoperability for developers of third-party software with incompatible licenses. If you don't do GPL, you can't play with the GPL codebase, and if your kernel is GPL and you need to do kernel things, then as they say - "wow, sucks to be you". But that's working as intended: GPL is a license which is intended to cause breakage and hinder developer freedoms, strategically, in the interests of greater end-user freedom in the long term.

If you just want open-source, the linux kernel should have been MIT/BSD licensed and it wouldn't have been a problem. But GPL isn't about open-source, it's about pushing back on copyright licensing as an ideology.


FWIW, one of the things that's been happening in FreeBSD for the past couple of years is linuxkpi: essentially an implementation of various APIs that a Linux kernel would provide, implemented as wrappers around FreeBSD's native kernel APIs. This is being used for graphics drivers - for Intel and ATI, FreeBSD uses the drivers from Linux kernel... 5.10 I believe. Linuxkpi, while not ideal, makes maintaining them practical, compared to gazillions of patch collisions on every merge from upstream if they were ported in a traditional way. The same is happening with WiFi drivers, and it's quite obvious that it will get more common.

Which is to say:

freebsd users: wave


Stop damning the GPLv2 and its stance on derivatives. Given that the main author, the project leads, and the supporting developers and stakeholders on the Linux kernel project have already defined in extreme detail what they expect the phrase "derived work" to mean, given past court cases it's about as clear as it gets what is and isn't expected to be allowed; the license and its scope are now a resolved issue. Unless you plan to prosecute this in a territory in bad faith.

The ZFS situation here is a completely different kettle of fish (as is past bad behaviour by Nvidia/VMware) and doesn't need to be dragged in via a license discussion on an ABI thread.

The kernel keeps an evolving ABI, as expected. This is both a pro and a con for Linux, but frankly it's so rare to hit an incompatibility at the kernel layer that it's as good as stable. More things typically break elsewhere many times over before this is an issue - on macOS or Win1x too, for that matter...

As for anti-cheat, what that's even got to do with Linux's API situation is beyond me.


.NET 5+ is my choice as an SME with this challenge. I run the same code across every device and only support what MS supports. These days you could likely redo this with a webview and WASM... Let the webview handle the graphics abstraction for you!


Hybrid Blazor, then.


Was static linking not enough?

I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

It is possible, though, that whatever you statically link doesn't work with the running kernel, of course. And there are a lot of variants out there; every distribution has their own patch cadence. (A past example of this was the Go memory corruption issue from 1.13 on certain kernels. 1.14 added various checks for distribution + kernel version to warn people of the issue, and still got it wrong in several cases. Live on the bleeding edge, die on the bleeding edge.)


> I feel like the problem most people run into today is glibc vs. musl differences. They develop on Ubuntu, then think they can just copy their binaries into a "FROM alpine:latest" container, which doesn't actually work.

Could it work with gcompat? Alpine has it in the community repo.

https://git.adelielinux.org/adelie/gcompat


gcompat is roughly the "yeah the plugs look the same so just stick the 120 V device into the 240 V socket" approach to libc compatibility.


That's running it directly on musl. gcompat is more like a passive adapter, which works except you need to know whether the device you're plugging in actually supports 240 V; most stuff does nowadays, but when it doesn't, it explodes horribly.


Static linking against MUSL only makes sense for relatively simple command line tools. As soon as 'system DLLs' like X11 or GL are involved it's back to 'DLLs all the way down'.


> Was static linking not enough?

It is a GPL violation when non-GPL software does it.


glibc is LGPL, not GPL, so it wouldn't be a violation as long as you provided its source code and ability to replace it by the user, for example by providing compiled object files for relinking. And musl is MIT, so no problems there either.


How do Firefox and Blender do it? They just provide compressed archives, which you uncompress into a folder and run the binary, no problem. I myself once had to write a small CLI program in Rust, where I statically linked musl. I know, can't compare that with OpenGL stuff, but Firefox and Blender do use OpenGL (and perhaps even Vulkan these days?).


Firefox maintains a Flatpak package on Flathub. Flatpak uses runtimes to provide a base layer of libraries that are the same regardless of which distro you use.

https://beta.flathub.org/apps/details/org.mozilla.firefox


With difficulty, and not that well. For example, Firefox binaries require GTK built with support for X, despite only actually using Wayland at runtime if configured. The reason people generally don't complain about it is that if you have this sort of weird configuration, you can usually compile Firefox yourself, or have it compiled by your distro. With binary-only releases, all complaints (IMO justifiably) go to the proprietary software vendors.


I'm happy to blame the OS vendors for not creating a usable base env; I think that's one of the core tenets of an OS and not providing it is a problem. It may be easier and may push an ideological agenda, but I don't think it's the right thing to do.


Firefox has a binary they ship in a zip which is broken but they also officially ship a Flatpak which is excellent.


Not sure what you mean; I've been using the Firefox zip for over a decade now with zero problems.


The irony of this comment/response pair being in this thread is delightful.


> The second version, using LSB, was worse, as they specify ancient libraries and things like OpenGL aren't handled properly.

That was a shame. There was a lot of hope for LSB, but in the end the execution flopped. I don't know if it would have been possible to make it succeed.


So this sort of bleeds in to the Init Wars, but there's a lot of back and forth about whether LSB flopped or was deliberately strangled by a particular player in the Linux ecosystem.


I guess this is another instance of Windows and Mac OS are operating systems. "Linux" is a kernel, powering multiple different operating systems.


It is important to note that this comment is from a time before snaps, flatpaks and AppImages.


Yesterday I tried to install an Inkscape plugin I have been using for a long time. I upgraded my system and the plug-in went away. So I download the zip, open Inkscape, open the plugin manager, go to add the new plugin by file manager… and the opened file manager is unable to see my home directory (weird because when opening Inkscape files it can see home, but when installing extensions it cannot). It took some time to figure out how to get the downloaded file into a folder the Inkscape snap could see. Somehow though I still could not get it installed. Eventually I uninstalled the snap and installed the .deb version. That worked!

Recently I downloaded an AppImage for digiKam. It immediately crashed when trying to open it, because I believe glibc did not work with my system version (a recent stable Ubuntu).

Last week I needed to install a gnome extension. The standard and seemingly only supported way of doing this is to open a web page, install a Firefox extension, and then click a button on the web page to install it. The page told me to install the Firefox extension and that worked properly. Then it said Firefox didn’t have access to the necessary parts of my file system. It turns out FF is a snap and file system access is limited, so the official way of installing the gnome extension doesn’t work. I ended up having to download and install chrome and install the gnome extension from there.

These new “solutions” have their own problems.


Gnome's insistence on using web pages for local configuration settings is the dumbest shit ever. It's built on top of a cross platform GUI library but instead of leveraging that they came up with a janky system using a browser extension where you're never 100% sure you're safe from an exploit.


Gnome and making bad choices, name a better combo


OSX and brainwashed plebs.


If snaps or flatpaks are the only future for Linux desktop software distribution then I'm switching to windows+wsl


Snaps are universally hated by the Linux community as they have many problems, but what's wrong with Flatpak?


Instead of fixing the fundamental problem, flatpaks/snaps bring a whole new layer of crap and hacks. They try to solve a real problem, but the way they do it is like reanimating dead body cells via cancer. But that's not even the worst part.

They can eventually lead to a "Microsoft Linux Store". Canonical pushes snaps for a reason, and they are partnered with MS. Flatpaks essentially follow the same route and can be embraced, extended, and extinguished, and we'll be left with bare kernel+glibc distros and the rest available for $20 in the Steam/MS/you-name-it store.


Yeah. Now people just statically link the dynamic libraries.


>The first approach is a lot of work, and suffers from breakages from time to time

Are there any distros that treat their public APIs as an unbreakable contract with developers like what MS does?


Red Hat claims, or at least claimed, that for EL. I think it's limited to within minor releases though, with major releases only guaranteeing API compatibility.

That’s fine if you’re OK relying on their packages and 3rd party “enterprise” software that’s “certified” for the release. No one in their right mind would run RHEL on a desktop.

The most annoying to me was that RHEL 6 was still under support and had an ancient kernel that excluded running Go, GraalVM, etc. static binaries. No epoll() IIRC.

Often times you find yourself having to pull more and more libraries into a build. It all starts with wanting a current Python and before you know it you’re bringing in your own OpenSSL.

And they have no problem changing their system management software in patch releases. They’ve changed priority of config files too many times. But that’s another rant for another day.

This is a place where I wish some BSD won out. With all the chunks of the base userspace + kernel each moving in their own direction it’s impossible to get out of this place. Then add in every permutation of those pieces from the distros.

Multiple kernel versions * multiple libc implementations * multiple inits * …

I’d never try to make binary-only software for Linux. Dealing with packaging OSS is bad enough.


> No one in their right mind would run RHEL on a desktop.

glances nervously at my corporate-issued ThinkPad running RHEL 8


> No one in their right mind would run RHEL on a desktop.

I worked somewhere where we ran CentOS on the desktop. That seemed to work pretty well. I don't see why RHEL would be any worse, apart from being more expensive.


I ran CentOS on the desktop for many years. It was a very nice, solid setup that I could rely on updating without sweating about an upgrade breaking something. I've recently switched to fedora in light of recent CentOS 8 shenanigans but CentOS 7 was wonderful at the time.


What did centos 8 do?


It ceased to exist. Red Hat stopped supporting the version that was equivalent to RHEL 8 and kept CentOS Stream for developers targeting RHEL 8 to use.

This came with some changes to open up the developer license program for RHEL so that it could be used for small scale production workloads.

The big problem was that the latter was only hinted at by the time the CentOS EOL was announced and didn't get spelled out in precise language for a few more months, which led to a lot of very angry sysadmins who had been using CentOS in production frantically searching for a new platform in the meantime.


I believe GP is referring to the early termination of CentOS Linux 8 at the end of 2021 rather than matching RHEL 8 to 2029. Red Hat reallocated resources to CentOS Stream for 8+, which EOLs at the end of their respective RHEL major release's Full Support phase (the first 5/5.5 years).

As a result, new rebuild distributions spun up to fill in CentOS Linux's role in the ecosystem as bug-for-bug clones.

CentOS Linux 7 is and will still be maintained until the mid-2024 EOL of RHEL 7.


I ran RHEL (IIRC it was RHEL 6) on my desktop at Amazon from 2013 to 2015, as did all SDEs.


> No one in their right mind would run RHEL on a desktop.

Err.... yes we do? It's a development base I know isn't going to change for a long time, and I can always flatpak whatever applications I need. Hell, now that RHEL 9 includes pipewire I put it on my DAW/DJ laptop.


No, no one does. It's a lot more work to maintain all public APIs and their behavior for all time; it can often prevent even fixing bugs, if some apps come to depend on the buggy behavior. Microsoft would occasionally add API parameters/ options to let clients opt-in to bug fixes, or auto-detect known-popular apps and apply special bug-fix behaviors just for them.

Even Apple doesn't make that level of "unbreakable contract" commitment. Apple will announce deprecations of APIs with two or three years of opportunity to fix them. If apps don't upgrade within the timeframe, they just stop working in newer versions of macOS.


Most Apple-deprecated API stick around rather than “just stop working in newer versions of macOS.” Binary compatibility is very well-maintained over the long term.


Red Hat definitely has kABI and ABI guarantees.


Are these Linux app distribution problems solved by using Flatpak?


Most of them are, yes. AppImage also solves this, but doesn't have as robust of an update/package management system


AppImage is basically a fancy zip file. It's still completely up to you to make sure the thing you put in the zip file will actually run on other people's systems.


Yeah, their Linux guy obviously didn't know what he was doing.


Google Earth was released in 2001, and Flatpak in 2015. That's a 14 year window of time in which this wasn't an option.


Flatpak only came out in 2015.


In that context, cannot the issue be sidestepped entirely by statically linking[1] everything you need?

AFAIK the LGPL license even allows you to statically link glibc, as long as you provide a way for your users to load their own version of the libs themselves if that's what they want.

[1]: (or dlopening libs you bundle with your executable)
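A rough sketch of footnote [1], i.e. dlopening a library you bundle next to your executable rather than depending on the distro's copy (libbundled.so is a placeholder name; error handling kept minimal, link with -ldl):

    #include <dlfcn.h>
    #include <libgen.h>
    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        /* Find the directory the executable itself lives in. */
        char exe[PATH_MAX];
        ssize_t n = readlink("/proc/self/exe", exe, sizeof(exe) - 1);
        if (n < 0) return 1;
        exe[n] = '\0';

        /* Load the library we shipped alongside the binary. */
        char path[PATH_MAX];
        snprintf(path, sizeof(path), "%s/libbundled.so", dirname(exe));
        void *lib = dlopen(path, RTLD_NOW);
        printf("%s: %s\n", path, lib ? "loaded" : dlerror());
        return 0;
    }

The same effect is often achieved declaratively with an rpath of $ORIGIN, but dlopen keeps the dependency optional.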


Ricers wanna rice! Can we spin the globe so fast that it breaks apart?

Would the hate mails also have included 'internal' users, say from Chromium-OS?


Another approach might be a hybrid, with a closed-source binary "core", and open-source code and linkage glue between that and OS/other libraries. And an open-source project with one-or-few officially-supported distributions, but welcoming forks or community support of others.

A large surface area app (like Google Earth?) could be less than ideal for this. But I've seen a closed-source library, already developed internally on linux, with a small api, and potential for community, where more open availability quagmired on this seemingly false choice of "which distributions would we support?"


> Due to IP reasons, this can't ship as code, so we need to ship binaries.

Good, it should be as difficult as possible, if not illegal, to ship proprietary crap to Linux. The operating system was always intended to be Free Software. If I cannot audit the code, it’s spyware crap and doesn’t belong in the Linux world anyway.


Could you have used something like this:

https://justine.lol/cosmopolitan/index.html


I'd assume not without violating causality?


I took the question to be whether having something like that available, at the time, would have solved any of their problems with distributing Google Earth for Linux


I don't think Cosmopolitan as it currently exists would do it, because of the difficulty of making graphics cross-platform [0]. Maybe Cosmopolitan circa 2032?

[0]: https://github.com/jart/cosmopolitan/issues/35#issuecomment-....


Loki managed to release binaries for Linux long before Google Earth was a thing. I'm not going to claim that things are/were perfect, but you never needed to support each distro individually: just ship your damned dependencies, except for base system stuff like libc and OpenGL, which does provide pretty good backwards compatibility, so you only need to target the oldest version you want to support and it will work on newer ones as well.


And then you have many devs complaining as to why MS doesn’t want to invest time on MAUI for Linux. This is why.


One possible idea: https://appimage.org


Wait. Google Earth has always been available for Linux? https://www.google.com/earth/versions/


They probably mean the old desktop one that has been re-branded to "Google Earth Pro". The UI looks a decade old but it's still useful for doing more advanced things like taking measurements.


Yup. That's the one. If it works for you, great, but it crashes on symbol issues for many people.


FWIW, I use Google Earth Pro on Fedora quite often, and I'm deeply appreciative of the work it took to make that such a simple and enjoyable experience.

I hate that the vocal and tiny minority of linux users who are never satisfied are the ones that most people hear from.


Flatpak solved this issue. You use a "runtime" as the base layer, similar to the initial `FROM` in Dockerfiles. Flatpak then runs the app in a containerized environment.


Agree. Had a few games on Steam crap out with the native version; forced it to use Proton with the Windows version and everything worked flawlessly. Developers natively porting to Linux seem to be wasting their time.

Funnily enough, with Wine we've kinda recreated the model of modern Windows, where Win32 is a personality on top of the NT API, which then interfaces with the kernel. Wine sits between the application and the zoo of libraries, including libc, that change all the time.


> Developers natively porting to linux seem to be wasting their time.

Factorio runs so much better than any of this emulationware; it's one of the reasons I love the game so much and gifted licenses to friends using Windows.

Some software claims to support Linux but uses some tricks to avoid recompiling, and it's always noticeable: either as lag, as UI quirks, or some features plainly don't work because all the testers were Windows users.

Emulating as a quick workaround is all fair game but don't ship that as a Linux release. I appreciate native software (so long as it's not java), and I'm also interested in buying your game if you advertise it as compatible with WINE (then I'm confident that it'll work okay and you're interested in fixing bugs under emulation), just don't mislead and pretend and then use a compatibility layer.


In case you weren't aware Wine is not an emulator, it is a compatibility layer.

The whole point of wine is to take a native Windows app, only compiled for Windows and translate its Windows calls to Linux calls.


> In case you weren't aware Wine is not an emulator, it is a compatibility layer.

Ehhh. I know it’s in the name, but I feel like the significance is debatable. It’s not a CPU emulator, true. It is emulating calls, which is emulation of a sort.


Usually when you say "emulator" people think there's an inherent performance hit because of a fetch-decode-execute interpreter loop somewhere. Reimplementations of things don't have that performance hit even though they are lumped under the same umbrella as actual interpreters and recompilers.

Related note: if WINE is an emulator why isn't Linux or GNU? They both reimplement parts of UNIX, which was even more proprietary than Windows is today.


Nowadays it's emulators all the way down.

On most of these architectures the software eventually executes as x86 machine code, and the distance between x86 machine code and the actual processes inside a modern CPU implementing the x86 instruction set is so vast you could call a modern CPU an "x86 emulator built in hardware."


Entirely accurate. Also makes it easier to have a firmware/microcode update that gives you a new instruction.


> If WINE is an emulator why isn't Linux or GNU?

I mean, it depends on the context. I don't think it would be wrong to say that Linux "emulates a UNIX environment" or some such, which is closer to what OP actually wrote about Wine.

You've probably used a "terminal emulator" at some point today. ;)


It does have a terminal emulator, but you can also drop to a tty.


> if WINE is an emulator why isn't Linux or GNU?

They... are? I mean for that matter Intel/AMD instruction sets are CISC emulators on top of RISC silicon.


UNIX source code, at least for the original versions, was released in 2002: https://slashdot.org/story/02/01/24/0146248/caldera-releases...


In regular joe-schmo parlance, an emulator would be something that translates a hardware machine into software that is run on a different machine. Hardware being the important word here. Performance has nothing to do with how people use the term emulator in regular parlance.


It’s a reimplementation of the APIs rather than an emulation. Same as how Linux reimplemented UNIX APIs, but it’s not a UNIX emulator.


That's still emulating the underlying API, and accepted usage of the word. Much like the FreeBSD linux emulator translates linux syscalls into FreeBSD ones.


By that logic any modern Windows is an emulator of Win32, since that is not a kernel API but a user space library "emulating" it.

Exactly the same way as Wine. Wine does not translate the calls; for the most part it actually implements the underlying logic.

Win32 is just a bunch of shared libraries: https://en.wikipedia.org/wiki/Microsoft_Windows_library_file...


You're exactly right, and the Wine project agrees.

> That said, Wine can be thought of as a Windows emulator in much the same way that Windows Vista can be thought of as a Windows XP emulator: both allow you to run the same applications by translating system calls in much the same way. Setting Wine to mimic Windows XP is not much different from setting Vista to launch an application in XP compatibility mode.

> [...]

> "Wine is not just an emulator" is more accurate. Thinking of Wine as just an emulator is really forgetting about the other things it is. Wine's "emulator" is really just a binary loader that allows Windows applications to interface with the Wine API replacement.


Is any compatibility shim an emulator?


This is all really just a philosophical question as to how you choose to use the word. It's the same as people who get in a twist over every game with procedural generation and permadeath being called a "Roguelike" even though that particular subgenre used to be more specific to turn-based RPGs with procedural generation, permadeath and total loss of all progress between runs.

People who came into using the term earlier tend to think of it more narrowly, but colloquial use of the term has drifted to mean something more generic, e.g. "emulation" used to mean "making one piece of hardware pretend to be another", but now can sometimes just mean, "when one thing acts like another at all".


In this case, however, the term "to emulate" predates microprocessors, so it clearly can't have ever referred exclusively to ISA translation!

"Emulator" might be a more recent term—it would be interesting to see the etymology, actually—but it's reasonable to conclude that anything which "emulates" must be an "emulator". (Also, OP didn't actually use the word "emulator".)

Edit: Nope, the word "emulator" dates back to at least the 1800s (although it has certainly grown in usage more recently): https://books.google.com/ngrams/graph?content=emulator&year_...


Huh, what happened in 1984/85 that made "emulator" so much more popular a term than before?

Aha! It was Apple's Macintosh release, which included an Apple II emulator built-in: https://books.google.com/books?id=Ti8EAAAAMBAJ&pg=PA13&lpg=P...


I don't think it could have been just that? Usage peaked in 1984, but it rose quite steadily starting in the early 70s.

I do assume most of the increase in usage was related to computers/software, however. The timing is right.


Saying Wine is an emulator is as wrong as saying Docker (on Linux) is a virtual machine, even though you could say it allows you to run a "virtual environment" in the same hand-wavy way you're using the word "emulating" in your sentence.


How is it hand-wavy?

> emulate (transitive verb)

> To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

It's the exact same way its used by FreeBSD for its linux compatibility layer. It's the same way that Wine even uses in their FAQ.

> That said, Wine can be thought of as a Windows emulator in much the same way that Windows Vista can be thought of as a Windows XP emulator: both allow you to run the same applications by translating system calls in much the same way. Setting Wine to mimic Windows XP is not much different from setting Vista to launch an application in XP compatibility mode.

> [...]

> "Wine is not just an emulator" is more accurate. Thinking of Wine as just an emulator is really forgetting about the other things it is. Wine's "emulator" is really just a binary loader that allows Windows applications to interface with the Wine API replacement.


The problem with this definition is that it's so broad it encompasses many many things that are never talked about as “emulators”. By this definition, Docker is an emulator, a VM is an emulator, an x86_64 CPU is an emulator (because it “emulates“, in the broadest sense, x86), a C compiler is an emulator (“emulating” the PDP-11 on modern hardware), etc.

Even your own quote reveals the issue:

> > That said, Wine can be thought of as a Windows emulator in much the same way that Windows Vista can be thought of as a Windows XP emulator

Yet nobody talks about the latest Windows as being an emulator for older windows …

In short, this definition is akin to defining humans as "featherless bipeds"; we definitely fit this definition, but it's way too broad to be useful.


I think most people interpret emulation as CPU emulation, not a compatibility layer; otherwise .NET Core is probably just one fat emulator.


I hate to break it to those people, but that approach to emulation is only used for very old systems. Once you get into 32 bit, it's mostly HLE.


By that definition, wouldn't the whole of POSIX simply be an "emulation layer" ;-)


It is not a "Windows Emulator", but it is certainly a "Windows Userspace Emulator"

Dolphin can run Wii binaries on top of Linux, by emulating a Wii.

Wine can run Windows binaries on top of Linux, by emulating Windows Userspace.

Why would one be an emulator, and the other is not?

Is there some distinction in what an emulator is that goes against common sense?

This reminds me of the Transpiler/Compiler "debate." They're both still compilers. They're both emulators (VMs & Compatibility Layers).

What the creators meant to say, IMO, is WINVM, "Wine is not a Virtual Machine".


Wine is more like a re-implementation of windows Userland using a different kernel (and graphics libraries).

Since the hardware is all the same, there is nothing to emulate or compile.

Or from the other direction - windows NT contains a re-implementation of the win16 API. Is that an emulator?


The creators also would not call virtual machines emulators because they don't translate CPU instructions, which is their narrow criteria for "emulator" that nobody else seems to use.


> is their narrow criteria for "emulator" that nobody else seems to use.

Except the people actually writing or talking about real “emulators”. You know, for things like NES or Gameboy on x86, x86 on WASM, etc.

Sure, you can argue that VMs or Wine are emulators in the broadest sense, but then I could argue that your CPU is an emulator too since it doesn't really run ASM, and with that very loose meaning almost anything computer-related is an emulator. And in practice that's never what people mean when we're talking about an emulator. (Even this thread started with the wrong postulate that WINE must incur a performance penalty because the commenter believed it was an emulator.)


Wine-like things have long been called emulators.

When I was at Interactive Systems Corporation in the mid-to-late '80s and we were porting System V Release 3 to the 386 for AT&T, we wrote a Wine-like program called i286emul to run 286 System V Release 2 binaries. We and AT&T called it an emulator [1].

Later AT&T and Microsoft and Sun were involved in various projects to merge features from System V R3, BSD, SunOS, and XENIX into System V R4. As part of that they wanted a way to run 286 XENIX binaries, and Microsoft contracted with Interactive for that. We wrote another Wine-like program for that called x286emul. We and Microsoft called it an emulator too [2]

The XENIX emulator led to the stupidest meeting I have ever had to attend. Microsoft said there was an issue with the kernel that could not be resolved over the phone or email. So me and the other guy working on x286emul had to go to Microsoft for a meeting.

A flag needed to be added to the process data structure that would tell the kernel to make some slight changes to a system call or two, due to some differences between System V and XENIX. It was something like XENIX having separate error codes for some things System V rolled into one, or something like that.

The meeting was about how to set/clear that flag. Microsoft wanted to know if we preferred that it be done as a new flag to an existing system call or if a new system call should be added. I looked at the other guy from Interactive and said something like "A flag's fine for me", he said he agreed. We said "flag" to Microsoft, and the meeting ended.

That couldn't have been handled by phone or email?

[1] http://osr507doc.xinuos.com/man/html.C/i286emul.C.html

[2] http://osr507doc.xinuos.com/man/html.C/x286emul.C.html


Things like Wine used to be called emulators, but this usage fell out of fashion a while ago. Now the word "emulator" has generally been refined to mean an (inherently slow) hardware emulator, and most of the compatibility layers explicitly market themselves as not being emulators: "Wine is not an emulator", "Rosetta isn't an emulator", "x86 compatibility mode is not an emulator", "virtualization isn't emulation", and so on.

And even the starting comment on that thread seems to abide by this definition, as it complains about some (imaginary) emulation overhead when using Wine.

Language changes over time as usage evolves: when the 80286 was released, the French word «baiser» still meant "to kiss" for most people; now it means "to fuck".


Every new user seems to think Wine is an emulator, and plenty still do after, so I don't believe it's out of fashion. If they don't, it's only because someone tried to make them feel dumb about it with the WINE acronym, which seems to exist because so many people call it an emulator. Maybe if that many people are mistaken, they're actually right.

I especially don't know who's calling Rosetta 2 "not an emulator," given that it's software-emulating x86 arch and actually comes with a noticeable slowdown, not that emulators need to have big overhead.


> Every new user seems to think Wine is an emulator

Most new Linux users think that Linux = Ubuntu; does that make it correct? Beginners come with misconceptions and are then corrected during their learning process; that's how it works.

> Maybe if that many people are mistaken, they're actually right.

This argument is fabulous, it's like fractally broken, let's have a little bit of fun with it:

I'm pretty sure Wine is niche enough that there are more flat-earthers on this planet than people believing Wine to be an emulator; does that number make them right?

And how about the other people, the majority who know Wine isn't an emulator, would you say “maybe if that many people are correct they're actually wrong”?

> I especially don't know who's calling Rosetta 2 "not an emulator,"

Well, Apple.

You're correct though that Rosetta is arguably an emulator (except a really sophisticated one, with hardware support to make it fast enough), unlike all the others (which, interestingly enough, you don't address), but if you read my comment again you'll see no contradiction (hint: the key word in that sentence is "market").


Yes, those people are writing hardware emulators. Doesn't mean they're the only "real" kind.

As for my Intel CPU, it isn't pretending to be another kind of CPU. Intel makes a leading implementation of x86. Wine follows Microsoft's Windows implementation and translates to Linux calls, and the entire point is so you can run programs intended for Windows on Linux instead. You can get relative about it, but they're not really the same; they're clearly different. Either way, it doesn't support WINE's acronym.


> Yes, those people are writing hardware emulators. Doesn't mean they're the only "real" kind.

They're the only kind that doesn't involve a super broad definition of what the word “emulation” means (so broad that most computing actually fits in this definition).

> As for my Intel CPU, it isn't pretending to be another kind of CPU.

Until one day you realize that you can run an x86 program on a 64 bit CPU (but fortunately, this isn't done through “emulation” proper either).


> Until one day you realize that you can run an x86 program on a 64 bit CPU (but fortunately, this isn't done through “emulation” proper either).

It's reasonable to call that emulation if there's a separate layer for it translating to/from x86-64, rather than the hardware specifically supporting -32. My CPU isn't doing that cause I'm not running 32-bit software on macOS.


So AMD CPU’s are emulators then?


No they're not.


The Wii emulator is emulating the whole system including the CPU; WINE describes itself as not an emulator specifically because it's not doing anything about the CPU (hence not working on ARM Linux without extra work).

(I'm not 100% sold on this; I think "CPU emulator" and "ABI emulator" are both reasonable descriptions, albeit of different things, but that's the distinction that the WINE folks are making.)


By any well-known definition of an emulator, like https://en.wikipedia.org/wiki/Emulator, Wine is an emulator. It's emulating a Windows system to Windows programs. It's not emulating hardware is all. That WINE acronym, other than being a slightly annoying way to correct people, is wrong. Reminds me of Jimmy Neutron smartly calling table salt "sodium chloride" when really he's less correct than the layman he's talking to, since there are additives.

WINE should simply stand for WINdows Emulator.


According to the Wikipedia link given, emulation is for host and target computer systems. In a computing context, 'emulate' is not a synonym for 'impersonate' - it describes an approach for a goal.

Wine does not aim to emulate a "Windows computer system" (aka a machine which booted into Windows). For instance, it doesn't allow one to run windows device drivers. WINE is taking an approach that ultimately means it is doing a subset of what is possible with full emulation (hopefully in return going much faster).

To put it another way, a mobile phone app with a HP49 interface that understands RPN is not necessarily a calculator emulator. It could just be an app that draws a calculator interface and understands RPN and some drawing commands. It doesn't necessarily have the capability to run firmwares or user binaries written for the Saturn processor.


The people who wrote Wine thought it was an emulator. Their first idea for a name was "winemu" but they didn't like that, then thought to shorten it to "wine".

The "Wine is not an emulator" suggestion was first made in 1993, not because Wine is not an emulator but because there was concern that "Windows Emulator" might run into trademark problems.

Eventually that was accepted as an alternative meaning to the name. The Wine FAQ up until late 1997 said "The word Wine stands for one of two things: WINdows Emulator, or Wine Is Not an Emulator. Both are right. Use whichever one you like best".

The release notes called it an emulator up through release 981108: "This is release 981108 of Wine, the MS Windows emulator".

The next release changed that to "This is release 981211 of Wine, free implementation of Windows on Unix".

As far as I've been able to tell, there were two reasons they stopped saying it was an emulator.

First, there were two ways you could use it. You could use it the original way, as a Windows emulator on Unix to run Windows binaries. But if you had the source to a Windows program you could compile that source on Unix and link with libraries from Wine. That would give you a real native Unix binary. In essence this was using Wine as a Unix app framework.

Second, most people who would be likely to use Wine had probably only ever encountered emulators before that were emulating hardware. For example there was Virtual PC from Connectix for Mac, which came out in 1997, and there were emulators on Linux for various old 8-bit systems such as NES and Apple II.

Those emulators were doing CPU emulation and that was not fast in those days. It really only worked acceptably well if you were emulating hardware that had been a couple of orders of magnitude slower than your current computer.

Continuing to say that Wine is an emulator would likely cause many people to think it must be slow too and so skip it.


One might say it emulates windows library and system calls.


Or they might say that it translates windows library and system calls.


Same way a CPU emulator translates instructions.


Much like one could say it internets Windows userland and system calls with Linux and its userland.


No, not at all like that. One slight difference is that what you said is complete nonsense.

I'm using an accepted definition of emulation:

> emulate (transitive verb)

> To imitate the function of (another system), as by modifications to hardware or software that allow the imitating system to accept the same data, execute the same programs, and achieve the same results as the imitated system.

This usage is pretty common; for example, FreeBSD has a Linux emulation layer that takes Linux syscalls and translates them into FreeBSD syscalls. WINE saying it stands for "Wine Is Not an Emulator" is irrelevant to the fact that it blatantly is one.


Yes, nonsense was the point. Congrats.


A lot of high level emulation works this way as well. And, similarly, FPGA cores often aren't "simulating game hardware" either.


Surely you see that this is a slight semantic distinction, unless you consider Wine apps to be Linux native?


Have you actually tried to run the Windows version of Factorio through Proton and experienced slowdowns? In my experience, WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it), unless there are issues related to graphics API translation which is a separate topic.


I believe OP is referring to the fact that Factorio has some optimizations on Linux, such as using fork() to autosave (which is copy-on-write, unlike its Windows counterpart), which results in zero stuttering during autosaves for large-SPM bases.
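
A minimal sketch of that fork()-based trick (not Factorio's actual code; the names here are made up): the child process inherits a copy-on-write snapshot of the game state and serializes it while the parent keeps simulating.

    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    struct game_state { int tick; /* ... the whole simulation ... */ };

    static void autosave(const struct game_state *state)
    {
        pid_t pid = fork();
        if (pid == 0) {                          /* child: CoW snapshot of *state */
            FILE *f = fopen("autosave.bin", "wb");
            if (f) { fwrite(state, sizeof *state, 1, f); fclose(f); }
            _exit(0);                            /* never return into the game loop */
        }
        /* parent (pid > 0): continue simulating immediately; the child is
         * reaped later, e.g. with waitpid(..., WNOHANG) on a later tick. */
    }

    int main(void)
    {
        struct game_state state = { 0 };
        for (state.tick = 1; state.tick <= 200000; state.tick++) {
            if (state.tick % 100000 == 0)
                autosave(&state);
            waitpid(-1, NULL, WNOHANG);          /* opportunistic reaping */
        }
        return 0;
    }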


I’m a huge fan of Wine, but there’s no reason to run Factorio under it. The native Linux version works flawlessly from my experience, and I’m even using a distro where problems with native binaries are common.


> WINE doesn't result in a noticeable slowdown compared to running on Windows natively (without "emulationware" as you call it)

That says very little about whether there'd be a noticeable slowdown compared to running a Linux-native version on Linux natively, though.


> unless there are issues related to graphics API translation

But there almost always are problems in that area. I've never had a game run with the same FPS and stability in Wine vs natively in Windows.


My point was it's not inherent to WINE. There's a lot of Windows games which support Vulkan or OpenGL, those work excellently.


Csgo is OpenGL and doesn't work excellently in my experience, but yeah, in theory they should.


CS:GO is Direct3Dish with an OpenGL shim (or alternatively with a better Vulkan shim now).


Exactly the same fps, unlikely.

But Phoronix has tons of benchmarks showing that WINE/Proton are in the same ballpark as the native Windows version, sometimes a bit slower but just as often a bit faster.


Wine Is Not an Emulator


Sure, but it's also WINdows Emulator


No, it's not, because it's not an emulator


"In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest)."

Sounds like Wine.


Wine isn't emulating a computer. It's another implementation of the Win32 API.

Just like Linux isn't a Unix emulator.


There is a Linux emulator for Unix (namely FreeBSD), and it's one of the first examples in the Wikipedia article on emulation. https://www.openbsd.org/60.html shows OpenBSD removing "Linux emulation."

Linux was never designed to be the same as Unix, just similar. But it's more popular now, so Unix users found utility in pretending to run Linux.


Well you know.. I guess I'm OG when it comes to naming.


I believe it now actually is again, or was, or something. It's a question when dealing with Win16 and now Win32 (Windows-on-Windows).


Ha Perfect!

Yes, wine is a Windows binary executor with library translation.

https://wiki.winehq.org/FAQ#Is_Wine_an_emulator.3F_There_see...


Except sometimes on aarch64


I've been using wine and glibc for almost 20 years now and wine is waaaay more unstable than glibc.

Wine is nice until you try to play with sims3 after updating wine. Every new release of wine breaks it.

Please use wine for more than a few months before commenting on how good it is.

It's normal that with every new release some game stops working. Which is why Steam offers the option to choose which Proton version to use. If they all worked great, one could just stick to the latest.


As someone who's been gaming on Proton or Lutris + Raw Wine, I'm not sure I agree. I regularly update Proton or Wine without seeing major issues or regressions. It certainly happens sometimes, but I'm not sure it's any worse of a "version binding" problem than a lot of stuff in Linux is. Sure, sometimes you have to specifically use an older version, but getting "native" linux games to work on different GPU architectures or distros is a mess as well, and often involves pinning drivers or dependencies. I've had games not run on my Fedora laptop that run fine on my Ubuntu desktop, but for the most part, Wine or Proton installed things work the same across Linux installs, and often with better performance somehow.


I specifically mentioned the sims3. That one is constantly broken by updates.

Also, Age of Empires 2 HD, after working fine with Wine/Proton for a decade, doesn't work with the latest Proton for me.


Sure, I'm not contesting that Wine breaks things with updates. So does a lot of stuff on Linux. The amount of times I run an apt update and some config file is now obsolete or just gone is a lot more often than I'd like.

The advantage is that the Wine Ecosystem seems to realize this more than the Linux ecosystem at large, and specifically makes it easy to pin versions and never update. If it worked, why update? Or why not roll back? I'm already used to having to do that with every other part of linux gaming including my graphics drivers...


> If it worked, why update?

For multiplayer games, which nowadays get updated every day or something, and old versions are incompatible.


I'm not asking why you'd update the game, I'm asking why you'd update Wine if your game "is constantly broken by updates".

It's one thing if the game keeps updating and then you have to hope it works on the same version of Wine or re-find which version of Wine the new version of the game works with, but presumably that's not the problem the OP is having with The Sims 3.

If the game works on a specific version of Wine, why would you mess with it? Or if you are, then treat it like any major OS update and back up/be ready to roll back if it breaks something. Wine is especially good at letting you make multiple sub-environments, so it's not like your whole system has to be on the same version of Wine.


You update Wine to fix game A, and it breaks game B. Or something small is broken in a game, so you try to update to fix it.


That's not how Wine or Proton works in my experience. As OP said, Steam and Lutris both have tooling to easily set up Wine prefixes per-game, including specifying specific versions, including custom compilations like GloriousEggroll's builds. In general, you can flip between versions comfortably and easily, often without having to do anything more than changing a value in a GUI.

I could see it being an issue if you were managing your Wine prefixes by hand, but that's like deciding to install your OS dependencies without apt or dnf and then complaining that Linux has bad packages because you chose not to use a package manager.

I literally play multiple online games in Wine/Proton on a daily basis, including Path of Exile, FF14, WoW, and Payday 2. The only one I've had major issues outside of general performance with is FF14 and that's because it integrates closely with Steam and their launcher uses a super outdated version of .NET that Wine hasn't worked out how to emulate. It's been broken for years, and is a known and documented issue with Wine.


And Aoe2 HD broke with every game update in Wine. I had to keep patching in different DLLs. Gave up one day. It's worse than the original game anyway.


I used to say that Wine makes Linux tolerable, but after using it for several years I've concluded that Wine makes Windows tolerable.


Absolute opposite experience for me. The native versions of Half-Life, Cities: Skylines and a bunch of other games refuse to start up at all for me for a few years now. Meanwhile I've been on the bleeding edge of Proton and I can count the number of breakages with my sizeable collection of working Windows games within the last couple of years on one hand. It's been a fantastic experience for me with Proton.


Mind saying what the error is? Linker problem?


Not sure, if I'm honest. Starting up Steam through the terminal and launching the game doesn't give me anything in the logs indicating the reason, which is weird. I'm using Arch and have tried both steam and steam-runtime already.


It works fine for me. If it was something with glibc it wouldn't work for me.


Half life works fine for me on latest kernel, latest glibc.

Probably you have different unrelated issues.


> Please use wine for more than a few months before commenting on how good it is.

I’ve used it for several years, and even to play Sims 2 from time to time, and while I’ve had issues the experience only gets better over time. It’s gotten to the point where I can confidently install any game on my Steam library and expect it to run. And be right most of the time.


But not Sims 3.

Age of Empires DE will not work with Proton. And it's not really a top-notch graphics game.


I'm not entirely sure what the point of updating Wine is in the first place. If you have a version that works with the game you're trying to play, why not pin it? Things definitely tend to break with Wine upgrades by nature of what Wine does, that's why it's common for people to have multiple versions of Wine installed.


Y'all sure you don't have Sims 2 and Sims 3 mixed up? Sims 3 is rated Platinum on AppDB / Gold on ProtonDB (and I've run it on Proton on multiple machines and distros without issue), whereas Sims 2 had a Garbage rating on AppDB for the longest time (apparently the Origin version is Silver now, but still).


I am most definitely talking about Sims 2. I play the Origin version through Lutris, from when they gave it away for free.


> Every new release of wine breaks it.

Is there any way to easily choose which Wine version you use for compatibility? Multiple Wine versions without VMs etc?


Steam lets you do that, but I think it's a global setting and not per game.

Debian normally keeps 2 versions of wine in the repositories, but if none of those 2 work, you're out of luck.


> if none of those 2 work, you're out of luck

That's not true. There are multiple tools for managing multiple versions of Wine and Wine-related tools for gaming, the oldest one being PlayOnLinux. Lutris is the most widely used one, and works great in my experience.


This is wrong. Steam lets you choose a Proton (Wine) version per-game.


> Developers natively porting to linux seem to be wasting their time.

The initial port of Valve's Source engine ran 20% faster without any special optimizations back in the day. So I don't see how the effort is wasted.


Isn’t part of the original point not just that Wine is a perfect (dubious, imo) compatibility layer, but that distributing a native port is cumbersome on the Linux ecosystem?


I don't buy the cumbersomeness argument for Linux games. A lot of games, both past and present, have been distributed as Linux binaries. Most famously, the Quake and Unreal Tournament series had native Linux binaries on disc, and they worked well until I migrated to 64-bit distributions. I'm sure they'll work equally fine if I multi-arch my installations.

Many of the games bundled by HumbleBundle as downloadable setups have Linux binaries. Some are re-built as 64 bit binaries and updated long after the bundle has closed.

I still play Darwinia on my 64 bit Linux system occasionally.

Most of these games are distributed as .tar.gz archives, too.

I can accept and respect not creating Linux builds as engineering (using Windows only APIs, libraries, etc.) and business (not enough market) decisions, but cumbersomeness is not an excuse, it's a result of the other related choices.

In my book, if a company doesn't want to support Linux, they can say it bluntly, but saying "we want to, but it's hard, and this is Linux's problem" doesn't sound even remotely sincere.


> have Linux binaries

In January, HumbleBundle removed Linux support from their Trove [1].

HumbleBundle support points out that it's not simple [2]:

> While it is entirely possible to install and run games and programs from the Linux GUI, implementation across distros can be wildly different. For this reason, this guide will explain how to install and launch games using the Terminal.

And, usually, only a small list of distributions are supported, like Ubuntu or Mint. For example, Bundle 9 [3].

All of the above seems to support the "cumbersomeness argument for Linux games".

1. https://kotaku.com/latest-humble-bundle-change-leaves-mac-li...

2. https://support.humblebundle.com/hc/en-us/articles/219377857...

3. https://support.humblebundle.com/hc/en-us/articles/115011722...


Humble Bundle has been focusing only on profit and ignoring their original values ever since it was sold to IGN. It's not surprising that they would cut support for something only a small percentage of their users use (remember, they are primarily a Steam key reseller now). That has nothing to do with Linux support being cumbersome, just with any OS support benefiting from economies of scale, which makes minority platforms less lucrative.


The irony is, Linux users were the highest paying customers before the transfer to IGN.

Their contribution easily made up to half of all sales, money-wise.


Reference? For Steam, Linux gaming just broke 1% this year [1]. I don't see how that math could work.

1. https://www.phoronix.com/news/Steam-Survey-January-2022


When they were publishing pie charts and made you fill in a little questionnaire every time you bought a bundle, the stats showed that Linux users always paid a little more than the average, and this added up, causing Linux users to pay a much greater sum by the end of the bundle, in terms of total money spent.

I'm not sure I can find the charts now (they were live and per bundle), but I track Humble since day 1. I remember it well.


Some of the earlier Humble Bundles had Linux users contributing about ¼ of the funds but I don't think it was ever higher than that. [0] These early bundles were also before Steam for Linux was a thing so were a major source of new Linux ports (relative to the number of available native games).

[0] https://cheesetalks.net/humble/


> In January, HumbleBundle removed Linux support from their Trove [1].

Yet, I have downloaded all new builds for a couple of games for amd64 from one of their oldest bundles.

It's not as clear cut as it seems.


> I can accept and respect not creating Linux builds as engineering (using Windows only APIs, libraries, etc.) and business (not enough market) decisions..

Yep, and if somebody made your game work under Wine/Proton or Bottles/Heroic, just pay them to support it or create an AppImage/Flatpak and that's the easiest way to get compatibility.


Basically, game developers never once booted Linux.

They'd need to do some learning and don't want to. They might have superficially read something about distributions and think that software cannot run on two different distributions.

I've read this excuse time and time again. And saying it tells me that the person never actually tried to compile anything on linux.


On the contrary, they are more than used to POSIX stacks on macOS, iOS, PlayStation, Nintendo, Android, ChromeOS....

Yet porting to GNU/Linux is not worthwhile.


I kindly disagree with you. Most of the platform provided by a console is nicely abstracted with SDKs. The code they touch is the SDK, which provides direct, tailored access to the utilities and capabilities of the platform itself.

Even the Linux binaries of the AAA titles of the day were ported by a few talented developers, sometimes from outside the studio.

I remember the porting of Unreal Tournament to Linux was an official effort, but the work of a single guy.

So, I don't expect studio-wide POSIX knowledge on game studios.


The point was that even with lower barriers there is no economic interest in doing the work.


So, it's as I said. The hurdle is not technical. The studios just prefer not to, and I'm OK with that.


It is hard to prefer something that usually doesn't provide profits.


Are you really surprised the consumer market isn't buying a product that doesn't exist?


macos? The same OS which decided no more 32bit binaries because of reasons? The same OS which decided no more opengl because of reasons? That OS?

No gamer uses that OS. If you think it's hard on linux, it's much harder on osx.

android is not posix, that is completely hidden from the developer.

chromeos is just linux + google tracking.

consoles are a complete separate world.

Please let's try to have a serious conversation. Game developers that use frameworks such as unreal are often unaware of how the underlying system works.


>No gamer uses that OS. If you think it's hard on linux, it's much harder on osx.

You are conflating "gamer" with "game developer". Don't forget that millions of game devs successfully build and launch games for iOS.

>consoles are a complete separate world.

The PS4 is literally FreeBSD.


> Don't forget that millions of game devs successfully build and launch games for iOS.

With XCode, and tons of visual tools which abstract the OS and device to a great extent, making it almost invisible.

> The PS4 is literally FreeBSD.

Which is also shipped with nice SDKs for everything and anything related to PS4.


Same can be applied to GNU/Linux and most studios just don't care.


Yeah, instead of SDL + OpenGL/Vulkan, game studios use DirectX + NVIDIA Game Works, which provides a nice, cozy walled garden and vendor lock-in.

Win-Win for Microsoft and NVIDIA.

Again, it's just market powers and path of least resistance. It's not technical.


Of course it is not technical, there is no money to be made for most game studios.


You want me to believe that on iOS it never happens that Apple breaks support for older software, like they do all the time on computers?


It happens, but there is an economic advantage in putting up with such breakages, while on GNU/Linux systems that money bag isn't there to keep game studios interested other than testing waters and running away afterwards.


Yes of course there is: reselling the same game over and over.


Game developers...


FWIW, targeting proton is likely the best platform target for future Windows compatibility too.


There are plenty of examples of things being the other way around. For example, heavily modding Kerbal Space Program basically necessitated running Linux because that's the only platform that had a native 64-bit build that was even remotely stable (this has since been fixed, but for the longest time the 64-bit Windows version was horrendously broken) and therefore the only platform wherein a long mod list wouldn't rapidly blow through the 32-bit application RAM ceiling.


This wasn't a problem with the game itself. It's their anti-cheat malware that stopped working. On Windows these things are implemented as kernel modules designed to own our computers and take away our control so we can't cheat at video games.

It's always great when stuff like that breaks on Linux. I'm of the opinion it should be impossible for them to even implement this stuff on Linux but broken and ineffective is good too.


Coincidentally, Win32 is also the only stable API on Windows.

WinForms and WPF are still half-broken on .NET 5+, WinRT is out, basically any other desktop development framework since the Win32 era that is older than a couple of years is deprecated. Microsoft is famous for introducing new frameworks and deprecating them a few years later. Win32 is the only exception I can think of.


I was gonna say, I think Win32 is the only stable API full stop. Everything else is churn city.


Yeah. And MFC on top makes it a bit more chewable :3


.NET Framework is still there by default out of the box, and still runs WinForms and WPF like it always did.


Which version of it? 1.0?


Ironically, 1.0 is the one that's the most problematic, because it never shipped in any version of Windows by default.

.NET 4.6 is the version that shipped in Win10 back in 2015. Win11 ships with .NET 4.8. Thus, both OSes can run any apps written for .NET 4.x out of the box. I would expect this to remain true for many years to come, given that Windows still supports VB6.

.NET 3.5 runtime (which supports apps written for .NET 2.0 and 3.0, as well) is also available in Win10 and Win11, but it's an optional feature that must be explicitly enabled - although the OS will automatically prompt you to do so if you try to run a .NET app that needs it.


4.8x ships and still gets security updates


Microsoft insisted on introducing a bunch of breaking changes into .NET Core (and all future .NET versions), making 4.8 a "dead end" in which many enterprise customers of mine have become stuck.

ASP.NET Web Forms sites are especially stuck, and on top of that I have customers that have developed Web Forms sites in VB.NET! Luckily there are some good bulk-conversion tools available now, but still, there is no smooth upgrade path for a lot of popular systems.

Similarly, Windows Communication Foundation, Workflow Foundation, and a bunch of other popular libraries or frameworks are dead in the water. SAP Crystal Reports is surprisingly common, but doesn't even have an official NuGet package!


That's a different story, though. The point is that old apps written against .NET Framework still work - you don't have to port them to .NET 5+ to get them to run on Win11.


Yes, they will run, as will most VB6 apps from 1998.

But if you have developed a .NET Framework app that you want to see living into the future a decade or more from today, you need to know that maintaining it is going to be increasingly painful as time goes on. You are locked to an old version of the C# language, and you absolutely can't count on third-party dependencies to stay supported. At some point MS might well decide to drop tooling for .NET Framework development in new releases of Visual Studio.


>"Coincidentally, Win32 is also the only stable API on Windows"

And this is what I use for my Windows apps. In the end I have self contained binaries that can run on anything starting from Vista and up to the most up to date OS.


Honest question, do you get HiDPI support if you write a raw win32 app nowadays? I haven’t developed for windows in over a decade so I’ve been out of the loop, but I also used to think of win32 being the only “true” API for windows apps, but it’s been so long that I’m not sure if that opinion has gotten stale.

As a sometimes windows user, I occasionally see apps that render absolutely tiny when the resolution is scaled to 200% on my 4k monitor, and I often wonder to myself whether those are raw win32 apps that are getting left behind and showing their age, or if something else is going on.


IIRC yes but you have to opt into it with a manifest (or an API call).
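
For example (a sketch, not taken from the comment above): the code-based opt-in has existed since Vista as SetProcessDPIAware(), although the manifest route (<dpiAware>/<dpiAwareness>) is generally preferred because it takes effect before any window is created.

    #include <windows.h>

    int WINAPI WinMain(HINSTANCE inst, HINSTANCE prev, LPSTR cmdline, int show)
    {
        /* Without this call (or the equivalent manifest entry), the system
         * assumes 96 DPI and bitmap-stretches the whole window, which gives
         * the blurry or mis-sized rendering described above. */
        SetProcessDPIAware();

        /* ... create windows; size things from the actual DPI, e.g.
         *     int dpi = GetDeviceCaps(GetDC(NULL), LOGPIXELSX); ... */
        return 0;
    }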


I use Delphi for my "official" Windows desktop applications. They have a GUI library that wraps Win32. Seems to handle HiDPI just fine.

I do use raw Win32 but not for GUI stuff.


GDI32 has always had support for configurable screen DPI. You used to be able to customize it in Win95 but they hid the setting because too many poorly developed applications were written with inflexible pixel based dimensioning. If you lay everything out in twips it will scale without any special effort.
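
A sketch of what that looks like in plain GDI (the helper names are made up): express sizes in points or twips and convert through the device's reported DPI instead of hard-coding pixels.

    #include <windows.h>

    /* 72 points per logical inch; LOGPIXELSY reflects the user's DPI setting,
     * so a 10pt dimension comes out larger in pixels on a HiDPI display. */
    static int points_to_pixels(HDC hdc, int points)
    {
        return MulDiv(points, GetDeviceCaps(hdc, LOGPIXELSY), 72);
    }

    /* Twips are 1/20 of a point, so the same conversion divided by 20. */
    static int twips_to_pixels(HDC hdc, int twips)
    {
        return MulDiv(twips, GetDeviceCaps(hdc, LOGPIXELSY), 72 * 20);
    }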


WinForms is just mostly a managed Win32 wrapper so unsurprisingly it’s very stable on the OS frameworks (4.X).

Building for .NET Framework using any of its APIs is extremely stable, as development has mostly ceased. You pick a target framework depending on how old a Windows version you must support.


WinRT lives on as WinAppSDK.


Metro lives on as UWP lives on as WinRT lives on as Project Reunion lives on as WinAppSDK.

Exactly the point the OP was making. Win32 is stable.


It is all COM, all those marketing names only reflect a set of interfaces, using .NET metadata instead of TLB files and processes being bound to an App Container.

Win32 is stable indeed, raw Win32 is stuck in Windows XP API surface, most stuff that came afterwards is based on COM.


The names aren't getting better either ...


Since when is .NET 5+ part of Windows?


Since MS decided to deprecate .NET Framework, making .NET 5+ the recommended basis for C# desktop development going forward. Yes you will still be able to run your old apps for many decades still, but you can never move to a newer version of the C# language and maintaining them is going to be an increasing pain as the years go by. I've already been down this road with VB6.


And .NET 4.8 is still installed by default on Windows 11 and will presumably happily run your WPF app if you target it.


.NET 4.8 might be one of those things they won't be able to get rid of.


Windows 11 still ships msvbvm60.dll - that's the runtime for Visual Basic 6, released back in 1998. And it is officially supported:

https://docs.microsoft.com/en-us/previous-versions/visualstu...


I recently experienced this in a critical situation. Long story short, something went very wrong during a big live event and I needed some tool to fix it.

I downloaded the 2 year old Linux binary, but it didn't run. I tried running it from an old Ubuntu Docker container, but there were dependencies missing and repos were long gone. Luckily it was open source, but compiling was taking ages. So in a case of "no way this works, but it doesn't hurt to try" I downloaded the Windows executable and ran it under Wine. Worked like a charm and everything was fixed before GCC was done compiling (I have a slow laptop).


I have personally used containers for this reason to set up my gaming environment. If something breaks, all I need to do is run an older image and everything works.


Notes:

"EAC" is Easy Anti Cheat, sold by Epic.[1] Not EarthCoin.

"EOS", in this context, is probably Epic Online Services, not one of the 103 other known uses of that acronym.[2]

Here's a list of the games using those features.[3]

So, many of these issues are for people building games with Epic's Unreal Engine on Linux. The last time I tried UE5, after the three hour build, it complained I had an NVidia driver it didn't like. I don't use UE5, but I've tried it out of curiosity. They do support Linux, but, as is typical, it's not the first platform they get working. Epic does have support forums, and if this is some Epic problem encountered by a developer, it can probably be fixed or worked round.

Wine is impressive. It's amazing that they can run full 3D games effectively. Mostly. Getting Wine bugs fixed is somewhat difficult. The Wine people want bugs reported against the current dev version. Wine isn't set up to support multiple installed versions of itself. There's a thing called PlayOnLinux which does Wine version switching, but the Wine team does not accept bug reports if that's in use.[4] So you may need a spare machine with the dev version of Wine for bug reproduction.

[1] https://www.easy.ac/en-us/

[2] https://acronyms.thefreedictionary.com/EOS

[3] https://steamcommunity.com/groups/EpicGamesSucks/discussions...

[4] https://wiki.winehq.org/Bugs


> Wine isn't set up to support multiple installed versions of itself.

huh? the official wine packages for ubuntu, debian, and i believe fedora provide separate wine-devel and wine-staging packages, which can be installed in parallel with each other and with distro packages. in fact, debian (and ubuntu) as well as arch provide separate wine and wine-staging packages as part of the distro itself, no separate repo required.

wine has no special support for relocated installations, but no more or less so than any large Unix program; you can install as many copies as you want, but they must be compiled with different --prefixes, and you cannot use different versions of wine simultaneously with the same WINEPREFIX.


Oh, that's good to know. Thanks.


Related:

Win32 is the stable Linux userland ABI (and the consequences): https://news.ycombinator.com/item?id=30490570

336 points, 242 comments, 5 months ago


Without getting into spoilers, I'll say that playing "Inscryption" really got me thinking about how Docker's continued development could help consumers in the gaming industry.

I would love to see games being virtualized and isolated from the default userspace, with passthrough for graphics and input to mitigate latency concerns. Abandonware could become a relic of the past! Being able to play what you buy on any device you have access to would be amazing.

I won't hold my breath, though. The industry pretty loudly rejected Nvidia's attempt to let us play games on their cloud without having to buy them all again. Todd needs the ability to sell us 15 versions of Skyrim to buy another house.


For games on Steam there's the Steam Linux Runtime which can run games on Linux in a specialized container to isolate them from these sort of bugs.

There's also a variant of this container that contains a forked version of Wine for running Windows games as well.


Doesn't the Steam Linux Runtime have a problem in the other direction though? Games are using libraries which are so old that they have bugs which are long since fixed or don't work properly in modern contexts. Apparently a lot of issues with Steam + Wayland comes from the ancient libraries in the Steam Linux Runtime from what I have been able to find out from googling issues I've experienced under Wayland.


> Abandonware could become a relic of the past!

That would eat into some business models though, like Nintendo's quadruple-dipping with its virtual consoles


Good. All those games should be in the public domain anyway. It's been 30-40 years, Nintendo has been more than adequately compensated.


Write to your representative and plead to reduce copyright (I'd pitch in for the original 14 year term).


I would certainly do that if I was american.


I agree, but some notoriously litigious companies probably will not.


Flatpak is basically Docker for Linux; there are layers and everything. What you're saying should be possible: if you make an AppImage/Flatpak out of the Steam Runtime + Proton (if needed) + game, it should run anywhere with the right drivers.


Good luck, once Wayland actually starts to be used, running any game from before Wayland.


Xwayland exists, all my games use Xwayland; there is no stable Proton/Wine implementation that uses Wayland natively.


It exists now, but it will be left to rot once (probably in another 15 to 20 years) wayland is finally ready and most software is migrated to it.


Isn't that what gamescope has been doing for quite some time?


Glibc is not Linux, and they have different backwards compatibility policies, but everyone should still read Linus Torvalds' classic 2012 email about ABI compatibility: https://lkml.org/lkml/2012/12/23/75 Teaser: It begins with "Mauro, SHUT THE FUCK UP!"


man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team


The context, from Mauro’s previous message:

> Only an application that handles video should be using those controls, and as far as I know, pulseaudio is not a such application. Or are it trying to do world domination? So, on a first glance, this doesn't sound like a regression, but, instead, it looks tha pulseaudio/tumbleweed has some serious bugs and/or regressions.

Style and culture are certainly open for debate (I wouldn’t be as harsh as Linus), but correcting a maintainer who was behaving this way towards a large number of affected users was warranted. The kernel broke the API contract, a user reported it, and Mauro blamed the user for it.


I'd like to mention that Mauro is a very nice person. I worked briefly with him when I submitted some patches to ZBar and it was the best experience I've had contributing to open source to this day. He gave me feedback and I got to learn new stuff such as D-Bus integration.


When this comes up in conversation it is worth remembering that Linux was built by a team of volunteers centered around Torvalds, who was famous for not acting like a jerk. Really. The perception of him among hackers as a good guy you could work with, who acknowledged when Linux had bugs, accepted patches and was pretty self-effacing, is probably the thing that most made that project, at that time, take off to the stratosphere. Linus was a massive contrast to traditional bearded unix-assholery.

The nature of the work changes. The pressures change. The requirements change. We age. Also the times change too.

But yeah, it is possible to act like a jerk sometime without actually being a jerk in all things. It is also possible to be a lovely person who makes the odd mistake. Assholes can have good points. Life is nuanced.

Of the bajillion emails Linus has sent to lkml, how many can you find that you believe show evidence of him being a jerk?

Compare to Theo de Raadt at OpenBSD, who has also built a pretty useful thing with his community. Compare also to Larry Wall and Guido van Rossum.

None of us is above reasoned, productive criticism. Linus has done ok.


It’s not my personal style, but there are plenty of high functioning teams in different domains headed by leaders who communicate like Torvalds. From Amy Klobuchar throwing binders (https://www.businessinsider.com/amy-klobuchar-throwing-binde...) to tons of high level folks in banking, law firms, etc.

Put differently, you can construct a high functioning team composed of certain personalities who can dish out and take this sort of communication style without burning out on it.


I've definitely seen teams that were low functioning because they were so worried about consensus and upsetting someone else that no one ever criticized any decisions any team member made even if they were both impactful and objectively terrible.


This is worse


You've just described Japanese work culture.


That’s too cliche. As a counterpoint, HP and IBM are US firms and also get no shit done. And Japanese groups have their share of jerks, in spades.

Issue is not jerks vs consensus, it’s way more complicated than that.


If you mean cliché as in "widespread and pre-eminent" then I agree, because that's a fair description of Japanese work culture. The jerk side of it is taken care of by the hierarchy: those above you may be a jerk to you, while those on a similar level will instead use passive-aggressive tactics but are incredibly unlikely to be a Torvalds-style jerk.

If you want to point out where the extra complication is, I'd love to know.


The original comment was "no one ever criticized any decisions any team member made even if they were both impactful and objectively terrible."

That is just false at every level you can imagine. I mean, could a developed country even function under these conditions? Every team member in most companies would need to get it right on the first try a crazy amount of the time for the company to have any reasonable profits; that's just crazy.

On the jerkitude, yes you need to calibrate for the culture observed. They won't be shouting insanities in the open-space, but it's also way more direct than passive aggressiveness: spending 15 mins getting lectured by a coworker at your desk on why your last report is full of errors and you're not pulling your weight is basically as effective in context.


> I mean, could a developed country even function under these conditions?

Have you worked in Japan?


You need to go below the surface. That guy telling you he's just following the orders just doesn't give a shit about what you're trying to do and wants you to go away.


So that's a no.

From [1]:

> Fear of Decisions

> Decisions are the first step to failure, and nobody wants to fail. But decisions must be made. How does this dichotomy resolve itself? Meetings. Endless meetings and emails, planning documents, pre-planning documents, post-planning documents, meeting documents, and endless discussion of all the things by all the people all the time. The thinking goes, if everyone is involved in the decision-making process, then when something inevitably goes wrong, there’s no individual person to blame! Problem solved.

Also from [1]:

> Crowdthinking

> If you are in a group of Japanese people and ask a question, you will sometimes witness the following sequence of events:

> Everybody looks around at other people

> One person begins to suggest something slightly

> Slight or emphatic agreement in a domino-effect across the group

> The need to protect social harmony is so deeply ingrained in society that sometimes even in friendly events this will happen, not just at work. This often works well for social questions, but anything work-related will probably best be asked 1:1.

From [2]:

> “It’s not only about the etiquette,” says Yuko Morimoto, a consultant with Japan Intercultural Consulting, a Japan-focused firm that helps foreign companies work effectively with each other. What’s really important is understanding the different styles of communication that different cultures have. Like American’s reputation for being direct. And the Japanese’ predilection for what Morimoto says is just the opposite. The Japanese, say Morimoto, often say no to saying no.

> “They feel hesitant to say I don’t like your product,” Morimoto says. “So they say something like, ‘Oh that’s a good idea. Let us think about it.'”

> “Yes” in Japan doesn’t mean the same thing as “yes” in English. Instead, notes Morimoto, it could mean, “We just met, and I don’t think it’s polite for me to say no right away.”

> “Or they say, ‘Yes, yes.’ But yes means, ‘Yes, I’m hearing you.’ It doesn’t necessarily mean, ‘Yes, I like it,’” she says. “It can be yes-yes, or it can an iffy-yes, or it can be a no-yes.”

> To decipher what’s really being said, you need more information, Morimoto says. Was another meeting scheduled? Was a price agreed on? Was a contract signed? The Japanese, she notes, are more risk-averse than Americans. They want consensus. So you can expect that a Japanese company will take its time making decisions and making sure everyone is on board with them.

Finally, because I think I've helped you enough, every Thursday this[3] Reddit sub publishes a complaint thread. You can learn an awful lot about Japanese work culture from it, and from the regular posts about strange work behaviour, without ever having to leave your home.

[1] https://xevix.medium.com/gaijin-engineer-in-tokyo-aaa9be8919...

[2] https://www.marketplace.org/2015/08/11/world/etiquette-and-r...

[3] https://teddit.net/r/japanlife/


That's a lot of sources and I appreciate the effort, but r/japanlife is also a cesspool of people who basically have difficulty adapting to a foreign culture. It's not a phenomenon specific to Japan; in any country there will be a pool of foreigners staying within their community and sharing/taking advice internally instead of participating in native communities.

Tales of the NHK guy coming for the subscription are a good example of that spirit. Every soul in Japan gets NHK visits, just tell your friends about it. Why does that need to end up on reddit?

On the "gaijin in Tokyo" experience, I was in an US firm and fucking meeting hell with 20 people in the room and no decision taking within weeks was par for the course. That guy came to Japan for a startup gig, did he even work in the same kind of setups in his origin country ?

I had a look at the rest of the blog, and that's just basic culture shock. Sometimes I feel we should stick a name to it, like we did for the "Paris Syndrome"

From your intercultural consultant quote:

> What’s really important is understanding the different styles of communication that different cultures have.

PS: I kinda love the view from the other side, where accepting foreign workers is basically a long road of training them to fit not just in the company, but in society in general

https://www.yume-tec.co.jp/column/その他/863


> That's a lot sources and I appreciate the effort, but also r/japanlife is a cesspool of people who basically have difficulties to adapt to a foreign culture. It's not a phenomenon specific to Japan, in any country there will be a pool of foreigners staying within their community and sharing/taking advices internally instead of participating in native communities.

That may be true, but that would be to underplay the difficulty of integration in Japan specifically, especially compared to other places. The vast majority of foreigners leave within 2 years and very few settle long term; that is not a phenomenon I've seen so starkly in other places I've lived in Asia.

> On the "gaijin in Tokyo" experience, I was in an US firm and fucking meeting hell with 20 people in the room and no decision taking within weeks was par for the course.

Again, that may be true, but that doesn't mean that:

a) it's representative of US work culture

b) it's not representative of Japanese work culture

I'm British and I've worked in places that certainly have too many meetings, not enough decision making or responsibility taken, but that is nothing compared to Japan. It's simply on another level here and no amount of saying "it happened to me too elsewhere" is going to change that.


> The vast majority of foreigners leave within 2 years and very few settle long term, that is not a phenomena I've seen so starkly in other places I've lived in Asia.

I'm looking at the OECD stats here: https://stats.oecd.org/Index.aspx?DataSetCode=MIG

Looking at a pre-pandemic year (2018), that's 519,683 migrants. Korea has 495,079, and that's basically on par with the UK and twice as much as France (granted, France is about half the size by many metrics).

I don't give too much credit to the exact numbers, but Japan nonetheless has a serious number of foreigners, mostly from other Asian countries. They blend in much more than Western foreigners, so it's harder to tell at a glance. To note, there still is a culture shock, and I had Chinese coworkers pretty heated up about many aspects, but they were pretty fast to figure out how to make things work.

I don't know your life, but if you traveled enough and got invited to work in a Japanese company, I'd assume you have a relatively high profile and the company inviting you wasn't some small scrappy business.

Big enough businesses are also traditionally slow and cumbersome, they have no incentive for speed and boldness, and if you touch any of the bigger companies making significant change just requires a ton of politics. I really believe that's basically the same everywhere. People might feel it's less static in Europe or USA because of more flashy colors, diversity and more buzz, but try looking at any company of the same size as Panasonic and look at what they're actually doing, and it's usually "not much more" (Think about Ford splitting out a separate EV division from the mother structure because nothing would happen otherwise)

On the people who stay long term, I think it's extremely difficult to judge depending on your position. In particular if you joined "expats" groups and made friends there, the probability a bunch of them leave after a while is usually higher.

I only sporadically met a few foreigners here and there, but 20 years later most of them have families and built a career. But they were not the type to spend their week-end in Roppongi bars either.


From [1], in 2019:

> Resident foreigners totaled 2.22 million -- an all-time high and 1.76% of the population

That doesn't compare to the UK at all, or France. There's also no point in pointing at the inflow without the outflow, as the article points out:

> The number of immigrants to Japan minus the number of people leaving the country came to 165,000, government data released Friday shows.

So, if those OECD figures you supplied are correct, the inflow was dwarfed by the outflow which again is nothing like the UK or France - as a proportion of population or as a total. I'm not surprised. As that great reference, Wikipedia writes[2] (also using OECD figures[3], and in a much better format for comparison):

> Japan receives a low number of immigrants compared to other G7 countries.[9] This is consistent with Gallup data, which shows that Japan is an exceptionally unpopular migrant destination to potential migrants, with the number of potential migrants wishing to migrate to Japan 12 times less than those who wished to migrate to the US and 3 times less than those who wished to migrate to Canada,[10] which roughly corresponds to the actual relative differences in migrant inflows between the three countries.[9] Some Japanese scholars have pointed out that Japanese immigration laws, at least toward high-skilled migrants, are relatively lenient compared to other developed countries, and that the main factor behind its low migrant inflows is because it is a highly unattractive migrant destination compared to other developed countries.[11] This is also apparent when looking at Japan's work visa programme for "specified skilled worker", which had less than 3,000 applicants, despite an annual goal of attracting 40,000 overseas workers.

“exceptionally unpopular migrant destination” eh? Again, colour me shocked.

From [4], just the title should be enough:

> Japan’s Labor Productivity Lowest in G7

but it would still be misleading because that sounds *better* than the reality:

> Japan ranks twenty-first for labor productivity among the 36 nations of the Organization for Economic Cooperation and Development

Japanese work culture fits the cliché very well and trying to deny it is a stretch of gigantic proportions. What some bloke you met managing to build a career in Japan has to do with Japanese work culture not being deadeningly slow and moribund, only you will know.

[1] https://asia.nikkei.com/Spotlight/Japan-immigration/Japan-im...

[2] https://en.wikipedia.org/wiki/Immigration_to_Japan

[3] https://data.oecd.org/chart/5StJ You can click on the link at the top and compare even better by having it highlight the parts you care about.

[4] https://www.nippon.com/en/japan-data/h00619/japan%E2%80%99s-...


> but there’s plenty of high functioning teams in different domains headed by leaders who communicate like Torvalds

Maybe in the past. It is not acceptable now.

A lot of men from the "old days" are finding that their table thumping "plain talking" (obscenity shouting) ways are getting them sidelined and ignored.

Good.


Some of us would love to work on a team like this. It would be nice to have the option. Your definition of "acceptable" might not actually result in teams that can take on the big challenges we face as a species as men who did find this kind of thing acceptable retire out of the workforce.


We might all die but at least no one's feelings would be hurt, no matter what they did or didn't do.

I am only half joking. It's a good thing no one is being forced to work with Linus. People really need to keep that in mind


If "We've always done it this way and it's a risk to do it differently" was the argument that carried the day, few of us would have to worry about these questions at all because we'd never have gotten out from under feudalism.


> Some of us would love to work on a team like this. It would be nice to have the option

Be the change you wish to see.


You're literally responding to an Amy Klobuchar cite, with a sweeping implication that being a jerk in the workplace is a "men" thing. Wow.



It is not really a competition!

Anybody can be like that, anybody at all. And in my opinion it is a very good thing it is going out of fashion!


Touché


If a person works best in this fashion, who are you to say it is unacceptable?


Are the teams high functioning because of that, or despite that?


I would assume a little of both. I've seen weeks wasted just because someone wouldn't say "that's a bad idea". I've also seen whole projects turn to crap, and then get canceled, when people that knew better decided to remain silent, to avoid conflict.

Through my years, it seems to be increasingly rare to find disagreeable people, and that agreeableness is being favored/demanded. I'm not one to judge if it's working or not, but when I see people getting upset at managers because the manager criticized their work/explanation during the presentation of that work, which is literally meant for criticism, I know quality coming from that group will be impacted. Maybe not surprising, but many of these people are new graduates. The few "senior" people I know, like this, are from companies who are in the process of failing, in very public ways.

I think the ideal scenario is a somewhat supportive direct manager, and a disagreeable, quality demanding, manager somewhere not far above, keeping the ship from sinking.


I don't work in IT, but in the medical field. We have the advantage/disadvantage of working in many teams during our training (around 20-30). There are varying cultures in teams, and what I found was that teams with high levels of criticism / conflict generally functioned the poorest. Patient care was delivered despite the dysfunction and toxic culture, but it also created an environment where staff were unhappy, fearful of mistakes, and avoidant. The best and most effective teams I worked in had a less hierarchical structure, but were led strongly, with good team working and communication.

That's anecdote, but there's evidence that certain team styles lead to more effective work [1], and suggestion that serious failures of organisations relate to cultural workplace toxicity and leadership [2].

I've seen in the thread a slight strawman argument that 'people too timid to say what they think about something leads to poorer working' or similar. I totally agree with that, but good communication is not what we're talking about here, and people can be clear, confident and respectful.

[1] https://www.civilitysaveslives.com/theevidence [2] https://blogs.lse.ac.uk/politicsandpolicy/30566/


My unsubstantiated guess is that the kernel team has a lot of intelligent people, but not in the emotional and empathy department. And some of them are really full of themselves, so you need to get them off their high horses


That probably works if people get bribed with interesting enough projects (Linux) or money (banks, lawfirms...). Most other projects probably fall apart before you can blink an eye


Speaking about consensus - there is another thread on the HN where people complain about Android 13 UI. I guess that was built with a healthy dose of consensus.

The point is - sometimes you need a jerk with a vision so that the thing you're building doesn't turn into an amorphous blob.


You need someone with vision who enforces strict adherence to that vision. I'm not convinced you have to be a jerk to do that though.


Yes, you don't need to be a jerk to do that. Linus Torvalds used to be a jerk (perhaps still is, but I think much less so these days). Do you have a non-jerk with Torvalds' vision?


The old quote: I would trust Einstein, but I wouldn't trust a committee of Einsteins.


How is Steve Jobs getting left out of this conversation


Maybe because: if you cannot say anything good about a dead person, don't say anything.


>by god, this is not how you build consensus or a high functioning team

Says you, while criticizing Linus Torvalds from 2012. Who has a better track record of building consensus and high functioning teams ?


Says Linus Torvalds from 2018…

> My flippant attacks in emails have been both unprofessional and uncalled for. Especially at times when I made it personal. In my quest for a better patch, this made sense to me. I know now this was not OK and I am truly sorry.

> The above is basically a long-winded way to get to the somewhat painful personal admission that hey, I need to change some of my behavior, and I want to apologize to the people that my personal behavior hurt and possibly drove away from kernel development entirely.

https://lkml.org/lkml/2018/9/16/167

He still writes very frankly, but he generally doesn’t resort to personal insults like he did in the past.


That doesn't change his point about building high functioning teams in the past, though. Just that he is capable of adapting to the times. He had 27 years of career leading Linux before 2018. Successful, by any measure.


> Says Linus Torvalds from 2018…

He was successful between 2012 and 2018 with that style. The track record is still there.


I think if you take it out of context (which most people do), it looks a lot worse than it is.

A very senior guy who should've known better was trying, fairly persistently, to break a very simple rule everybody had agreed to, for a very bad reason. Linus told him to shut the fuck up.

I wouldn't say that Linus's reaction was anything to look up to, but I wouldn't say that calling the tone police is at all justified either.


I mean, the guy sent one short email before Torvalds flew off the handle; that’s hardly “trying persistently” to break a rule. I can think of a thousand assertive ways to tell the guy to shut up that wouldn’t have required behaving like an angry toddler.


> by god, this is not how you build a consensus or a high functioning team

I beg to differ. Linus Torvalds is an example for us all, and I’d argue he has one of the most, if not the most, highly functioning open source teams in the world. The beauty in open source is you’re not stuck with the people you do not want to work with. You can “pick” your “boss”. Plus, different people communicate differently. Linus is abrasive. That is Okay because it works for him. What is not okay is having other people policing the tone in a conversation. Linus had this same conversation with Sarah Sharp, I’ll post the relevant quote below:

Because if you want me to "act professional", I can tell you that I'm not interested. I'm sitting in my home office wearing a bathrobe. The same way I'm not going to start wearing ties, I'm also not going to buy into the fake politeness, the lying, the office politics and backstabbing, the passive aggressiveness, and the buzzwords. Because THAT is what "acting professionally" results in: people resort to all kinds of really nasty things because they are forced to act out their normal urges in unnatural ways.


> man it's always a trip to see how much of a jerk torvalds could be, even if exasperation is warranted in this context (i have no idea), by god, this is not how you build consensus or a high functioning team

True. I think Linux could've been pretty successful if someone with good management practices had been in charge from the start.


> by god, this is not how you build consensus or a high functioning team

Linus has been pretty successful so far. There's not just "one style" that works.


maybe it is how you build the world's most popular operating system?

because he did


People often think that because jerks work at successful companies, you need to be a jerk to be successful. It’s more the other way around: a successful firm can carry many people who don’t add value, like parasites.

Guarantee you Linus wasn’t this bad in the 90s.


I think he's not this bad these days? He issued some public apologies for his behaviour. He gave us Linux and Git. Yes, he used to be an asshole, but he still did way more for the betterment of humanity than most people.


Yep Linus is reformed these days. He took some time off and went to sensitivity training a few years ago. I'm sure like all humans he has bad days and makes mistakes, but all in all he's really trying.


> Guarantee you Linus wasn’t this bad in the 90s.

Guarantee you that Linux was not that big and influential in the industry in the 90s.


Everyone I knew ran Linux in the late 90's, I chose to run FreeBSD. I guarantee you Linux was very influential. Solaris was its only real competitor after a while, and Sun went all Java and screwed up its OS.


naw man, let the old git be. he is a lovely old man. one day we wont have people like this. he gave more than he took.


He's a product of a different time. Personally, I love his attitude -- wouldn't want to work under him though.


Glibc is GNU/Linux though and cannot be avoided when distributing packages to end users. If you want to interact with the userspace to do things like get users, groups, netgroups, or DNS queries, you have to use glibc functions or your users will hit weird edge cases like being able to resolve hosts in cURL but not in your app.

Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely! But we're not really there. This is something the BSD's got absolutely right.


There are other libc implementations that work on Linux with various tradeoffs. Alpine famously uses musl libc for a lightweight libc for containers. These alternate libc implementations implement users/groups/network manipulation via well-known files like /etc/shadow, /etc/passwd, etc. You could fully statically link one of these into your app and just rely on the extremely stable kernel ABI if you're so interested.
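For what it's worth, a minimal sketch of that approach, assuming the musl-gcc wrapper from the musl package is installed: building the program below with "musl-gcc -static -o hello hello.c" gives a binary with no dynamic dependencies at all, so the only interface it relies on at runtime is the kernel syscall ABI.

    /* hello.c - once statically linked against musl, the resulting binary
     * has no dynamic dependencies; it only needs the kernel syscall ABI. */
    #include <stdio.h>

    int main(void) {
        puts("statically linked against musl");
        return 0;
    }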


We're not disagreeing. You can, of course, use other libc's on Linux the kernel, but you cannot use other libc's on GNU/Linux the distro that uses glibc without some things not working. This can be fine on your own systems so long as you're aware of the tradeoffs but if you're distributing your software for use on other people's systems your users will be annoyed with you.

Even Go parses /etc/nsswitch.conf and farms out to cgo when it finds a module it can't handle. This technically doesn't work because there's no guarantee that the hosts or dns entries in nsswitch have consistent behavior; it's just the name of a library you're supposed to dlopen. On an evil, but valid, distro, resolv.conf points to 0.0.0.0 and the hosts module reads an sqlite file.
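To illustrate what "just the name of a library you're supposed to dlopen" means in practice, a rough sketch of how glibc treats an nsswitch module name ("files" and the _nss_files_* symbol shown here are the standard glibc-provided ones):

    /* "hosts: files" in nsswitch.conf roughly means: dlopen libnss_files.so.2
     * and look up symbols of the form _nss_files_<function>_r in it. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *mod = dlopen("libnss_files.so.2", RTLD_NOW);
        if (!mod) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        void *fn = dlsym(mod, "_nss_files_gethostbyname2_r");
        printf("_nss_files_gethostbyname2_r = %p\n", fn);

        dlclose(mod);
        return 0;
    }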


> you cannot use other libc's on GNU/Linux the distro that uses glibc without some things not working

As the comment you're replying to points out, you can statically link your libc requirements and work on any Linux distro under the sun.

You can also LD_PRELOAD any library you need, and also work on any Linux distro under the sun. This is effectively how games work on Windows too, they ship all their own libraries. Steam installs a specific copy of the VCREDIST any given game needs when you install the game.

If you are not releasing source code, it's unreasonable to think the ABIs you require will just be present on any random computer. Ship the code you need, it's not hard.


> Now, do I think it would make total sense for syscall wrappers and NSS to be split into their own libs (or dbus interfaces maybe) with stable ABIs to enable other libc's, absolutely!

I worked on this a few years ago: liblinux.

https://github.com/matheusmoreira/liblinux

It's still a nice proof of concept but I abandoned it when I found out the Linux kernel itself has a superior nolibc library that they use for their own tools:

https://github.com/torvalds/linux/tree/master/tools/include/...

It used to be a single header but it looks like they've recently organized it into a proper project!

> This is something the BSD's got absolutely right.

BSDs and all the other operating systems force us to use their C libraries and the C ABI. I think Linux's approach is better. It has a language-agnostic system call binary interface: it's just a simple calling convention and the system call instruction.

The right place for system call support is the compiler. We should have a system_call keyword that makes it emit code in the aforementioned calling convention. With this single keyword, it's possible to do literally anything on Linux. Wrappers for every specific system call should be part of every language's standard library with language-specific types and semantics.

An example of one such language is Virgil by HN user titzer:

https://news.ycombinator.com/item?id=28283632
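For what it's worth, this is roughly what that calling convention looks like from C on x86-64 (a minimal sketch; syscall number and register assignments are per the x86-64 Linux ABI, no libc involved):

    /* Raw Linux syscall on x86-64: number in rax, args in rdi/rsi/rdx,
     * then the "syscall" instruction; rcx and r11 are clobbered by the kernel. */
    static long raw_syscall3(long nr, long a1, long a2, long a3) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)
                          : "a"(nr), "D"(a1), "S"(a2), "d"(a3)
                          : "rcx", "r11", "memory");
        return ret;
    }

    int main(void) {
        static const char msg[] = "hello from a raw syscall\n";
        raw_syscall3(1 /* __NR_write */, 1 /* stdout */, (long)msg, sizeof msg - 1);
        return 0;
    }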


But there are other "Linux"'s that are not GNU/Linux which was I think the point. Like Android, which doesn't use glibc, and doesn't have this mess. I think that was one of the things people used to complain about, that Android didn't use glibc, but since glibc seems to break ABI compatibility kinda on the regular that was probably the right call.


Solaris had separate libc, libnss, libsocket, and libpthread, I think?

Unlike many languages, Go doesn't use any libc on Linux. It uses the raw kernel API/ABI: system calls. Which is why a Go 1.18 binary is specified to be compatible with kernel version 2.6.32 (from December 2009) or later.

There are trade-offs here. But the application developer does have choices, they're just not no-cost.


If in distribution discussions Linux is the name for the operating system and shell, downplaying the role of GNU, then it is also fair game to say here: Linux does not have a stable ABI because glibc changed.


Really appreciate your stuff Bjorn, this link always brings a smile to my (too young to be cynical) face.


Thanks!


I am no longer able to see this comment. It says the message body was removed.

Anyone else? I'll have to assume this is the history of how we built great things being deleted in realtime.


I think lkml.org has issues with lots of traffic: https://lore.kernel.org/lkml/CA+55aFy98A+LJK4+GWMcbzaa1zsPBR...


The ABI of the Linux kernel seems reasonably stable. Somebody should write a new dynamic linker that lets you easily have multiple versions of libraries - even libc - around. Then it's just like Windows, where you have to install some weird MSVC runtimes to play old games.


Or, GNU could just recognise their extremely central position in the GNU/Linux ecosystem and just not. break. everything. all. the. time.

It honestly really shouldn't be this hard, but GNU seems to have an intense aversion towards stability. Maybe moving to LLVM's replacements will be the long-term solution. GNU is certainly positioning itself to become more and more irrelevant with time, seemingly intentionally.


The issue is more subtle than that. The GNU and glibc people believe that they provide a very high level of backwards compatibility. They don't have an aversion towards stability and in fact, go far beyond most libraries by e.g. providing old versions of symbols.

The issue here is actually that app compatibility is something that's hard to do purely via theory. The GNU guys do compatibility on a per function level by looking at a change, and saying "this is a technical ABI break so we will version a symbol". This is not what it takes to keep apps working. What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something. And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.

Linux is really hurt here by the total lack of any unit testing or UI scripting standards. It'd be very hard to mass test software on the scale needed to find regressions. And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic. As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem. Actual users don't count for much. It's probably inevitable in any system that isn't driven by a profit motive.


I think part of the problem is that by default you build against the newest version of symbols available on your system. So it's real easy when you're working with code to commit yourself to some symbols you may not even need; there's nothing like Microsoft's "target a specific version of the runtime".


I really, really miss such a feature with glibc. There are so many times when I just want to copy a simple binary from one system to another and it won't work simply because of symbol versioning and because the target has a slightly older glibc. Just using Ubuntu LTS on a server and the interim releases on a development machine is a huge PITA.


You actually can do that using inline assembly, it's just a very obscure trick. Many years ago I wrote a tool that generated a header file locking any code compiled with it to older symbol versions. It was called apbuild but it stopped being maintained some years after I stopped doing Linux stuff. I see from the comments below that someone else has made something similar, although apbuild was a comprehensive solution that wrapped gcc. It wasn't just a symbol versioning header, it did all sorts of tricks. Whatever was necessary to make things Just Work when compiling on a newer distro for an older distro.
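The core of the trick, for anyone curious, looks roughly like this (a minimal sketch; the exact version string depends on the target glibc and architecture, GLIBC_2.2.5 being the x86-64 baseline):

    /* Force references to memcpy to bind to the old versioned symbol instead
     * of whatever the build machine's glibc would pick by default. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    #include <string.h>

    int main(void) {
        char dst[16];
        memcpy(dst, "hello", 6);   /* resolves to memcpy@GLIBC_2.2.5 at link time */
        return 0;
    }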


I wonder what Zig does?


> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.

There are already sophisticated binary analysis tools for detecting ABI breakages, not to mention extensive guidelines.

> And, the Linux/GNU world never had a commercial "customer is always right" culture on this topic.

Vendors like Red Hat are extremely attentive towards their customers. But if you're not paying, then you only deserve whatever attention they choose to give you.


> As can be seen from the threads, the typical response to being told an app broke is to blame the app developers, rather than fix the problem.

This is false. Actual problems get fixed, and very quickly at that.

Normally the issues are from proprietary applications that were buggy to begin with, and never bothered to read the documentation. I'd say to a paying customer that if a behaviour is documented, it's their problem.


> Normally the issues are from proprietary applications that were buggy to begin with, and never bothered to read the documentation. I'd say to a paying customer that if a behaviour is documented, it's their problem.

… But that's exactly why Win32 was great; Microsoft actually spent effort making sure their OS was compatible with broken applications. Or at least, Microsoft of long past did; supposedly they worked around a use-after-free bug in SimCity for Windows 3.x when they shipped Windows 95. Windows still has infrastructure to apply application-specific hacks (Application Compatibility Database).

I have no reason to believe their newer stacks have anything like this.


The issue I see most often is someone compiled the application on a slightly newer version of Linux and when they try to run it on a slightly older machine it barfs saying that it needs GLIBC_2.31 and the system libc only has up to GLIBC_2.28 or something like that. Even if you aren't using anything that changed in the newer versions it will refuse to run.


That is not a bug. Just make a chroot with an older glibc and build there if you want to link against an older glibc. It's that easy.

It works with future versions, not with past versions.


That only works if you're the person who compiled the application.

A recent example where I ran into this: Cura 5.0+ won't run on Ubuntu 20 because the system libraries on Ubuntu 20 are too old.


That's an ergonomic deficiency. In practice, you probably need more than glibc, so then you have to make sure that other bits are available in this chroot. And if it so happens that one of the build tools that you rely on needs a newer version of glibc than the one you're building against, it still breaks down.

On Windows, you specify the target version of the platform when building, and you get a binary that is guaranteed to load on that platform.


> That's an ergonomic deficiency.

Do you build software on your desktop machine and ship it? Do you not build in a chroot (or container, as the cool kids call them nowadays) to make sure the build is actually using what you think it should be using?

You have to build in a chroot or similar in any case. Just use the CORRECT one.

> On Windows, you specify the target version of the platform when building, and you get a binary that is guaranteed to load on that platform.

Except if you need a C++ redistributable… then you must ship the redistributable .exe setup inside your setup, because Windows. Let's not pretend shipping software on Windows is easier.

Anyway, all of this only applies to proprietary software. For free software the distribution figures it all out, does automatic rebuilds if needed, and so on.

Really, just stick to free software, it's much easier.


You don't need chroot to make sure that your build uses correct versions of all dependencies; you just need a sane build system that doesn't grab system-wide headers and libs. Setting up chroot is way overkill for this purpose; it's not something that should even require root.

In case of Windows SDK, each version ships headers that are compatible going all the way back to Win9x, and you use #defines to select the subset that corresponds to the OS you're targeting.
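A minimal sketch of what that looks like (these macros are the standard SDK convention; 0x0601 selects the Windows 7 API surface even when building with a current SDK):

    /* Pin the declared API surface to Windows 7; anything newer is hidden,
     * so the binary can't accidentally pick up newer imports. */
    #define _WIN32_WINNT 0x0601
    #define WINVER       0x0601
    #include <windows.h>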

With respect to the C runtime, Windows has been shipping its equivalent of glibc as a builtin OS component for 7 years now. And prior to that, you could always statically link.


> Setting up chroot is way overkill for this purpose;

debootstrap stable chrootdir

That's it. Don't pretend it's difficult because it isn't.


> Even if you aren't using anything that changed in the newer versions it will refuse to run.

Nope, versioning is on a per symbol basis so if that prevents you from running the program it is actually using a symbol (i.e. function) that changed.


Flatpak exists to solve this.


Flatpak throws the baby out with the bathwater though. For example, on Ubuntu 22 if you install Firefox and don't have snaps enabled it installs the Flatpak, but if you do it can't access ffmpeg in the system so you can't play a lot of video files. It also fails to read your profile from Ubuntu 20 so you lose all of your settings, passwords, plugins, etc... It also wants to save files in some weirdass directory buried way deep in the /run filesystem. System integration also breaks, so if a Gnome app tries to open a link to a website the process silently fails.


> it can't access ffmpeg in the system so you can't play a lot of video files.

That's a typical packaging bug. Libraries should be bundled in the build or specified as a runtime dependency.

> It also fails to read your profile from Ubuntu 20 so you lose all of your settings, passwords, plugins, etc... It also wants to save files in some wierdass directory buried way deep in the /run filesystem. System integration also breaks, so if a Gnome app tries to open a link to a website the process silently fails.

All of this sucks and the user experience here needs a ton of work. I get the technical issues with co-mingling sandboxed and unsandboxed state, but there needs to be at least user-friendly options to migrate or share state with the sandbox.

A replacement for the xdg-open/xdg-mime/gio unholy trinity that offers runtime extension points for sandboxes might be nice. Maybe I could write a prototype service.


I think the issue with ffmpeg is that it needs to access GL driver libraries in order to support hardware acceleration. The GL libraries are dependent on your hardware and have to match what you have installed on the system. So you'd end up having to support a hundred different variations of Firefox Flatpaks, and users would have to make sure they match when installing, and remember to uninstall and reinstall when they update their graphics drivers.


Ah, this is a perennial problem on NixOS as well, at least when packaging anything with a bubblewrap sandbox (e.g. Steam).


Well, you talk about Windows, but that was true in the pre-Windows 8 era. Have you used Windows recently?

I bought a new laptop and decided to give Windows a second chance. With Windows 11 installed, there were a ton of things that didn't work. To me that was not acceptable for a $3000 laptop. Problems with drivers, blue screens of death, applications that just didn't run properly (and commonly used applications, not something obscure). I never had these problems with Linux.

I mean, we say Windows is stable mostly because we use Windows versions only after they have been out for 5 years and most of the problems have been fixed. Case in point: companies are only now finishing the transition to Windows 10, not Windows 11, after staying with Windows 7 for years. In another 10 years they will probably move to Windows 11, when most of its bugs are fixed.

If you use a rolling-release Linux distro, such as Arch Linux, some problems with new software are expected. It's the equivalent of using an insider build of Windows, with the difference that Arch Linux is mostly usable as a daily OS (it requires some knowledge to solve the problems that inevitably arise, but I used it for years). If you use, say, Ubuntu LTS, you don't have these kinds of problems, and it mostly runs without any issue (fewer issues than Windows for sure).

By the way, maintaining compatibility has a cost: have you ever wondered why a full installation of Ubuntu (a complete system with all the programs that you use, an office suite, drivers for all the hardware, multimedia players, etc.) is less than 5 GB, while a fresh install of Windows is 30 GB minimum, and I think nowadays even more?

> And then if they broke important apps they roll the change back or find a workaround regardless of whether it's an incompatible change in theory or not, because it is in practice.

Never saw Microsoft do that: they will simply say that it's not compatible and the software vendor has to update. That is not a problem by the way... an OS developer should move along and can't maintain backward compatibility forever.

> The GNU and glibc people believe that they provide a very high level of backwards compatibility.

That is true. It's mostly backward compatible; 100% backward compatibility is not possible. Problems are fixed as they are detected.

> What it actually takes is what the commercial OS vendors do (or used to do): have large libraries of important apps that they drive through a mix of automated and manual testing to discover quickly when they broke something.

There is one issue: GNU can't test non-free software for obvious licensing and policy issues (i.e. an association that endorses free software can't buy licenses of proprietary software to test it). So a third party should test it and report problems in case of broken backward compatibility.

Keep in mind that binary compatibility is not fundamental on Linux, since it's assumed that you have the source code of everything and, if needed, you recompile the software. GNU/Linux was born as a FOSS operating system and was never designed to run proprietary software. There are edge cases where you need to run a binary for other reasons (you lost the source code, compiling it is complicated or takes a lot of time), but these surely are edge cases, and not a lot of time should be spent addressing them.

Besides that, glibc is only one of the possible libcs that you can use on Linux: if you are developing proprietary software, in my opinion you should use musl libc; it has an MIT license (so you can statically link it into your proprietary binary) and it's 100% POSIX compliant. Sure, glibc has more features, but your software probably doesn't use them.

Another viable option is to distribute your software with one of the new packaging formats that are in reality containers: Snap, Flatpak, AppImage. That allows you to distribute the software along with all its dependencies and not worry about ABI incompatibility.


I literally run Windows Insider on two of my laptops - the primary one is on the beta channel and the auxiliary laptop is on the alpha channel. Both are running Windows 11 and ran 10 before. The auxiliary one has lived on Insider for I think 5 years if not 6 and definitely had issues, like the Intel wifi stopping working and some other minor ones, but the main one had, I guess, 3-4 BSODs over 2 years and around 10 times not waking up from sleep. That's pretty much all of the issues.

For me it's impressive and I cannot complain about stability.


I believe that AppImage still has the glibc compatibility issues. I've read through AppImage creation guides which suggest compiling on the oldest distro possible, since binaries built against an old glibc run on newer ones but not the other way around.


[flagged]


You cut out a key word:

> Linux is really hurt here by the total lack of any unit testing or UI scripting standards.

> standards

I've been very impressed reading how the Rust developers handle this. They have a tool called crater[1], which runs regression tests for the compiler against all Rust code ever released on crates.io or GitHub. Every front-facing change that is even slightly risky must pass a crater run.

https://github.com/rust-lang/crater

Surely Microsoft has internal tools for Windows that do the same thing: run a battery of tests across popular apps and make sure changes in the OS don't break any user apps.

Where's the similar test harness for Linux you can run that tests hundreds of popular apps across Wayland/X11 and Gnome/KDE/XFCE and makes sure everything still works?


> Surely Microsoft has internal tools for Windows that do the same thing: run a battery of tests across popular apps and make sure changes in the OS don't break any user apps.

And hardware, they actually deploy to hardware they buy locally from retailers to verify things still work too last I checked. Because there is always that "one popular laptop" that has stupid quirks. I know they try to focus on a spectrum of commonly used models based on the telemetry too.


And crater costs a bunch, runs for a week, and it's not a guarantee things won't break. I'm not sure if it runs every crate or just the top 1 million. It used to, but I could see that changing.

And in the case of closed source software that isn't publicly available, crater wouldn't work.


Crater's an embarrassingly parallel problem though, it's only a matter of how much hardware you throw at it. Microsoft already donates the hardware used by Crater, it would have no problem allocating 10x as much for its own purposes.


How many of the crates are run on crater? All of them?

Also I think there are orders of magnitude more C libs/apps than Rust crates.


There are certainly more things written in C than in Rust--the advantage of being fifty years old--but the standardization of the build system in Rust means that it would be difficult for any C compiler (or OS, or libc, or etc.) to produce a comparable corpus of C code to automatically test against (crates.io currently has 90,000 crates). But that's fine, because for the purpose of this thread that just means that Microsoft's theoretical Crater-like run for Windows compatibility just takes even less time and resources to run.


If you want to compile a large fraction of C/C++ code, just take a distro and rebuild it from scratch--Debian actually does this reasonably frequently. All of the distros have to somehow solve the problem of figuring out how to compile and install everything they package, although some are better at letting you change the build environment for testing than others. (From what I understand, Debian and Nix are the best bets here.)

But what that doesn't solve is making sure that the resulting builds actually work. Cargo, for Rust, makes running some form of tests relatively easy, and Rust is new enough that virtually every published package is going to contain some amount of unit tests. But for random open-source packages? Not really. Pick a random numerics library--for something like a linear programming solver, this is the most comprehensive automated test suite I've seen: https://github.com/coin-or/Clp/tree/master/test


> But that's fine, because for the purpose of this thread that just means that Microsoft's theoretical Crater-like run for Windows compatibility just takes even less time and resources

Huh? I don't follow. There are more libs to test and they aren't standardized. How does that mean theoretical Crater will take less resources?

Did you mean excluding non-testable code? That doesn't prevent future glibc-EAC incompatibility.


The manual labor would be greater, yes, and that's a problem. But the original point of this thread was about dismissing the idea of Crater at scale, which is unnecessary 1) because it's an embarrassingly parallel problem, and 2) because you're probably not going to have a testable corpus larger than crates.io anyway, so the hardware resources required are not exorbitant for a company of Microsoft's means. Even if they could only cobble together 10,000 C apps to test, that's a big improvement over having zero.


Thanks for that clarification.

I agree it's embarrassingly parallel, but the expenses scale almost linearly.

Overall I agree, for MSFT it's doable, but I doubt any Linux distro has enough money to continually provide this level of support.


This still has nothing to do with Linux. Unit testing isn't standardized in most languages. Even in Rust people have custom frameworks!

The Linux Kernel does have such a project doing batteries of tests. Userspace may not, but that's not a "unit test" problem. In fact it's the opposite, it's integration tests.


Right, but Linux (the OS) doesn't have unit tests to ensure that changes to the underlying system doesn't break the software on top. Imagine if MS released a new version of Windows and tons of applications stopped functioning. Everyone would blame MS. The Linux community does it all the time and just says that it's the price of progress.


I think the problem is that there isn't really a thing like "Linux the OS"; there's Debian, Ubuntu, Gentoo, Red Hat, and more than I can remember, and they all do things differently: sometimes subtly so, sometimes not so subtly. This is quite different from the Windows position where you have one Windows (multiple editions, but still one Windows) and that's it.

This is why a lot of games now just say "tested on Ubuntu XX LTS" and call it a day. I believe Steam just ships with half an Ubuntu system for their Linux games and uses that, even if you're running on Arch Linux or whatnot.

This has long been both a strong and weak point of the Linux ecosystem. On one hand, you can say "I don't want no stinkin' systemd, GNU libc, and Xorg!" and go with runit, musl, and Wayland if you want and most things still work (well, mostly anyway), but on the other hand you run in to all sort of cases where it works and then doesn't, or works on one Linux distro and not the other, etc.

I don't think there's a clean solution to any of these issues. Compatibility is one of the hard problems in computing because there is no solution that will satisfy everyone and there are multiple reasonable positions, all with their own trade-offs.


So, I very much agree with mike_hearn: their description of how glibc is backwards compatible in theory due to symbol versioning matches my understanding of how glibc works, and the glibc maintainers' lack of care to test whether glibc stays backwards compatible in practice seems evident. They certainly don't seem to do automated UI tests against a suite of representative precompiled binaries to ensure compatibility.

However, I don't understand where unit testing comes in. Testing that whole applications keep working with new glibc versions sounds a lot like integration testing. What's the "unit" that's being tested when ensuring that the software on top of glibc doesn't break?


You're right, I should have written "integration tests".


The Linux Kernel does have tests, and many of the apps on top have unit tests too.

> Imagine if MS released a new version of Windows and tons of applications stopped functioning. Everyone would blame MS.

I don't have to imagine, this literally happens every Windows release.


Well, let's see. What do I know about this topic?

I've used Linux since the Slackware days. I also spent years working on Wine, including professionally at CodeWeavers. My name can still be found all over the source code:

https://gitlab.winehq.org/search?search=mike%20hearn&nav_sou...

and I'm listed as an author of the Wine developers guide:

https://wiki.winehq.org/Wine_Developer%27s_Guide

Some of the things I worked on were the times when the kernel made ABI changes that broke Wine, like here, where I work with Linus to resolve a breakage introduced by an ABI incompatible change to the ptrace syscall:

https://lore.kernel.org/all/1101161953.13273.7.camel@littleg...

I also did lots of work on cross-distribution binary compatibility for Linux apps, for example by developing the apbuild tool which made it easy to "cross compile" Linux binaries in ways that significantly increased their binary portability by controlling glibc symbol versions and linker flags:

https://github.com/DeaDBeeF-Player/apbuild/blob/master/Chang...

So I think I know more than my fair share about the guts of how Win32 and Linux work, especially around compatibility. Now, if you had finished reading to the end of the sentence you'd see that I said:

"Linux is really hurt here by the total lack of any unit testing or UI scripting standards"

... unit testing or UI scripting standards. Of course Linux apps often have unit tests. But to drive real world apps through a standard set of user interactions, you really need UI level tests and tools that make UI scripting easy. Windows has tons of these like AutoHotKey, but there is (or was, it's been some years since I looked) a lack of this sort of thing for Linux due to the proliferation of toolkits. Some support accessibility APIs but others are custom and don't.

It's not the biggest problem. The cultural issues are more important. My point is that the reason Win32 is so stable is that for the longest time Microsoft took the perspective that it wouldn't blame app developers for changes in the OS, even when theoretically it could. They also built huge libraries of apps they'd purchased and used armies of manual testers (+automated tests) to ensure those apps still seemed to work on new OS versions. The Wine developers took a similar perspective: they wouldn't refuse to run an app that does buggy or unreasonable things, because the goal is to run all Windows software and not try to teach developers lessons or make beautiful code.


> But to drive real world apps through a standard set of user interactions, you really need UI level tests and tools that make UI scripting easy. Windows has tons of these like AutoHotKey, but there is (or was, it's been some years since I looked) a lack of this sort of thing for Linux due to the proliferation of toolkits.

This made me remember a tool that was quite popular in the Red Hat/GNOME community in 2006-2007 or so:

https://gitlab.com/dogtail/dogtail

I wonder if it ever got any traction?


Thank you for your work!


GNU / glibc is _hardly_ the problem regarding ABI stability. TFA is about a library trying to parse executable files, so it's kind of a corner case; hardly representative.

The problem when you try to run a binary from the 90s on Linux is not glibc. Think e.g. one of the Loki games, like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.


> Think e.g. one of Loki games like SimCity. The audio will not work (and this will be a kernel ABI problem...). The graphics will not work. There will be no desktop integration whatsoever.

I have it running on an up-to-date system. There is definitely an issue in that it's a pain to get working, especially for people not familiar with the CLI or ldd and such, as it wants a few things that are not there by default. But once you get it the few libs it needs, plus ossp to emulate the OSS interface missing from the kernel, there is no issue with gameplay, graphics or audio, aside from the intro video, which doesn't run.

So I guess the issue is that the compatibility is not user friendly ? Not sure how that should be fixed though. Even if Loki had shipped all the needed lib with the program, it would still be an issue not to have sound due to distro making the choice of not building oss anymore.


It would seem from your example that the issue is a lack of overall commitment to compatibility. There are Windows games from the 1990s that still run fine w/sound - which is not surprising, given that every old Win32 API related to sound is still there, emulated as needed on top of the newer APIs. It sounds like Linux distros could do this here as well, since emulation is already implemented - they just choose to not have it set up out of the box.


> So I guess the issue is that the compatibility is not user friendly ?

I don't understand this point -- this is like claiming Linux has perfect ABI compatibility because at the end of the day you can run your software under a VM or a container. Of course everything has perfect compatibility if you go out of your way using old installations or emulation layers -- people under Windows actually install the Wine DX9 libraries since they have better compatibility than the native MS ones. But this means zilch for Windows' ABI compatibility record (or lack thereof).


Windows installs those MSVC runtimes via windows update for the last decade.

With Linux, every revision of gcc has its own glibcxx, but distros don't keep those up to date. So you'll find that code built with even an old compiler (like gcc10) isn't supported out of the box.


I read "old compiler" and thought you meant something like GCC 4.8.5, not something released in 2020!


The Linux kernel ABIs are explicitly documented as stable. If they change and user space programs break, it's a bug in the kernel.

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...

https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/lin...


Someone should invent a command to change root… we should call it chroot!


The article seems to document ways in which it isn't. I have no idea personally, are these just not really practical problems?


The article is talking about userland, not the kernel's ABI.


Sounds like you want Flatpak, Docker or Snap :)


Just use nix.


I can run a Windows 95 app on Windows 10 and it has a reasonable chance of success.

Should Linux (userland) strive for that? Or does Year of the Linux Desktop only cover things compiled in the last 10 years?


It's what the kernel strives for. They're remarkably consistent in their refrain of "we never break userspace."

I think it would be reasonable for glibc and similar to have similar goals, but I also don't run those projects and don't know what the competing interests are.


> I think it would be reasonable for glibc and similar to have similar goals

I don’t think userspace ever had this goal. The current consensus appears to be containers, as storage is cheap and maintaining backwards compatibility is expensive


Containers are not a great solution for programs that need graphics drivers (games) or quick startup times (command line tools).

I've been wrestling with glibc versioning issues lately and it has been incredibly painful. All of my projects are either games or CLI tools for games, which means that "just use a snap/flatpak/appimage" is not a solution.


There's no reason launching a container _must_ be slow. Under the hood, launching a containerized process is just making a few kernel syscalls that have very little overhead. You might be thinking of docker, which is slow because it's intended for certain use cases and brings a lot of baggage with it in the name of backward compatibility from a time when the Linux container ecosystem was much less mature.

There are several projects working on fast, slim containers (and even VMs) with completely negligible startup times.

I don't know what is holding back container/VM access to graphics hardware, but it can't be insurmountable if the cloud providers are doing it.
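To make the "just a few kernel syscalls" point concrete, here's a rough sketch (needs root or an unprivileged user namespace; error handling mostly omitted):

    /* A "container" in miniature: new mount, UTS and PID namespaces,
     * then run a shell inside them. */
    #define _GNU_SOURCE
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        if (unshare(CLONE_NEWNS | CLONE_NEWUTS | CLONE_NEWPID) != 0) {
            perror("unshare");
            return 1;
        }
        pid_t pid = fork();           /* the child becomes PID 1 in the new namespace */
        if (pid == 0) {
            sethostname("sandbox", 7);
            execlp("sh", "sh", (char *)NULL);
            perror("execlp");
            return 1;
        }
        waitpid(pid, NULL, 0);
        return 0;
    }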


The problem with containers and graphics drivers is that those drivers have a userspace component. This depends on the hardware (e.g. AMD vs. Intel vs. NVidia all have different drivers of course) and in the case of NVidia this has to be exactly matched with the kernel version (this is less of an issue with VMs, but then you need something like SR-IOV, which isn't quite on consumer HW, or dedicated PCIe passthrough, which doesn't allow the host to use the device).

So version management becomes a major pita, from shipping drivers too old to support the hardware to having a driver that doesn't match the kernel. In the cloud this is mostly solved by using VMs and hardware with SR-IOV. (and a fixed HW vendor so you know which set of drivers to include)


> I don't know what is holding back container/VM access to graphics hardware, but it can't be insurmountable if the cloud providers are doing it.

Cloud providers have graphics hardware with SR-IOV support. It is exactly the kind of functionality, that the GPU vendors use for segmentation of their more expensive gear.


> Should Linux (userland) strive for that?

The linux "userland" includes thousands of independent projects. You'll need to be more specific.

> Or is Year of the Linux Desktop only covers things compiled in the last 10 years?

If you want ABI compatibility then you'll have to pay, it's that simple. Expecting anything more is flat out unreasonable.


> The linux "userland" includes thousands of independent projects. You'll need to be more specific.

I think it's pretty clear from the context.

The core GNU userland: glibc, coreutils, gcc, etc.


Just try changing your hosts or nameservers across different versions of Ubuntu Server.

The fragmentation is such a mess even between 1.x major versions. Their own documentation is broken or non-existent.


Here is a game from '93. Compile it yourself (with some trivial changes).

https://github.com/DikuMUDOmnibus/ROM

Trivial !

But if you still have some objections then let's wait ~27 years and then talk about games developed on Linux / *nix.


Does that not miss the point of the above poster? This does not show that Linux has good binary compatibility, but that C is a very stable language. The question that should be asked, if I am not mistaken, is whether it would run fine if you compiled it with a 27-year-old compiler and then tried to run the binary on today's Linux.


Is it really a reasonable goal to want an operating system to run a 27 year old binary without any modification or compatibility tool? There does need to be some way to run such binaries, but doing that by making the kernel and all core ABIs stable over several decades would make evolving the operating system very difficult. I think it would be better to provide such compatibility via compatibility layers like wine and sandboxing in the style of flatpak.


Which is also how Windows itself does it. Wanna run DOS or 16-bit binaries, you reach for an emulator.


It should be noted that 32-bit versions of Windows include support for 16-bit DOS and Win16 binaries. The last 32-bit version of Windows was Windows 10, which is still actively supported.


I think it shows that compiling is the preferred way to go. So it's more like twisting the point around :)

But what about old, binary-only games? Same as with old movies you want to watch and Hollywood prefers not to show anymore... They are super stupid IMO but maybe they have their reasons.

And that missing DT_HASH can be easily patched if someone wants. And if GNU keeps sabotaging things like this, then it's time to move off GNU. Ah, right - no one wants to sponsor a libc fork for a few years... Maybe the article is right about binaries after all ;)


YSK, this code will likely fail in weird ways on platforms where plain char is unsigned by default, like ARM, because it makes the classic mistake of assuming that the getc return value fits in a char despite getc returning int and not char. EOF is -1, and assigning it to a char on ARM turns it into 255, so you'll read past the end of some buffers and then crash.
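The bug class looks like this (illustrative only, not the actual ROM code):

    #include <stdio.h>

    void broken(FILE *f) {
        char c;                        /* wrong: getc returns int, not char */
        while ((c = getc(f)) != EOF)   /* on unsigned-char ABIs, (char)EOF == 255 != EOF */
            putchar(c);                /* so the loop never terminates / misreads data */
    }

    void fixed(FILE *f) {
        int c;                         /* int can hold every byte value plus EOF (-1) */
        while ((c = getc(f)) != EOF)
            putchar(c);
    }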


Maybe there will be some problems on weird platforms. But if the game is good, such details can be resolved. With bad games too ;) With source code, that is.


This is a long standing question and has nothing to do with Linux or windows. It's a design philosophy.

Yes, the Win32 ABI is very stable. It's also a very inflexible piece of code and it drags its 20-year-old context around with it. If you want to add something to it you are going to work, and work hard, to ensure that your feature plays nicely with 20-year-old code, and if what you want to do is ambitious...say refactoring it to improve its performance...you are eternally fighting a large chunk of the codebase implementation that can't be changed.

Linux isn't about that and it never has been; it's about making the best monolithic kernel possible with high-level Unix concepts that don't always have to have faithful implementations. The upside here is that you can build large and ambitious features that refactor large parts of how core components work if you like, but those features might only be compiled against a somewhat recent glibc.

This is a choice. You, the developer, can link whatever version you want. If you want broad glibc support then just use features that already existed 10 years ago and you'll get similar compatibility to Win32. If not, then you are free to explore new features and performance you don't have to implement or track yourself, provided you consider it a sensible requirement that users be running a somewhat recent version of glibc.

The pros and cons are up to you to decide, but it's not as simple as saying that Windows is better because its focus is backwards compatibility. There is an ocean of context hidden behind that seemingly magical backwards support...


A design philosophy of not being able to run old software?

A design philosophy of always having to update your system?

A design philosophy of being unable to distribute compiled software for all Linux distros?

Most Win32 applications from Windows 95 work just fine in Windows 11 in 2022. That's proper design.


According to Wikipedia, "Win32 is the 32-bit application programming interface (API) for versions of Windows from 95 onwards.".

Also from there "The initial design and planning of Windows 95 can be traced back to around March 1992" and it was released in '95. So arguably, the design decisions are closer to 30 years old than 20 :)


The main structure is from Win16, although adding support for paging and process isolation was a pretty big improvement in Win32. IMO it's held up extremely well considering it's 40 years old.


Yeah but as a consequence, games (closed source games, which means basically all of them) don’t even bother targeting Linux.


I assume Flatpak fixes this by locking your app to a compatible version of glibc.


Surprisingly, that seems correct—a Flatpak bundle includes a glibc; though that only leaves me with more questions:

- On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space. In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

- On the other hand, a number of system services on linux-gnu depend on loading host libraries. Even if we ignore NSS (or exile it into a separate server process as it should have been in the first place), that leaves accelerated graphics: whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism). These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

- Modern toolkits basically live on accelerated graphics. Flatpak was created to distribute graphical applications built on modern toolkits.

- ... Wait, what?


There is no 'system' glibc. Linux doesn't care. The Linux kernel loads up the ELF interpreter specified in the ELF file based on the existing file namespace. If that ELF interpreter is the system one, then linux will likely remap it from existing page cache. If it's something else, linux will load it and then it will parse the remaining ELF sections. Linux kernel is incredibly stable ABI-wise. You can have any number of dynamic linkers happily co-existing on the machine. With Linux-based operating systems like NixOS, this is a normal day-to-day thing. The kernel doesn't care.

> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the system either.

No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space. User-space may use an older version of the API, but it will still work.

> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism)

OpenGL is even more straightforward because it is typically consumed as a dynamically loaded API; as long as the symbols match, it's fairly easy to replace the system libGL.
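A rough sketch of what "consumed as a dynamically loaded API" means (libGL.so.1 and glXGetProcAddress are the conventional GLX entry points; whichever vendor's libGL is installed gets picked up at runtime):

    /* Load whatever libGL the host provides and pull further GL entry points
     * through glXGetProcAddress, instead of binding at link time. */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        void *gl = dlopen("libGL.so.1", RTLD_NOW | RTLD_GLOBAL);
        if (!gl) { fprintf(stderr, "%s\n", dlerror()); return 1; }

        void *(*getproc)(const unsigned char *) =
            (void *(*)(const unsigned char *))dlsym(gl, "glXGetProcAddress");
        printf("glXGetProcAddress = %p\n", (void *)getproc);

        dlclose(gl);
        return 0;
    }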


I know, I both run NixOS and have made syscalls from assembly :) Sorry, slipped a bit in my phrasing. In the argument above, instead of “the system glibc” read “the glibc targeted by the compiler used for the libGL that corresponds to the graphics driver loaded into the running kernel”. (Unironically, the whole point of the list above was to avoid this sort of monster, but it seems I haven’t managed it.)


> No they don't. The Linux kernel ABI doesn't really ever break. Any open-source driver shouldn't require any knowledge of internals from user-space.

[laughs in Nvidia]


NVIDIA is not an open-source driver [1], and if you look in your dmesg logs, your kernel will complain about how it's tainted. That doesn't change the truth value about what I said about 'open-source' drivers.

[1] I think this may have changed very very recently.


This is all correct and I'd also add that ld.so doesn't need to have any special knowledge of glibc (or the kernel) in the first place. From the POV of ld.so, glibc is just another regular ELF shared object that uses the same features as everything else. There's nothing hard-coded in ld.so that loads libc.so.6 differently from anything else. And the only thing ld.so needs to know about the kernel is how to make a handful of system calls to open files and mmap things, and those system calls that have existed in Linux/Unix for eternity.


Needs to have? In an ideal world, probably not. Has and uses? Definitely. For one thing, they need to agree about userland ABI particulars like the arrangement of thread-local storage and so on, which have not stayed still since the System V days; but most importantly, as a practical matter, ld.so lives in the same source tree as glibc, exports unversioned symbols marked GLIBC_PRIVATE to it[1], and the contract between the two has always been considered private and unstable.

[1] https://sourceware.org/git/?p=glibc.git;a=blob;f=elf/Version...


> On one hand, only one version of ld.so can exist in a single address space (duh). Glibc requires carnal knowledge of ld.so, thus only one version of glibc can exist in a single address space.

Yes

> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.

But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.

> whether you use Wayland or X, ultimately an accelerated graphics driver amounts to a change in libGL and libEGL / libGLX (directly or through some sort of dispatch mechanism).

In Wayland, your app hands the server a bitmap to display. How you got that bitmap rendered is up to you.

The optimization is that you send dma-buf handle instead of a bitmap. This is a kernel construct, not userspace driver one. This allows also cross-API app/compositor (i.e. Vulkan compositor and OpenGL app, or vice-versa). This also means you can use different version of the userspace driver with compositor than inside the container and they share the kernel driver.

> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

Yes and no; Intel and AMD user space drivers have to work with a variety of kernel versions, so they cannot be coupled too tightly. The Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing of the kernel part, this will also change.

> but the previous point means that you can’t load them from the host system either.

You actually can -- bind mount that single binary into the container. You will use binary from the host, but load it using ld.so from inside container.


>> In a Flatpak you have (?) to assume the system glibc is incompatible with the bundled one either way, thus you can’t assume you can load host libraries.

> Not exactly. You must assume that the host glibc is incompatible with the bundled one, that's right.

> But that does not mean you cannot load host libraries. You can load them (provided you got them somehow inside the container namespace, including their dependencies) using the linker inside the container.

I meant that the glibcs are potentially ABI-incompatible both ways, not just that they’ll fight if you try to load both of them at once. Specifically, if the bundled (thus loaded) glibc is old, 2.U, and you try to load a host library wants a new frobnicate@GLIBC_2_V, V > U, you lose, right? I just don’t see any way around it.

>> These libraries require carnal knowledge of the kernel-space driver, thus emphatically cannot be bundled; but the previous point means that you can’t load them from the host system either.

> Yes and no; Intel and AMD user space drivers have to work with variety of kernel versions, so they cannot be too tight. Nvidia driver has tightly coupled user space and kernel space, but with the recent open-sourcing the kernel part, this will also change.

My impression of out-of-tree accelerated drivers is mainly from fighting fglrx for the Radeon 9600 circa 2008, so extremely out of date. Intel is in-tree, so I'm willing to believe it has some degree of ABI stability, at least if an i915 blog post[1] is to be believed. Apparently AMD is also in-tree these days. Nvidia is binary-only, so the smart thing for them would probably be to build against an ancient glibc so that it runs on everything.

But suppose the year is 2025, and a shiny new GPU architecture has come out, so groundbreaking no driver today can even lay down command buffers for it. The vendor is kind enough to provide an open-source driver that gets into every distro, and the userspace portion compiled against a distro-current Glibc ends up referencing an AVX-512 memcpy@GLIBC_3000 (or something).

I load a flatpak using Gtk3 GtkGLArea from 2015.

What happens?

[1] https://blog.ffwll.ch/2013/11/botching-up-ioctls.html


> I meant that the glibcs are potentially ABI-incompatible both ways, not just that they’ll fight if you try to load both of them at once. Specifically, if the bundled (thus loaded) glibc is old, 2.U, and you try to load a host library wants a new frobnicate@GLIBC_2_V, V > U, you lose, right? I just don’t see any way around it.

Yup. So the answer is to minimize the amount of loaded host libraries, ideally 0. If that cannot be done, the builder of that host library will have to make sure it is backward compatible.

> But suppose the year is 2025, and a shiny new GPU architecture has come out, so groundbreaking no driver today can even lay down command buffers for it. The vendor is kind enough to provide an open-source driver that gets into every distro, and the userspace portion compiled against a distro-current Glibc ends up referencing an AVX-512 memcpy@GLIBC_3000 (or something).

> I load a flatpak using Gtk3 GtkGLArea from 2015.

Ideally, that driver would be built as an extension of the runtime your flatpak uses. I.e. everything based on org.freedesktop.Platform (or derivatives like org.gnome.Platform and org.kde.Platform) has extensions maintained with appropriate Mesa and Nvidia user space drivers.

So a new open source driver, if it is not part of Mesa or you are not using the above-mentioned runtimes, would need to be built against the 2015 runtime. The nice thing is that the platforms have corresponding SDKs, so it is not a problem to target a specific/old version.


Does graphics on Linux work by loading the driver into your process? I assumed it works via writing a protocol to shared memory in case of Wayland, or over a socket (or some byzantine shared memory stuff that is only defined in the Xorg source) in case of X11.

From my experience, if you have the kernel headers and have all the required options compiled into your kernel, then you can go really far back and build a modern glibc and Gtk+ Stack, and use a modern application on an old system. If you do some tricks with Rpath, everything is self-contained. I think it should work the other way around, with old apps on a new kernel + display server, as well.


So there are two parts to this: the app producing the image in the application window and then the windowing system combining multiple windows together to form the final image you see on screen.

The former gets done in process (using e.g. GL/vulkan) and then that final image gets passed onto the windowing system which is a separate process and could run outside the container.

As an aside, with accelerated graphics you mostly pass a file descriptor to the GPU memory containing the image, rather than mucking around with traditional shared memory.


> Does graphics on Linux work by loading the driver into your process?

Yes, it's called Direct Rendering (DRI) and it allows apps to drive GPUs with as little overhead as possible. The output of the GPU goes into the shared memory so that the compositor can see it.


Static linking -- always the ready-to-go response for anything ABI-related. But does it really help? What use is a statically linked glibc+Xlib when your desktop no longer sets resolv.conf in the usual place and no longer speaks the X11 protocol (perhaps in the name of security) ?


I guess that kind of proves the point that there is no "stable", well, anything on Linux. Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.

/etc/sysctl.conf is a good example; on some systems it just works, on some systems you need to enable a systemd service thingy for it, but on some systems the systemd thingy doesn't read /etc/sysctl.conf and only /etc/sysctl.d.

So a simple "if you're running Linux, edit /etc/sysctl.conf to make these changes persist" has now become a much more complicated story. Writing a script to work on all Linux distros is much harder than it needs to be.


> Something like /etc/resolv.conf is part of the user-visible API on Linux; if you change that, you're going to break applications.

Apps were not supposed to open /etc/resolv.conf by themselves. If they did, they are broken. Just because the file is available, transparently, doesn't mean it is not a part of the internal implementation.

Even golang runtime checks nsswitch for known, good configuration before using resolv.conf instead of thunking to glibc.


The point was that if you're statically linking something then paths such as /etc/resolv.conf become "hard-coded", so that seems like an unimportant detail; something needs to check it, whether that's an application directly or an application through a library call: it's the same thing. /etc/nsswitch.conf is just kicking the can down the road from /etc/resolv.conf to /etc/nsswitch.conf.


What would exactly be that "something"?

Statically linking glibc was always strongly discouraged. Even if you did it, nss and iconv are always dynamically loaded, and they need symbols from glibc itself, so your statically-glibc-linked binary would need to load dynamic libc anyway!

No, something doesn't need to check it, today it exists only to serve broken applications (and systems broken by their admins). Its semantic doesn't allow to express the needs of resolvers today (like per-interface or per-zone resolving or other mechanisms, like mDNS).

How the resolver is configured is its internal issue; apps should just use getnameinfo().
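To make that concrete, here's a minimal sketch (using a hypothetical host name) of doing a lookup the supported way, i.e. asking libc via getaddrinfo()/getnameinfo() and never touching /etc/resolv.conf or nsswitch.conf yourself:

  /* Sketch: resolve a name through libc instead of reading
   * /etc/resolv.conf directly.  Whatever sits underneath (nsswitch,
   * systemd-resolved, mDNS, ...) is the resolver's internal business. */
  #include <stdio.h>
  #include <string.h>
  #include <netdb.h>
  #include <sys/socket.h>

  int main(void)
  {
      struct addrinfo hints, *res, *p;
      memset(&hints, 0, sizeof(hints));
      hints.ai_family = AF_UNSPEC;          /* IPv4 or IPv6, whichever exists */
      hints.ai_socktype = SOCK_STREAM;

      int err = getaddrinfo("example.org", "443", &hints, &res);
      if (err != 0) {
          fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(err));
          return 1;
      }

      for (p = res; p != NULL; p = p->ai_next) {
          char host[256];
          /* getnameinfo() renders the returned sockaddr as text */
          if (getnameinfo(p->ai_addr, p->ai_addrlen, host, sizeof(host),
                          NULL, 0, NI_NUMERICHOST) == 0)
              printf("%s\n", host);
      }
      freeaddrinfo(res);
      return 0;
  }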


Even with static linking, the problems you just described remain valid. The issue is that X11 isn't holding up and no one wants to change. Wayland was that promise of change, but it has taken 15+ years to develop (and is still developing).

Linux desktop is a distro concern now, not an ecosystem concern. It long ago left the realm of a Linux concern, when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked for it.

Deepin desktop and elementary are on the top of my list for elegance and ease of use. Apps and games need a solid ABI and this back and forth between gnome and kde doesn’t help.

With so many different wm’s and desktop environments, x11 is still the only method of getting a window with an opengl context in any kind of standard way. Wayland, X12, whatever it is, we need a universal ABI for window dressing for Linux to be taken seriously on the desktop.
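To be fair, the X11 path has barely changed in decades; a rough sketch (error handling omitted, link with -lX11 -lGL) of getting a window with a GL context via plain Xlib + GLX looks something like this:

  /* Sketch: the classic Xlib + GLX route to an OpenGL window. */
  #include <X11/Xlib.h>
  #include <GL/glx.h>

  int main(void)
  {
      Display *dpy = XOpenDisplay(NULL);            /* connect to the X server */
      int attrs[] = { GLX_RGBA, GLX_DOUBLEBUFFER, GLX_DEPTH_SIZE, 24, None };
      XVisualInfo *vi = glXChooseVisual(dpy, DefaultScreen(dpy), attrs);

      Window root = RootWindow(dpy, vi->screen);
      XSetWindowAttributes swa = { 0 };
      swa.colormap = XCreateColormap(dpy, root, vi->visual, AllocNone);
      swa.border_pixel = 0;                         /* avoid BadMatch with non-default visuals */
      swa.event_mask = KeyPressMask;
      Window win = XCreateWindow(dpy, root, 0, 0, 640, 480, 0, vi->depth,
                                 InputOutput, vi->visual,
                                 CWColormap | CWBorderPixel | CWEventMask, &swa);
      XMapWindow(dpy, win);

      GLXContext ctx = glXCreateContext(dpy, vi, NULL, True);   /* True = direct rendering */
      glXMakeCurrent(dpy, win, ctx);

      glClearColor(0.1f, 0.1f, 0.3f, 1.0f);
      glClear(GL_COLOR_BUFFER_BIT);
      glXSwapBuffers(dpy, win);

      XEvent ev;
      XNextEvent(dpy, &ev);                         /* wait for a key press, then quit */

      glXMakeCurrent(dpy, None, NULL);
      glXDestroyContext(dpy, ctx);
      XDestroyWindow(dpy, win);
      XCloseDisplay(dpy);
      return 0;
  }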


With the rise of WSL, I have a real hard time justifying wanting a linux desktop.

I've got a VM with a full linux distro at my fingertips. Virtualization has gotten more than fast enough. And now, with windows 11, I get an X server integrated with my WSL instance so even if I WANTED a linux app, I can launch it just like I would if I were using linux as my host.

It does suck that the WSL1 notion of "not a vm" didn't take off, but at the same time, when the VM looks and behaves like a regular bash terminal, what more could you realistically want?


> what more could you realistically want?

some privacy, no telemetry, no ads, and the computer only applying updates that I choose and only rebooting when I ask it to?

(I know it's a lot to ask for these days...)


I opted into telemetry to ensure the OS vendor has data on usage, issues, and crashes. The same for, say, Firefox.

I believe providing it helps vendors improve and fix issues more quickly.


:pulls out wallet: where?


WSL2 is very limited; from not having a "proper init" to having a NAT-ed network, it is fine for running simple docker containers, but proper linux it is not.

Comparing it to the real linux is like comparing powershell prompt to full windows.


> [WSL] is fine for running simple docker containers

WSL is passable for running Docker containers, and that is only if you add a ton of complicated socket forwarding machinery and Windows-side service management automation to it, which is what Docker Desktop does.

Without that, WSL2's lack of a proper init system means that you literally don't even have a way to automatically start `dockerd`. And you have the same story for the integration of that Docker daemon with the other Linux WSL hosts.

> Comparing it to the real linux is like comparing powershell prompt to full windows.

True, although in a way this comparison is unflattering to Linux because PowerShell is generally better made, better documented, and less hostile to automation or customization than most software that comes with Windows.


> WSL is passable for running Docker containers, and that is only if you add a ton of complicated socket forwarding machinery and Windows-side service management automation to it, which is what Docker Desktop does.

That's exactly the reason why I mentioned simple docker containers.

Once I needed to make some of them work with macvlan/ipvlan, and docker desktop on mac and windows were both completely unusable.


Say I'm a webdev (I'm not), and "proper Linux" for me is the one that can start a Flask dev env and lets me copy-paste things from Stack Overflow. Totally fine for my needs to call it proper Linux.


Webdev is about the only scenario in which it would be OK; funny how you mentioned exactly that one.

However, for webdev, native linux would be fine as well.


WSL is really only a viable alternative to dual-booting for Windows people who have merely dabbled in Linux desktop usage. Admittedly, this is likely the only case Microsoft cares about.

But if you're used to Linux, Windows is not only borderline unbearable in a cultural way, but you're likely to notice that a ton of important pieces of WSL (and the wider Windows CLI environment) are broken, inadequate, missing, or just different in a way that makes them unattractive to longtime Linux users.

> I've got a VM with a full linux distro at my fingertips. [...] when the VM looks and behaves like a regular bash terminal, what more could you realistically want?

To name a few things, limited strictly to WSL:

  - your distro's normal init system / a standard way to configure persistent services
  - binfmt_misc interop that works consistently (for example, some applications hang for unclear reasons when I pipe them into `clip.exe`)
  - integration for services that involve running agents and/or hardware access (e.g., Docker, GnuPG, SSH)
  - WSLg support on Windows 10, whose absence is purely artificial (what version of Windows my corporate laptop runs is not up to me)
  - passable performance with files on the Windows side so that basic amenities like a Git prompt in your shell don't suck
  - bridged networking or some other advanced networking configuration


> - your distro's normal init system / a standard way to configure persistent services

> - bridged networking or some other advanced networking configuration

Sounds like something in sysadmin's language.

I hope WSL will keep within the current paradigm, not trying to replace your Proxmox test lab.


Yeah, WSL(2) was a huge huge win for Microsoft. It seems silly, but it’s kept thousands of devs from dual booting…


I dunno

it's quite possible it'll work out as well as IBM's OS/2 running Windows apps did


Since when did saving time and increasing productivity become silly? Not to mention that getting rid of Linux VMs, shared folders, and similar annoyances is a good thing overall.


> Linux desktop is a distro concern now, not an ecosystem concern. It long ago left the realm of a Linux concern, when macOS went free (with paid hardware, of course) and Windows was giving away free Windows 10 licenses to anyone who asked for it.

You seem fixated on the Free Beer misinterpretation of Free Software.


No, but it sounds that way, I guess. It's more about where developer focus lies en masse. Few developers are interested in the Linux desktop because they are already supported on Windows or Mac, and during the time period I mentioned it didn't cost them anything monetarily.

There were indications that Windows and Linux might converge. Instead we got WSL2. A lot of times we decide to develop something because of the pain of using the other thing. Sometimes we develop something as a "me too". Sometimes we develop something that is just better. Sometimes, it's worse.

My point is that the fight for a foothold in the Linux desktop looked promising for a bit. SteamOS looked like it was gaining ground, Steam…

The reality is there are complexities at that level that people don’t want to deal with and we all have opinions on how it should work, should look, and should be called.

Red Hat (former RH’er myself) should take this on and really standardize something outside of core and server land. And no, it should not be Gnome.


…also locking in any security vulnerabilities.


I mean, we are talking about videogames here.


Multiplayer is a thing, where both crashing servers and also attacking other clients (even in non-p2p titles) is not that uncommon. Many titles don't permit community servers any more, of course.


Wasn't it Elden Ring or another From Software game that had an RCE? This article talks about it: https://wccftech.com/dark-souls-rce-exploit-fixed-elden-ring...

A lot of games have multiplayer functionality these days. That makes them a potential target for RCE and related vulnerabilities. Granted, if you don't play video games as root, the impact should be limited, but it is still something to be aware of.


Skimming over the details, that seems like a bunch of bugs in the game code. I don't think dynamic linking would help there.


If you're never going to update your program and don't care about another Heartbleed affecting your product and users, then sure.


Would you rather run a statically linked go or rust binary with the native crypto implementations of ssl or a dynamically linked OpenSSL that is easier to upgrade? (Honest question)


The glibc libs should be ELF clean. Namely, a pure and simple ELF64 dynamic exe should be able to "libdl"/dynamically load any glibc lib. That may be fixed and possible in the latest glibc.

The tricky part is the SysV ABI for TLS-ized system variables: __tls_get_addr(). For instance, errno. It seems the pure and simple ELF64 dynamic exe would have to parse the ELF headers of the dynamically loaded shared libs in order to get the "offsets" of the variables. I never actually got deep into this, though.

And in the game realm, you have C++ games (I have an extremely negative opinion of this language), and the static libstdc++ from GCC does not "libdl"/dynamically load what it requires from glibc; it seems even worse, namely it depends on glibc-internal symbols.


Then, if I got it right for TLS-ized variables, dlsym should do the trick. Namely, dlsym will return the address of a variable for the calling thread. Then this pointer can be cached the way the thread wants. On x86_64, it can "optimize" the dlsym calls by reusing the same address for all threads, so one call is enough.

Now the real pain is this static libstdc++ not libdl-ing anything, or worse expecting internal glibc symbols (c++ ...)
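For reference, the ordinary (dynamically linked) dlopen/dlsym dance is trivial; the parent's point is that doing the same from a fully static "pure ELF" executable is where it gets hairy. A hypothetical baseline sketch:

  /* Sketch: load a glibc-provided library at runtime and resolve a
   * symbol with dlsym (which also handles TLS symbols, returning an
   * address valid for the calling thread).  Link with -ldl on older glibc. */
  #include <stdio.h>
  #include <dlfcn.h>

  int main(void)
  {
      void *libm = dlopen("libm.so.6", RTLD_NOW | RTLD_LOCAL);
      if (!libm) {
          fprintf(stderr, "dlopen: %s\n", dlerror());
          return 1;
      }

      /* Casting void* to a function pointer is technically non-portable
       * ISO C, but well-defined on POSIX systems. */
      double (*my_cos)(double) = (double (*)(double))dlsym(libm, "cos");
      if (!my_cos) {
          fprintf(stderr, "dlsym: %s\n", dlerror());
          return 1;
      }

      printf("cos(0) = %f\n", my_cos(0.0));
      dlclose(libm);
      return 0;
  }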


glibc != Linux

a better analogy would be targeting the latest version of MSVCRT that happens to be installed on your system (instead of bundling it)

... also which mostly works but sometimes breaks


Windows has switched from app-redistributed MSVC runtimes to OS-distributed "universal CRT" since Windows 10 (2015). Unlike MSVCRT, uCRT is ABI-stable.


Nearly 99% of Linux software is linked to glibc.

What the f are you talking about?

This is not MSVCRT by a long shot.


I was going to make a joke about a.out support (and all the crazy stuff that enables, like old SCO binaries) but apparently a.out was removed in May as an option in the Linux kernel.

https://lwn.net/Articles/895969/

At least we still have WINE.


One way to achieve similar results on Linux might be for the Linux kernel team to start taking control of core libraries like x11 and wayland, and to extend the same "don't break userspace" philosophy to them also. That isn't going to happen, but I can dream!


There was a period where a Linux libc was maintained, but it was long-ago deprecated in favour of glibc. Perhaps that was a mistake.


My understanding is that the Linux devs like only having "Linux" only be the kernel; if they wanted to run things BSD-style (whole base system developed as a single unit) I assume they would have done that by now (it's been almost 30 years).


I'm not sure about taking over the entire GUI ecosystem but I certainly do want more functionality in the kernel instead of user space precisely because of how stable and universal the kernel is. I want system calls, not C libraries.
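As a toy illustration of leaning on the kernel's stable interface rather than library ABIs, here's a sketch that writes to stdout via the syscall(2) wrapper (still a libc helper, but the contract being relied on is the kernel's):

  /* Sketch: call write(2) through the raw syscall interface.  The syscall
   * numbers and semantics are the part Linux promises not to break. */
  #include <unistd.h>
  #include <sys/syscall.h>

  int main(void)
  {
      const char msg[] = "hello from a raw syscall\n";
      syscall(SYS_write, 1, msg, sizeof(msg) - 1);   /* fd 1 = stdout */
      return 0;
  }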


"DT_HASH is not part of the ABI" is like saying "DNS is not part of the Internet".


Maybe a counterpoint is "x86-64 Linux ABI Makes a Pretty Good Lingua Franca" [0] from αcτµαlly pδrταblε εxεcµταblε of Aug 2022.

0. https://justine.lol/ape.html



I used to disagree with this browser-as-OS mentality, but seeing as it's sandboxed and supports WebGL, wasm, WebRTC, etc., I find it pretty convenient (if I'm forced to run Zoom, for example, I can just keep it in the browser). Just as long as website/application vendors test their stuff across different browsers.


At this point I'm pretty convinced that no-one at Microsoft ever did a better job in keeping people on Windows than what the maintainers of glibc are doing …


Well, the WSL team did a lot I think (including new Terminal). WSL, WSL2, WSLg, WSA - I almost never use full Linux VMs now, my pretty simple needs are covered with it.


Not news. In fact, Wine/Proton really is the preferred way of doing things.

Valve saw the light years ago, but they weren't the first. Even Carmack was saying it before the whole gaming-on-Linux thing became viable.


The change that caused the break would be equivalent to the PE file format changing in an incompatible way on Windows, to give an idea of how severe it is.


Dynamically-linked userspace C is a smouldering trash heap of bad decisions.

You'd be better off distributing anything--anything--else than dynamically-linked binaries. Jar files, statically-linked binaries, C source code, Python source code, Brainfuck code ffs...

The "./configure and recompile from source" model of Linux is just too deeply entrenched. Pity.


Like how Excel is more stable than Windows itself: you can open a spreadsheet from the Win16 days and it'll just work...


Personal experience: Office 2021 and Office 97 do not paginate a DOC file created (by Microsoft employees) in Office 97 the same way, so the table of contents ends up different.


Nope, I've had multiple that didn't even work after the latest update.


Heh, I've just been debugging an issue that is triggered by the upgrade to glibc 2.29 (Debian bullseye era).

https://github.com/pst-format/libpst/issues/7


As a gamedev that tried shipping to Linux: we really need some standardized, minimal image to target, with ancient glibc and such, and a guarantee that if it runs on the image, it will run on future Linux versions.


Just target Flatpak. You get a standardised runtime, and glibc is included in the container. If it works on your machine, it'll work on my machine, since the only difference will be the kernel, and Linus is pretty adamant about retaining compatibility at the syscall level.


Sidenote: I remember when Warcraft 3 ran better in Wine+Debian than in Windows. Athlon II X2 CPU and an Nvidia GeForce 6600 GT with a whopping 256MB of VRAM. That was one hot machine. Poor coolers.


I tried to run WoW and StarCraft 2 with Wine, and they did not install/run.


Yeah Linus's "we don't break user space" is a joke.

Great, the kernel syscall API is stable. Who cares, if you can't run a 7-year-old binary because everything from the vDSO to libc to libstdc++ to ld-linux.so is incompatible.

Good luck. No, it's not just a matter of LD_LIBRARY_PATH and shipping the binary with a copy. That only helps with third-party libs, and only if the vDSO and ld-linux are compatible.

My 28 years of experience running Linux is that it's unbroken at the API (source code) level, but absolutely not at the ABI level.


Linus does provide a stable ABI with Linux, it's GNU who drops the ball and doesn't. You're criticizing Linus for something he has nothing to do with. What's the point in that?


Linus limited his scope to something that doesn't matter for users.

I think this is a valid criticism.

It's admirable to do the dishes, but the house is also on fire, so nobody will be able to enjoy the clean plates; so what's even the point of doing the dishes, then?

In fact, in this analogy he could have saved the kitten instead of done the dishes.

Err, back from analogy land: ABI stability makes it harder to make things better by improving and replacing APIs. This is expected. But here we are in the worst of both worlds: thanks to the kernel we are slowed down in improvements, and thanks to kinda-userspace (i.e. the vDSO & ld-linux) and userspace infra (libc, libstdc++, libm) we don't have ABI compatibility either.

So it's lose-lose.


Linus chose to only care about the kernel. So there's possibly some fault there.


So it's Linus's fault that he just wrote the kernel and didn't also decide to be responsible for the entire userspace as well? What kind of argument is that?


By that logic nothing is anyone's fault. "I did exactly what I meant to do, nothing more, nothing less. Therefore you cannot say anything negative about it. Have a nice day." I feel like there's a Peanuts comic like this: "No matter what happens any place or any time in the world, this absolves me from all blame!"


He should have run for president.


I wrote a game for my master's thesis in 2008. I wrote it in C++ and targeted Linux. Recently I tried to run it, and not only did the binaries not work (that's a given), but even making it compile was a struggle, because I had used some GUI libraries that were abandoned and had no version working with a modern libc. It was easier to port the game to Windows than to make it compile on Linux again...


Proprietary devs should use static linking (with musl) or chroots/containers. What makes the author think they are the target audience of glibc?


Thanks, but I think I'll stick with Windows: their target audience is famously everyone and for an unlimited time.


Have fun with libGL!


I hadn't thought of that... Flatpaks let you use specific mesa versions, though.


Linux has a more stable Windows ABI than Windows itself. If a game stops working on Windows, it will likely still work with Wine on Linux.


In my opinion, Valve + the distros should fork glibc and produce a glibc distribution that focuses on absolute stability.

Didn't the glibc devs say that distros have the freedom to choose what to maintain so as not to break their applications? This would just be a collaboration between the distros to maintain that stability.


I suppose that Win32 can be helpful if you want to make programs that run on both Windows and Linux (and also ReactOS, I suppose), but it might not work as well for programs with Linux-specific capabilities.

(Also, I have problems installing Wine, due to package manager conflicts.)

There are other possibilities, such as .NET (although some comments in here say it is less stable, some say it works), which can also be used on Windows and on Linux. There is also HTML, which has too many of its own problems, and I do not know how stable HTML really is, either. And then, also Java. Or, you can write a program for DOS, NES/Famicom, or something else, and it can be emulated on many systems. (A program written for NES/Famicom might well run better on many systems than native code does, especially if you do not do something too tricky in the code, in which case some implementations might not be compatible.) Of course, the different approaches have different advantages and disadvantages in compatibility, efficiency, capability, and other features.


I laughed so hard... with tears! But, to be fair, Unreal 2004 still works almost perfectly on a not-too-obsolete Ubuntu. Or did I have to do some glibc trickery?.. Can't remember for sure.


If anything, not breaking things makes you more careful about what you put in. I feel like that's not a bad rule to go by.


What about the web browser? Isn't that also a stable ABI? Or is it not a "binary interface" because it only supports JavaScript?

What about web assembly ?


If web browsers had any kind of stable interface, we wouldn't have https://caniuse.com/, polyfills, vendor CSS prefixes and the rest of the crutches. WASM isn't binary. But that's all irrelevant when we talk about ABI.

ABI is specifically binary interface between binary modules. For example: my hello_world binary and glibc or glibc and linux kernel or some binary and libsqlite3.


The kernel <> userspace API is stable, famously so. Dynamic linking to glibc is a terrible idea, statically link your binaries against musl and they'll still work in 100 years.


game binaries need to dynamically load system libs. A statically linked binary would have to include a full ELF loader.


Trying to statically link with glibc throws specific warnings that certain calls aren't portable.

With musl? No such problem.

Fuck, even uClibc is more portable than glibc, and it's a dead project AFAIK.


> With musl? No such problem.

Does musl even implement the functionality glibc was warning about?


OK, glibc ABI stability may not be perfect, but is there any evidence that Wine is any better? That sounds implausible to me.


The difference is if Wine breaks an application that works on Windows, it's considered a bug that should be fixed, regardless of why.


Oh, no, not again: kids working for big tech constantly, but randomly, deprecating, removing, and breaking APIs/ABIs/features in the kernel/libraries/everywhere. I honestly believe that all relationships between big tech companies and open source are toxic and follow the Microsoft principle of embrace, extend, and extinguish.


It's not, and it is super sad to hear people advocating for such a horrible idea.

Linux being infested by Windows is the beginning of its death to me; what a tragedy.

A well-deserved death after the systemd drama anyways.


Why is it a horrible idea?


This is by design, and everybody should be aware of that. I don't know about glibc, but as far as the kernel is concerned, Linus has never guaranteed ABI stability. API, on the other hand, is extremely stable, and there are good reasons for that.

In Windows, software is normally distributed in a binary form, so having ABI compatibility is a must.


Uh, the kernel ABI is extremely stable. You could take a binary statically compiled in the '90s and run it on the latest release of the kernel. "Don't break userspace" is Linus's whole schtick, and he's talking about it in terms of ABI when he says that.

This is about the ABIs of userspace "system-level" libraries, glibc in particular.


The kernel absolutely does guarantee a stable userspace ABI. This post is entirely about other userspace libraries.


The Linux kernel maintains userspace API/ABI compatibility forever but inside the kernel (e.g. modules) there is no stable API/ABI.



