The Linux graphics stack in a nutshell (lwn.net)
274 points by zorgmonkey 10 months ago | 94 comments



This is about 3D rendering, to be precise; I believe 2D acceleration goes through the same lower layers but the higher ones are very different.

Incidentally, one thing I noticed when I was trying to port Linux GPU drivers to Windows some time ago is what appeared to be an excessive amount of indirection; there are so many layers and places where things could be simpler.


2D acceleration is generally done through the same APIs, specifically OpenGL and Vulkan. Classically, the X compositor would use the GLX_EXT_texture_from_pixmap extension to import an X pixmap representing a window surface into OpenGL, where it can be used like any other texture. For the Wayland compositor, I believe you'd use EGL_WL_bind_wayland_display to bind a Wayland surface to an EGLImage, and then glEGLImageTargetTexture2DOES (can't believe I have that function name memorized) to bind that EGLImage to an OpenGL texture, where it can be used in the same way. Vulkan has similar extensions.

On the client side, I think most Linux apps still draw their UIs on CPU, usually accelerated with SIMD. Firefox and Chrome (I think SkiaGL is enabled on Linux?) are exceptions; they use OpenGL and/or Vulkan to draw their UI. Video playback is a different beast and in theory relies on vendor-specific extensions to decode the video in hardware. However, the last time I looked at Linux video decoding (which was years ago), the drivers were awful and interfacing with each vendor's APIs was a huge pain, and so most apps just did video decoding on CPU. (Besides, the Linux ecosystem prefers open codecs, and hardware has only recently gotten support for non-patent-encumbered video formats.)


> However, the last time I looked at Linux video decoding (which was years ago), the drivers were awful and interfacing with each vendor's APIs was a huge pain, and so most apps just did video decoding on CPU.

Nowadays VA-API is near universally supported, and any half-decent video player uses it to do hardware decoding.


For the client side, Qt has good GPU support, but only for QML. All QML is drawn on the GPU by default (except text, I think, which goes through HarfBuzz), while all Qt Widgets are drawn on the CPU. However, things like KDE's Wayland compositor use direct OpenGL calls for faster composition.

Firefox has WebRender running on top of ANGLE, which is a generic OpenGL layer that translates the OpenGL calls into native platform calls. ANGLE is a Google project and it is the base library for Skia, which Chromium uses to render everything. IIRC Qt/QML also used ANGLE on Windows.


Why are toolkits still rendered on the CPU?


It's a ton of effort to write a GPU vector renderer that's both compatible with existing apps and faster than the CPU. Switching to SkiaGL would probably be the easiest approach to migrate to GPU rendering, but Skia is notoriously difficult to use outside of Google's codebases. (The running joke being "the recommended way to build Skia is to get a job at Google, but there are some workarounds available if for some reason that isn't practical.")


I love this joke! We use Skia as a PDF renderer and it does take a bit of plumbing to get it in, plus you have to track it more often than we'd like (that's not a fault of the build environment, but rather it doesn't have a stable API), plus we have local mods.


High-quality text rendering on GPU is surprisingly tricky and inefficient, unless you're using something simple like a glyph cache.


Does something like signed distance fields help, or is it just an added optimization rather than a completely different way of doing it?


Only for certain fonts at certain sizes, and only if you have the SDF generated ahead of time, both of which mean that the technique isn't general enough to render arbitrary fonts in .otf format.
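To make "generated ahead of time" concrete, here is a deliberately brute-force sketch of baking an SDF from an already-rasterized glyph bitmap. Real pipelines (msdfgen, for example) work from the font outlines and run offline; the function name and representation here are made up for illustration.

```python
import math

def glyph_sdf(bitmap):
    """bitmap: list of rows of 0/1 coverage. Returns a per-pixel signed
    distance to the nearest glyph edge: negative inside, positive outside.
    Assumes the bitmap contains at least one covered pixel."""
    h, w = len(bitmap), len(bitmap[0])
    # Edge pixels: covered pixels with at least one uncovered (or out-of-bounds) neighbour.
    edges = []
    for y in range(h):
        for x in range(w):
            if bitmap[y][x]:
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if not (0 <= ny < h and 0 <= nx < w) or not bitmap[ny][nx]:
                        edges.append((y, x))
                        break
    sdf = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            d = min(math.hypot(y - ey, x - ex) for ey, ex in edges)
            sdf[y][x] = -d if bitmap[y][x] else d
    return sdf
```

At render time the shader essentially thresholds this texture, which is why the technique only works for shapes baked in advance and degrades for thin features or unusual sizes.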

Rasterizing small glyphs is super fast anyway, there's not much of a need to accelerate it if you can just cache the glyph bitmaps.


QML uses it, but it doesn't look as good as FreeType with LCD subpixel rendering and light hinting... if that is what you like (it is what I like).


I believe GTK4 uses the GPU by default


I think Cairo used OpenGL too. And X.Org itself had stuff like Glamor, XRender...


Cairo's OpenGL backends, such as glamor, never really made it out of the experimental phase and were rarely if ever used as far as I know.


The most viable approach for 2D these days is to use the 3D hardware. There's no standard, usable API for 2D accelerated drawing the way there is for 3D, nor does it quite make sense for there to be one.

(No, OpenVG is not viable. No, Xrender is not viable. cairo and Skia both use the 3D hardware in combination with a CPU render engine.)


For the most part that's true, but simple 2D compositing is a bit of a different beast, because it can sometimes be done at scanout time, saving a blit. Last I checked, (non-Android) Linux rarely makes use of this except for the mouse cursor. But in general you can save a good bit of energy and memory bandwidth on HiDPI displays if you try to use 2D hardware layers where you can. You can virtually never use them for the UI itself, because they're far too limited, but the windowing system can often use them to composite windows together. It'd be nice if Wayland compositors made more use of this, e.g. to avoid having to blit the foreground window every frame.
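The decision a compositor makes here can be sketched as a plane-assignment pass: promote surfaces to scanout-time overlay planes where the hardware can handle them, and fall back to GPU composition for the rest. Everything in this sketch is hypothetical (the surface fields, the assumption that planes can't scale or blend); real compositors test the configuration against the kernel's atomic-modesetting API.

```python
def assign_planes(surfaces, spare_planes):
    """surfaces: back-to-front list of dicts with 'scaled' and 'alpha' flags.
    spare_planes: overlay planes available besides the primary plane.
    We assume (illustratively) that planes can neither scale nor blend, so
    only untransformed opaque surfaces qualify, starting from the topmost.
    Once one surface falls back to GPU composition, everything below it does
    too, so z-order is never violated."""
    on_planes, gpu = [], []
    for surf in reversed(surfaces):            # topmost first
        if spare_planes and not surf["scaled"] and not surf["alpha"]:
            on_planes.append(surf)
            spare_planes -= 1
        else:
            gpu.append(surf)
            spare_planes = 0                   # conservative: stop promoting
    gpu.reverse()
    return on_planes, gpu
```

Each surface that lands on a plane skips one full-screen-sized read-modify-write on the GPU per frame, which is where the energy and bandwidth savings on HiDPI displays come from.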


I used to work on mobile graphics and the android HWC stack.

The scanout-time hardware was often less useful than you might think: it only won in dynamic scenes where the GPU was otherwise idle (playing video, possibly with a static UI overlay, was the premier use case).

For static scenes it was more efficient to render out to a buffer using the GPU (the scanout overlay pipes often had limited feedback capability) and just output that with the overlays disabled. It didn't take many frames for that to be worth it.

For apps that were animating or otherwise updating their window, most UI toolkits used the GPU for widget rendering anyway. And often the scanout pipes didn't hook into the (relatively large) system caches like the GPU did, so there were times it was again faster to composite the screen on the GPU into a single scanout buffer than to flush already-cached data and have the scanout hardware read it back over the memory bus.

And they weren't as cheap as people thought: one stat I remember is that the total area of the GPU on the OMAP4 platform was smaller than that of the display pipes. Though that is now a pretty old chip, and it always had a bit of a focus on "multimedia".


I think your information is quite outdated. The HWC overlay planes are heavily used, you can see this trivially just doing a 'dumpsys SurfaceFlinger' or grabbing a systrace/perfetto trace. When it falls back to GPU composition it's very obvious as there's a significant hit to latency and more GPU contention.

The overlay capabilities of the modern Snapdragons are also quite absurd. They support like upwards of a dozen overlays now and even have FP16 extended sRGB support. Some HWCs (like the one in the steam deck) even have per plane 3D LUTs for HDR tone mapping (ex https://github.com/ValveSoftware/gamescope/blob/master/src/d... )

The composition is bandwidth heavy of course, but for static scenes there's a cache after the HWC in the form of panel self refresh.


CRTC planes and scanout-time compositing make sense, and Wayland compositors do use them, even for non-cursor surfaces. It's simply not something an application can rely on in a general-purpose way (though see the recent GtkSurfaceOffload stuff for the latest attempt at it).

Personally, I don't see it as a "2D drawing API", it doesn't accelerate anything special about 2D, only blits and transforms, which a 3D API will eat for breakfast.


What happened to those VESA "2D accelerated" APIs that were on every SVGA card in the mid-'90s? They made a huge difference and were well supported on Windows and X11.


That stuff has been obsolete for quite a while as the general 3D capabilities are more than enough to saturate all the GPU's memory bandwidth.


If it's a VESA standard and still supported it might be useful as a fallback for hardware that doesn't have its own driver.

Edit: But actually, I couldn't find references to anything similar besides VBE/AF which even when current got almost no support directly in hardware, so folks had to resort to hardware-specific DOS TSR's. I'm not sure if there's anything newer than that.


GPU manufacturers stopped putting 2D functionality in their chips.


The basic display interface used by UEFI and low-level boot loaders these days is called GOP (Graphics Output Protocol). It replaced VESA.


Xrender is hardware accelerated and cairo uses Xrender as a backend. Why is Xrender not "viable"?


Xrender is hardware-accelerated on an increasingly small number of devices, and even SNA, the flagship hardware-accelerated implementation in the Intel driver, fell back to software rasterization extremely frequently [0]. In practice it wasn't worth it, and it was extremely buggy, hence why it fell into disrepair.

The semantics of Xrender simply don't match with what modern GPUs give you, even ones with 2D pipelines.

[0] https://gitlab.freedesktop.org/search?search=sna_pixmap_move...


Honestly, I think XRENDER could be a viable API--the core idea is similar to WebRender, which Firefox uses to great effect--but the existing implementations of it are not well optimized and issue tons of draw calls through obsolete OpenGL APIs. They are slower than just drawing on the CPU. You would essentially need a complete rewrite.

The bigger issue is that there's little reason to farm vector graphics rendering out to the window server in the first place. The main reason would be to avoid a window blit on HiDPI displays. But the tradeoff is that the XRENDER API is all you get, and usually apps have more sophisticated needs than what it can provide. For instance, browsers can't really use XRENDER nowadays because there's no way to describe CSS 3D transforms in it. And if you use it you're at the mercy of the window server to implement it reasonably, which is not a safe assumption. (A lot of the reason Chrome on Linux was faster than Firefox in the early days is that Firefox used XRENDER, while Chrome rendered on the CPU. I remember at least one engineer at Mozilla who was bitter about that, after putting in all the work to make Firefox use it only to have it be a net loss.) In any case, you can avoid the window blit by simply using scanout compositing, as detailed in my other reply, so there really is no compelling reason to reinvent XRENDER.


Well there was once a hardware accelerated API for 2D drawing on Windows (DirectDraw), but it died in Windows Vista when desktop composition was added in. It was still supported for application use, but it was just emulating it.

But if there was an API for 2D acceleration that was actually supported (and could be used simultaneously with desktop composition), then it could be added in to something like SDL then suddenly applications would support it.


It's slow as hell; today you need to use WineD3D's ddraw.dll along with the WineD3D loader in the same folder as your 2D game.


Was the complexity there for backwards compat, or just needless for reasons sane people will never understand?


The scene graph isn't part of the renderer any more than a player object that contains the player's location is. The scene graph's purpose is to make updating transforms efficient. Just because it references transforms that may need to be sent to the GPU, that doesn't mean it is part of the renderer.


Yes and no. Some type of deferred structure is almost always part of a GPU renderer, as it's necessary for the batching and reordering that help performance tremendously. Sometimes this is entirely an internal system behind an imperative API, though. Skia, for example, only offers an imperative API even though it builds a deferred rendering structure from it under the covers.

So in some UI toolkits you end up going from a scene graph, to an imperative API, to an internal renderer scene graph. In others they may share the same scene graph structure, applying those optimizations directly to the initial graph.

Although all that said a "pure" scene graph is often overkill and slow. They were all the rage in the early 2000s but less so these days. QML's QtQuick looks like the primary remaining example?
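The "make updating transforms efficient" point above can be shown with a minimal dirty-flag scheme: each node caches its world transform and only recomputes it (and its descendants') when a local transform actually changes. This is an illustrative sketch, with transforms reduced to 2D translations to keep it short.

```python
class Node:
    """Minimal scene-graph node with cached world-space transforms."""

    def __init__(self, x=0.0, y=0.0, parent=None):
        self._local = (x, y)
        self._world = None            # cached world transform; None means dirty
        self.parent = parent
        self.children = []
        if parent:
            parent.children.append(self)

    def set_local(self, x, y):
        self._local = (x, y)
        self._invalidate()

    def _invalidate(self):
        if self._world is not None:   # stop recursing if subtree already dirty
            self._world = None
            for c in self.children:
                c._invalidate()

    def world(self):
        if self._world is None:       # recompute lazily, pulling from the parent
            px, py = self.parent.world() if self.parent else (0.0, 0.0)
            self._world = (px + self._local[0], py + self._local[1])
        return self._world
```

Moving one branch of the tree then touches only that subtree, rather than forcing a full re-walk every frame, which is the efficiency argument for keeping the graph around.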


My Google-fu has failed me, but isn't there a simple way to compile Rust or Go to a Linux OS that boots to a GUI? Lots of embedded talks mention tiny hardware, framebuffers, etc. I can throw hardware at it; it's the UX/DX I care about. I just want SSH in the background and a UI in front.


Yesn't, there aren't many tools for it. The closest you'll get for a generic GUI app is one of the various window managers / display managers for Wayland/X11 that run in a "kiosk mode", plus autologin. That way the OS (is supposed to) just boots and opens your app effectively fullscreen.


That did help me find https://github.com/cage-kiosk/cage/wiki; it seems there are others going down that path too. Thanks


You can do that with Slint (https://slint.dev) and its linuxkms backend. No need for a xorg server or wayland compositor, just run the application made with Slint from the init script.


Beware Slint's license, it's either proprietary or GPLv3, or whatever this kooky thing is that doesn't allow mobile apps: https://github.com/slint-ui/slint/blob/master/LICENSES/Licen...


That license explicitly allows mobile apps:

> mobile phones are not considered to be Embedded Systems.


What part of this covers mobile apps:

> SixtyFPS hereby grants You a world-wide, royalty-free, non-exclusive license to use, reproduce, make available, modify, display, perform, distribute the Software as part of a Desktop or Web Application.


I think the closest to this is https://doc.qt.io/Boot2Qt/.


I could barely understand this article. Is it just me, or can no one explain the graphics stack in a sensible way?


Neither: it's quite a huge subject, so if you put in the time you'll surely be able to figure it out. I don't think the article isn't sensible; from what I saw it seemed quite well written, and the site usually has good-quality content. The thing is that it targets readers with a certain knowledge level.

It's difficult for me to tell how much you know about graphics programming in general, so I can't really point you to any resources that would be better in your particular case!


Before the Wayland complaints take over the thread, I’d like to post a link to a very short thread by Drew DeVault.

https://fosstodon.org/@drewdevault/111607882208898175

Here are the two important posts:

———

The story of Wayland:

1. No one wanted to maintain X11 because it sucked

2. We made Wayland and it's much better

3. A vocal minority of change-averse people complained with little to no factual basis

4. They were asked to muster some labor to maintain X11

5. None of them did

6. All of the people who actually do get work done eventually stopped listening to them and moved on with Wayland

——

Some of these detractors built a tottering pile of godawful hacks on top of X11 where every piece depends on another critical design flaw of X11 and are upset that by fixing all of these design flaws their pile of hacks fell over when no one wanted to maintain the load bearing side of their hacks


In the same spirit as the parent post, we can write the story of opposition to Wayland. If this seems to you as an overly confrontational rendition, please compare this to the original and ruminate on why it seems confrontational to you and if that reason in fact touches on the problem itself.

1. A bunch of developers decided to replace the windowing system with something more akin to Desqview.

2. People complained that this now broke their previously working remote desktop.

3. They got told that their use case was utterly unimportant compared to the very pressing issue of getting rid of screen tearing.

4. Upon comments that screen tearing is irrelevant if you don't actually have a desktop, they replied that someone could write a remote desktop extension for Wayland.

5. None of them did.

6. All of the people that actually wanted to get work done stopped listening to them and continued using X11.


Tearing has more-or-less been fixed [1,2,3] in the latest version of X.org, although these changes are only present in the latest master branch and aren't in any official release and thus not shipped by most Linux distributions. I'm sure the argument could be made about how these are just more kluges and how Wayland solves this problem more optimally, but the argument to switch to Wayland to not experience anymore tearing is weaker than it has ever been.

[1] https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests...

[2] https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests...

[3] https://gitlab.freedesktop.org/xorg/xserver/-/merge_requests...


They 100% did write a remote desktop extension for Wayland. That's the difference, Wayland is actually being developed.

It'd be nice if we had software that was actually finished and solid and we could use it without change forever, but X11 isn't that, so if it's not going to get updates, we'll need something else, and hopefully it'll continue developing to be closer to that ideal.


5: There is waypipe and similar programs, some of which even support audio transmission (and arguably, that's the whole feature set; there is not much point having it built into the part responsible for display only).

6: Continuing to use something is not the same as maintaining or actively developing it, not even close.


> 1. A bunch of developers decided [...]

> 2. People complained [...]

No FOSS developer is required to listen to you complain. The maintainers of Xorg decided they did not want to maintain it anymore.


Part of the problem is that Wayland looked at several large chunks of functionality that X11 implemented and said "no thanks, those are security holes, we want nothing to do with them, that is the responsibility of the distro/DE/etc"

Which is rather in conflict with earlier promises about maintaining functionality.

This resulted in a whole chunk of work that was either badly done, or not done at all, or needed all sorts of hacks and extra work because it needed to be re-implemented in different ways on different distros.

So I'm really not sympathetic to the Wayland devs, they needlessly created this situation.

A more reasonable answer would have been "we don't like this, it's a security hole, but whatever, just implement a Wayland extension that matches the X11 one, and security conscious people can disable it, and everybody else can move forward productively in the short term"


The Wayland devs don't need your sympathy. What's needed is a higher s/n ratio, and that means aggressive moderation of useless "wayland sux/x11 roolz" type comments, which belong in the dustbin of [flagged] [dead] along with e.g. any comment that uses "woke" as a pejorative.

The dead horse has been beaten enough and a decision has been made. We're a few years out from the major toolkits removing their X code paths altogether. Everybody who knows anything about how the graphics stack actually works is committed to Wayland. Nobody cares at this point if you're sticking with X11. One day, you'll wake up and find that your entire GUI environment has broken all around you.


Well, yes, what is needed is a solid understanding of why we are here, and what needs to be done to get to a better place.

I like the Wayland devs, and their occasional failures should not overshadow their successes.


No what is needed is that the armchair experts shut up and start contributing to actual Wayland development.

The one doing the work decides how its done.


Well, normally that would be true.

But Wayland decided to "boil the ocean" and entirely replace a stack of stuff that was working perfectly well for most people.

So perhaps the onus should be on the people who decided to do that, to make sure that things keep working instead of just dumping it?

An alternative design would have been to have built Wayland inside X.org and tunnel the new protocol over the existing protocol, and then migrate chunks of functionality one at a time.

Which is exactly how X11 replaced X10, and X10 replaced X9, etc.

But that would have been careful engineering practice, and not nearly as much fun, so I understand why it did not happen.


“So perhaps the onus should be on the people who decided to do that”

They’re doing, you’re complaining. All the X.org software is there for all the detractors to pick up and maintain. That is what happened when the XFree maintainers didn’t want to accept patches from a larger group: it was forked into X.org.

Now the people doing the work want to do Wayland. If it doesn’t please you, then by all means carry on with the old stuff and show them how it should’ve been done.


> An alternative design would have been to have built Wayland inside X.org and tunnel the new protocol over the existing protocol, and then migrate chunks of functionality one at a time

You are looking at it from the wrong direction, though. But otherwise, that’s pretty much what wayland does for backwards compatibility with xwayland — it wraps an x session into a window you can normally use from wayland.


Yes, that is also a valid backwards-compatibility choice, although it is an all-or-nothing choice: clients cannot mix and match Wayland and X11 functionality.

But if they had allowed more of the X11 APIs to be supported via Xwayland, the transition would have been a lot easier.

Unfortunately that was shot down, in favour of forcing every new wayland compositor to implement things like remote desktop and screen capture differently.


> An alternative design would have been to have built Wayland inside X.org and tunnel the new protocol over the existing protocol, and then migrate chunks of functionality one at a time.

This is the kind of ignorance Drew was complaining about. Wayland is a fundamentally different design that cannot simply be embedded in the X protocol; and besides which, again, nobody wants to touch the Xorg code base.

Again. Every single person who knows or cares about the modern Linux graphics stack is pretty much in agreement that abandoning the X approach and starting from scratch was the correct choice. This has been explained time and time again by Drew, Daniel Stone, and others much more knowledgeable about this issue than I. Explaining it over and over again to the stubborn and ignorant is getting tiring. You want to stay on legacy, unsupported X11? Fine. Enjoy having no modern software available for your system, as toolkit and app developers remove their X code paths entirely. Red Hat is abandoning X11 already, and Red Hat IS userspace Linux.


For starters, it is rather rude to assume how little I know about this.

Leaving that aside, Drew is mis-characterising things a little.

Wayland is fundamentally a different __compositor__ architecture. But the X11 system is about more than that, in fact compositors are a rather late addition to the X11 system.

And the protocol architectures are actually quite similar in shape. Which is not surprising, since good ideas last, and X11 has been around for a long time and has accumulated lots of good things (along with lots of cruft too).

So it would have been quite possible, but a good deal less fun, to do it differently.


> And the protocol architectures are actually quite similar in shape. Which is not surprising, since good ideas last, and X11 has been around for a long time and has accumulated lots of good things (along with lots of cruft too).

With the XPresent extension, you effectively get "Wayland inside X11". It's a wonderful thing, as X engineering goes anyway, that precisely no one asked for.

The whole point of the Wayland project is to get rid of all the cruft that X has accumulated over the decades. Embedding Wayland inside Xorg would defeat the whole purpose of the project. The very essence of Wayland is to start over with something new, unhindered by legacy cruft, allowing innovation in the Linux graphics space to take place.

So it was decided, unanimously, by the X devs: the X team would become the Wayland team, Wayland would be the future and X would be abandonware. Anyone who wants to step up and take over maintainership of X can, but I don't see a lot of activity happening.


>Fine. Enjoy having no modern software available for your system, as toolkit and app developers remove their X code paths entirely. Red Hat is abandoning X11 already, and Red Hat IS userspace Linux.

This seems to me no different than big monopoly corp obseleting old devices with software updates. However, that gets a reaction of opposite polarity usually (including from me).


Big mono Corp can obsolete things and forbid anyone from picking it up to maintain. Nobody is preventing a group from picking up the old stuff and maintaining it - it’s just a bunch of people shouting at those doing the work (or paying for it) that they should be doing it differently.


Over a long enough period of time, technically correct choices are always orthogonal to choices that are useful to users.

I am sure windows, like X, also made assumptions that were violated with the advent of things like HiDPI, multiple monitors, video streaming, etc,.

Once assumptions are broken, you pretty much have to pile on hack after hack to get around them. This is what windows does (10+ different APIs to do the exact same action A, for many values of A), and that is why it supports 30 years of software. This is technically horrible, but it is what is ultimately useful to users.

However, it is absolutely not fun work for developers, which is why unpaid ones won't do it, and prefer the easier start-from-scratch approach.


The core developers of Wayland (and formerly X.org) are not unpaid though.


Can I run Wayland software on Windows?

No? You mean I must still use the so-infamous yet so-helpful X11 to actually print things on my screen?

Guess I'll stick with X then.

Come back when you have a working alternative, and please stop using some magical "authority" argument: "a decision has been made" (by whom, where, when, why, with whose authority?)


> Can I run wayland software in windows ?

Yes. Wayland is the protocol by which WSL programs display on the Windows desktop.

> Come back when you have a working alternative, and please stop using some magical "authority" argument : "a decision has been made" (by whom, where, when, why, with whom authority ?)

By the X developers, who decided en masse to abandon X and work on Wayland instead.


Wayland achieves its lack of "suck" by being a radically simple design that simply ignores the need for those X11 "godawful hacks" (which provide useful features to a small subset of users). You can already see that people are grafting those features into Wayland compositors in non-standard ways, so soon enough Wayland compositors will have their own collection of ugly hacks. The cycle of life is beautiful, isn't it?


Also those features are not limited to esoteric stuff that no one uses.

You cannot, for example, move your own window in Wayland. If you have a multi-window application, like GIMP [1], you cannot have your application position its windows in a reasonable way.

[1]: https://gitlab.freedesktop.org/wayland/wayland-protocols/upl...


A variant of the second-system effect: one decides to reimplement something from scratch to incorporate all the lessons learned, avoiding the pile of hacks that accumulated over the years.

Then, as the project grows, you find people have been relying on hacks for so long you need to reimplement them. But your new, clean version is not designed to accomodate such abominations, so you need, very inelegantly, to hack them in.

Now you're back at square 1. Until the next naive engineer that decides to do the things the right way, once again.


But that is not the situation with Wayland: the core is simple, so you have to implement hacks at a different layer (it is often impossible to implement them at the Wayland layer). No matter how much stuff you throw at the compositor layer, the core is unaffected.


Pushing hacks to different layers won't solve the real problem ='(


> 3. A vocal minority of change-averse people complained with little to no factual basis

Little to no factual basis?? 1) How do you explain the time it has taken? 2) Prioritising "shallow security" over "accessibility" is a fact: see https://news.ycombinator.com/item?id=38696891

Wayland's design was "OK"; Wayland's implementation is a disaster. Nearly every DE reinventing its own server implementation is a recipe for incompatibility and bugs.


In the thread about Firefox switching defaults to Wayland, there were some complaints about some accessibility software not being supported by Wayland. If the “tottering pile of godawful hacks” is required to not exclude blind people, it doesn’t seem that godawful…

Personally I’d prefer to use Sway, but last time I tried Zoom on Sway it gave me a lot of trouble. X11 might not be getting much future development, but it is done and it works, so who cares? It can just stay the same in perpetuity for all I care as long as it keeps working.


It is a shame that it takes a seriously long time to properly replace 30+ years of hacks on hacks.

But I don’t think the answer is to not try; people use Wayland’s lack of support for things to justify not using it, which then means there is no testing or development of those things.

I am somewhat forgiving of the Wayland devs being slower to the punch, because understanding the problem properly and creating a somewhat clean solution takes time by itself, and writing the software to do things the clean way (screen readers, for example) also takes time and effort.

I am reminded that in the c64 days people would poke random memory addresses and it was normal. Protected mode in Windows was a huge step back for many developers who were used to just writing arbitrary bits to memory.

I’m not saying we should abandon everything for progress, it’s good to be critical. But in this case I think the critical eye is only really focused on preventing change, which as many people point out is sorely needed.


I’m not saying Wayland is perfect as-is or accessibility shouldn’t be fixed. That’s a total straw man.

Every post that even touches on Wayland in the smallest way gets flooded with “Wayland sux, just keep developing Xorg” posts.

Xorg is not getting useful/meaningful/future focused development. The fact new commits exist doesn’t mean it’s a healthy alternative.

I just happened to see this post yesterday and thought it was a pretty good summary, if pithy, about the state of X vs Wayland. I don’t blame them for being mad about people continuing to beat this horse.

It must feel a bit like if people continuing to demand that we give up on electric cars and go back to developing leaded gas.


I didn’t say you said it is perfect, so if there is a strawman here it is one of your construction.

I think I will not try to defend Xorg, as I don’t really even like it, and as you note the topic is kind of beating a dead horse at this point.


As if Linux was so good at accessibility before that... As one ex-X maintainer once said: “there is only so much lipstick you can put on a pig before you question why you’re trying to make it fly” (I may be butchering the quote).


Does Zoom even support Wayland or you are running it through XWayland? All these proprietary clients usually have a lot of inertia with implementing Wayland support.


Zoom supports the xdg-desktop-portal for screensharing as of semi recently.

Sway's screen-sharing portal (via xdg-desktop-portal-wlr) only lets you share a full monitor at a time. You can't share just a screen region or a single window, so YMMV.

Some of the big proprietary clients have their own devs using linux. Devs tend to be that type of person, so them pushing for upgrades to tools they themselves use isn't too surprising. Discord just recently released an official flatpak and cited internal dev teams as a reason.


KDE supports the desktop portal better; it works fine with OBS, for example, in the Wayland session for screen recording of individual windows.

Does Discord support Wayland at all? I've heard a bunch of related complaints from people using its native client.


I've been running it as a Wayland client for a year or so now.

The only problem I know of is that screen sharing does not let you select an audio stream, which has already spawned multiple hacks and modded clients over the years (and might also apply when running as an X client, I don't know).


I haven’t the slightest clue; it is a terrible program and I just wanted to do the minimum to get it working. Switching to X11 meant I was able to waste fewer brain cycles thinking about Zoom.


For what it's worth, Zoom has worked well enough in the browser for me every time I've been forced to use it.


The thing is, in a reasonably designed replacement that shouldn't matter. Requiring applications to update to the new thing just to keep working is absurd.


Reasonable applications would use something like SDL to abstract it.

But it depends on complexity: Wine implementing Wayland support, for example, is a big deal, and it's not trivial.


Using SDL is a good idea, but it isn't a stable interface either; e.g. SDL 1 programs will not run with SDL 2. SDL 2 is also not something you can rely on being installed: you generally need to ship your own copy, and even if you want to rely on distro packages you will need to adapt your code eventually as old versions are purged. SDL is also not a reasonable abstraction for all kinds of applications, as it is focused on games and game-like use cases.

Backwards compatibility really should be the primary focus for anything looking to replace a system component. We do have Xwayland for that, but it's an incomplete solution by design, as X clients won't see non-X windows, nor can they capture the whole desktop.
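To make the "abstract it" point concrete: libraries like SDL pick a display backend at runtime so the application doesn't have to care whether it's on Wayland or X11. A rough illustrative sketch of that kind of dispatch in Python (the environment variable names are the real ones compositors and X servers set; the function itself is hypothetical, not SDL's actual selection logic):

```python
import os

def pick_backend(env=None):
    """Heuristic backend selection in the spirit of what SDL-style
    abstraction layers do for you (illustrative sketch only):
    prefer Wayland when a compositor socket is advertised, fall
    back to X11, otherwise run headless."""
    if env is None:
        env = dict(os.environ)
    if env.get("WAYLAND_DISPLAY"):   # set by the Wayland compositor
        return "wayland"
    if env.get("DISPLAY"):           # set by the X server (or XWayland)
        return "x11"
    return "headless"

print(pick_backend({"WAYLAND_DISPLAY": "wayland-0"}))  # wayland
print(pick_backend({"DISPLAY": ":0"}))                 # x11
print(pick_backend({}))                                # headless
```

Note that on a Wayland session both variables are usually set (XWayland provides DISPLAY), which is why a Wayland-first preference order matters.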


Anyone with more complex cases like Wine can work with Wayland directly. Wayland protocols are pretty stable. If they care to support X too they'd need two paths.

But something like FreeRDP, for example, managed to do it with SDL well enough.

SDL, of all projects, actually did think about translating older ABIs to avoid breaking changes; i.e. there is an SDL 1-over-SDL 2 compatibility layer. It would be nice to see more such efforts on Linux.


> Before the Wayland complaints take over the thread

Just BTW, there were no Wayland complaints taking over the thread till you started this with that somewhat inflammatory copypasta. Here's why your suggestion for controlling Wayland spam won't work: ...


That's certainly not my story from a user's perspective. For over a decade now, whenever I install a new system, at some point, some thing or another doesn't work and the simple fix ends up something like:

WaylandEnable=false

Surely there are interesting arguments to be made about how the design and philosophy behind Wayland is so much better than that of X, but the way I experience it is that Wayland is a nuisance, whereas X just does what it should do.
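(For context, that line is the GDM display manager's option; assuming GDM, the fix goes in /etc/gdm/custom.conf under the [daemon] section. Other display managers have their own switches.)

```ini
# /etc/gdm/custom.conf -- GDM only; a sketch of the usual fix
[daemon]
# Fall back to the Xorg session instead of Wayland
WaylandEnable=false
```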


Never in my decades of Linux usage have I ever seen this "WaylandEnable=false" config option. If anything, Wayland has always been the _opt-in_ setting for many applications, not _opt-out_ as you are suggesting.


Wayland has been opt-out on Fedora with GNOME since 2016 and on Ubuntu since 2018 (though they briefly reverted due to issues). I guess "for over a decade" was a poor choice of words, but that's what it felt like.


Classic smear the opposition tactic instead of giving real arguments and addressing concerns.


[flagged]


Wanna know about acorns? No? Well, in a nutshell, they're an oak tree.


Is systemd going to add its own DRI layer?


I really hope not.



