What looks like "typical Linux geeks being geeks" with this situation:
1) the old, mostly working thing is being abandoned in favor of
2) that new thing which doesn't work in so many cases it's laughable, even after 11 years. How many years was it between the concept of X and a working release at Palo Alto?
Note that the new situation is so perfect for passing the buck from the windowing system to the compositors, and compositor folks are busy fighting feuds over which of one another's protocols, private or even public, they are not going to support.
Oh, and the browsers. Chromium is making its first shy bumbling steps towards actually working on Wayland! A mere decade later!
I've heard it was so much easier to write Wayland clients; what could have happened?
This criticism looks like "free as in beer, and I know better".
1. The old, mostly working thing awaits your commit
2. That new thing could use some help too
I recommend a developer's story about this "mostly working thing" (2014) [1]. It is quite fun and eye-opening; he clearly knows his subject better than most of the commenters.
The Wayland demo worked almost from day one; I ran it in 2010 [2]. But we need applications, which means migrating toolkits, and that took 10 years.
This is a very toxic development culture: cancel culture, a mob.
Saying Wayland is just non-functional for reasonable use-cases without significant user workarounds (usually going back a decade in execution and clarity) isn't toxic culture, it's the truth. Yes, people can work on what they want. By the same logic, anyone can say that the direction they're taking is bad.
I strongly disagree. Since switching to Fedora almost 4 years ago, I've only booted to X.org once, three years ago. I haven't needed it since.
I do browser-based screensharing (of both individual windows and the whole screen), I have a mixed DPI monitor setup, I've played games on Steam, &c. I need no workarounds. GNOME just works.
About the only things I don't use are nvidia's crappy drivers.
You've got two primary use cases for nVidia's GPUs:
1. Gaming. Just about nobody games on Linux with nVidia. Most people I know who game (from Linux) with nVidia GPUs use PCI passthrough to a guest VM running Windows. Very few use nVidia GPUs on their host system. My primary setup is a 2950x/128GB DDR4/Vega 56/2080 Super -- the latter goes to kvm exclusively.
2. "Research" such as machine learning. You don't need Wayland or X11 for this, and the proprietary drivers work the best. To be honest, this is a space where Ubuntu Server performs best. I keep one of these around in a VM (see above) for this purpose. (Mining also goes in this category)
Everything else can probably be done with an Intel or AMD GPU tbh.
The GPU stats at https://www.gamingonlinux.com/index.php?module=statistics&vi... disagree with your first point. A few years ago I did all of my gaming direct on Linux with an nVidia GPU, and did so for over a decade, on various laptops with and without mixed Intel/nVidia GPU setups.
Anyone who wants a stable, performant GPU today can buy AMD. The future is here; that's Wayland's target.
I live in the past — Intel GPU, X.Org, xmonad. I thought I'd wait a few more years, but I checked Sway and it works. I'll check waymonad; maybe it works, maybe I'll make it work.
I believe your assertion is that Wayland's failure is that it is not ready for end users.
Strange point; it should have been burned 10 years ago, then. I use X.Org and it works perfectly on Arch Linux; some claim problems on their distribution. That should be a critique of that distribution.
> Chromium is making its first shy bumbling steps towards actually working on Wayland
And Firefox support is behind the MOZ_ENABLE_WAYLAND=1 flag. Clearly Wayland is in the "early adopters" stage, and early adopters should not whine.
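For reference, opting in is a one-line affair (the environment variable is Mozilla's own):
$ MOZ_ENABLE_WAYLAND=1 firefox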
> By the same logic, anyone can say that the direction they're taking is bad.
You've clearly not watched the presentation. Some people think they know better than the developer of that technology; at my job that would be an insult. I've had enough of people stating "technical debt is not real".
> Dear Google Cloud: Your Deprecation Policy Is Killing You
Open source is nothing like Google deprecating its services; anyone can run the code. But it is naive to expect free, indefinite support.
My grandfather was a preacher and a millennialist, which means he thought the world would end soon. He based this on his intimate knowledge of the Bible, spanning over 60 years of study. He was also a missionary who started churches and had a radio show.
Surely that fulfills both contribution and prediction.
I therefore disagree with your characterization.
Now, he appears to have been wrong, but just like preachers may be wrong, critics may also be right even without contribution.
He knew the Bible, and he was right by his own interpretation of it. In the programming world, the word is reality; there is only one meaning. Those who know the word forge reality. This requires knowledge.
Critics who do not possess knowledge appeal to emotions and spread FUD. They may be right by coincidence, like stopped clocks.
I've described the context, and you've confirmed it: a statement supported not by fact but by an inner voice. My morality stems from Christianity; I believe the Enlightenment got it right — the discovery of nature is the discovery of God. I've been to church; I've seen a priest who shared joy. I've seen other priests I would rather not see.
An X.Org developer described what's wrong with the project [1]. Not one of the Wayland critics addressed his points. They have no facts; they just want it to fail. Those who work hard and share for free get anger in return. The Sway / wlroots maintainer (ddevault) was pushed over the edge by this misinformation, and I do not always agree with him, but I am not a maintainer either. Where is the love, and where is God, in this story?
> That’s not even considering any personal goals, which I have vanishingly little time for. I get zero exercise, and though my diet is mostly reasonable the majority of it is delivery unless I get the odd 2 hours to visit the grocery store. That is, unless I want to spend those 2 hours with my friends, which means it’s back to delivery. My dating life is almost nonexistent. I want to spend more time studying Japanese, but it’s either that or keeping up with my leisure reading. Lofty goals of also studying Chinese or Arabic are but dust in the wind. I’m addicted to caffeine, again.
> Less healthy ways have included walking to the corner store to buy unhealthy comfort foods, consuming alcohol or weed too much or too often, getting in stupid internet arguments, being mean to my friends and colleagues, and googling myself to read negative comments. [3]
That's the story behind Linux infrastructure. Because of such work I have Linux, and I am grateful for it; yet the mob wants to finish these people off.
It took the python community 10 years to migrate to python 3, and the migration process from python 2 to python 3, while tedious, is certainly a lot easier than porting a gui app from xorg to wayland. It could take some time until most app developers have migrated their apps from xorg to wayland.
Nothing required a big 2/3 change. Ruby and Go did it right: a series of small changes. Python 2.* had it right too. The attitude of the language developers was clearly stated in PEP 414 and the 2to3 approach: no backward compatibility. That is nonsense; every Python program had backward compatibility several minor versions deep.
Select one issue and work on it. Unicode:
# python prev
u""  # or: from __future__ import unicode_literals
b""
vs.
# python next
u""
b""  # or: from __past__ import binary_literals (a hypothetical opt-out, to make the point)
The facts speak for themselves: the change was too big for users to overcome; the price was too high. I've lived through Ruby's string changes; they were not a big deal.
The Wayland transition is nothing like Python 2/3. It is big but mostly invisible: XWayland runs X11 applications, toolkits hide the implementation details, and there is no X.Org deprecation.
Just like you said, for the majority of people migration from python 2 to 3 is mostly painless. Some code doesn't even have to be rewritten and might work on python 3 without any modification. However, some applications require a major migration because their core functionality depends on some feature in python 2 that changed in python 3, and they won't make that major refactoring effort until the benefit of migrating to python 3 outweighs staying on python 2. They were essentially waiting until the ecosystem migrated to python 3, but as they're also part of the ecosystem (and often the essential part in their niche), this created a chicken-and-egg problem that took a decade to resolve.
The situation with the wayland migration seems somewhat similar to the python 3 migration. Many gui apps that use a gui toolkit probably don't need major modification, or any modification at all. But some apps that require platform-specific access are heavily affected and might require a major refactor, and they probably won't do it until the benefit of migrating to wayland outweighs staying on xorg.
Also, unlike the python 3 migration, where most of the community agreed that the move was justified and would have to happen eventually, in the case of the wayland migration community opinion seems to be split, which certainly harms migration progress.
I've built Python 3.2 just to show the problem introduced by the language designers:
$ python3.2
Python 3.2.6 (default, Oct 26 2020, 15:29:09)
[GCC 10.2.0] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> u"foo"
File "<stdin>", line 1
u"foo"
^
SyntaxError: invalid syntax
This is hostile behavior. It took four years to fix (Python 3.3), and it is still supported:
$ python3
Python 3.8.6 (default, Sep 30 2020, 04:00:38)
[GCC 10.2.0] on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> u"foo"
'foo'
Python 3 got better at supporting old syntax over the years. Take a look at Django: it got Python 3 support on February 26, 2013 [1].
> Django 1.5 introduces support for Python 3 - specifically, Python 3.2 and above.
Notice the version gap: no support for 3.0 or 3.1. That was quite common.
> python 3 migration, where most of the community agreed that the move was justified
If people had believed the breaking change was justified, they would have moved to 3.0; that did not happen.
I've seen both the Python and Ruby communities; Ruby made two breaking changes (1.9 and 2.0) while Python got its PEP 414. There was a huge split among developers.
---
Essentially you are describing how the switch happened, while I'm describing which lessons should have been learned. It is sad if the Python community learned nothing.
> community opinion seems to be split, which certainly harms migration progress.
The developers have done a great job: you can't run 2.7 code on 3.*, but you can run X11 applications in XWayland. Is there a split in the developer community? All I see is FUD among users.
Worth noting that in the python 2 to 3 story, a migration path was mostly provided, along with tools that tried to automatically refactor your code; the latest 2.x release was supported for nearly a decade, and there were directions on how to temporarily write code that works on both python 2 and python 3.
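A minimal sketch of that straddling style (the __future__ imports are real; the snippet itself is illustrative):
# Runs unchanged on Python 2.7 and on Python 3.
from __future__ import print_function, unicode_literals

text = "caf\xe9"     # a unicode literal on both versions
data = b"\x00\x01"   # a bytes literal on both versions
print(type(text), type(data))
# 2.7 prints: <type 'unicode'> <type 'str'>
# 3.x prints: <class 'str'> <class 'bytes'>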
Python 3 is only unusual in its failure. The Perl 6 Apocalypses [1] are wonderful; Python 3000 is pointless [2].
I was a Python developer at that time, and the writing was on the wall. They had not provided a migration path. PEP 414 records the complaint "This PEP may harm adoption of Python 3.2" [3]; this is insane. The community voted with its feet: it stayed with Python 2 until a clean migration path arrived. I switched to Ruby and have been happy since.
Ruby 1.8.7 was supported for five years.
You had to write Python 2 code and run 2to3; you had to live in the past or abandon it.
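For those who missed that era, the workflow looked roughly like this (file name made up; 2to3 ships with Python):
# hello.py, the Python 2 source you actually edit:
print "hello", u"caf\xe9"

$ 2to3 -w hello.py    # rewrites the file in place for Python 3

# hello.py after conversion (note the u prefix is stripped too):
print("hello", "caf\xe9")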
> 1. The old, mostly working thing awaits your commit
From what I understand there are commits, just not releases. What is the point of committing if the maintainers do not bother to make a proper release so that your work can be distributed to the users?
Do you imply X.Org is being sabotaged? That issues are fixed and the system is stable, and only the maintainer holds it back? You should notify the disillusioned people who still commit to the repo [1].
Newer isn't better-designed. Unix was really well-designed, compared to so much of the modern stuff, and it does everything I need. Nineties Linux was a nice Unix. I wish we hadn't spent the past quarter-century turning it into nineties Windows, with layers upon layers upon layers of cruft.
I don't get why Wayland is slow, when Enlightenment was fast on machines with 32MB of RAM, a 3dfx Voodoo, a spinning HDD, and a 66MHz CPU...
One of these days you should read the X section of the Unix Haters Handbook. Wayland is being built for the world as it is, X was built for a world that never was. In this case, it’s quite hard to not be better designed.
And yet it suffices to have two GPUs in your system (both driven by OSS drivers) for the option of running Wayland to disappear from GDM, on Fedoras up to 32 at least.
The laptop in question is from late 2016.
So much for "the world that is". My machine must be very, very otherworldly.
No, really. The author, whose name I can't remember at the moment, complains that X has the server/client roles reversed; that xcalc is a server and the display is a client. The usual defense is that it's "tongue in cheek", but from a networking standpoint it is simply wrong.
No. This is the thinking of people who can't see the utility of a use case they never personally use. It's the thinking of people who believe that "ssh -X" is "obsolete" because RDP exists, or that window managers can become obsolete because the fashion has moved on. X works, it's worked for decades, and saying that it's never worked doesn't negate that.
I find it really arrogant to claim "this software does everything I need, so it should never change", while in reality you are but an infinitesimally small part of the user base said software has to support.
Sure, all the software can stay where they are if all the developers had to serve is you. But that is just not the reality.
There isn't a magical different userbase Linux has to serve which has somehow bizarre needs not served by thoughtfully-designed older software, but which newer, poorly-designed software somehow meets.
Yes, we need a newer web browser, but that could run on old-school Linux/Unix just fine. Aside from that, there isn't anything wrong with nineties Linux which couldn't have been brought up to modern standards with a bit more discipline.
Someone decided ALSA (or OSS, it doesn't matter) was rough to deal with, so they built Pulse and JACK on top of it, rather than a modest expansion of ALSA. Someone didn't like the network layer, so they built a userspace wrapper, and someone didn't like that, so they wrapped it up in a GUI.
At the end of the day:
1) Everything is slow, requiring literally more than 100x the resources it once did.
2) Everything is complex, with layers upon layers, and text files with comments "This is managed by my hack. DO NOT EDIT THIS BY HAND. I keep the exact same data elsewhere, in my own config file, since I couldn't be bothered to read what's in the line below."
3) Everything is brittle and hard-to-understand. I knew how nineties Linux booted. Today, the logic is distributed among dozens of layers, mostly because people wanted to reinvent new things, rather than polishing/improving/fixing old ones. Different apps go for the wrong layer, and tutorials point to the wrong ones too.
Now Ubuntu is throwing its hands up at the mess, and building snap to hide all this under yet. another. layer.
As a footnote, ALSA was just about the last time this was done right, with it replacing OSS while maintaining compatibility.
Maybe you should find a community that shares your views.
I don't use pulseaudio, networkmanager, or a desktop environment; everything works perfectly on Arch Linux [1]. Arch has a great wiki, and it is accurate. The system is quite transparent.
I am fine with systemd but, for example, Void Linux [2] uses runit, Alpine Linux [3] uses OpenRC and is compiled statically.
Arch Linux's pacman serves my needs perfectly: it is quick, and it shows only relevant information. The AUR provides ready-made PKGBUILDs.
The abundance of Linux distributions is a strength. It allows people with niche requirements to find a home. We are far better members of society when supporting our systems than when grunting.
I have never had problems distributing binary builds for Linux. There are a number of system libraries that are backwards compatible: glibc, asound, libGL, etc. - test on the oldest you want to support and any other libs you just ship yourself. And that would not be different with only one distro either unless you only care about one release of that distro.
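One quick sanity check of the glibc floor on a build box, as a sketch (gnu_get_libc_version is a real glibc entry point; the rest is illustrative):
import ctypes

# Ask the running C library which glibc version it is, e.g. "2.31".
libc = ctypes.CDLL("libc.so.6")
libc.gnu_get_libc_version.restype = ctypes.c_char_p
print(libc.gnu_get_libc_version().decode())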
> Arch Linux [1], Void Linux [2], Alpine Linux [3]
Gentoo is feeling left out. It's the OG distro for those who have strong opinions on how their system should work, but not strong enough to do everything by hand.
wegs reminded me of my experience with Ubuntu. I started with GNOME, then tried XFCE and Openbox; the system became brittle, and it raised questions like "How do I configure WiFi without a DE?".
I switched to Arch Linux, whose base install covers network configuration. It gave me a stable platform: I can grow my knowledge, and I can fix my system.
Void Linux and Alpine Linux are just examples of what could match the concerns voiced here. A lot of people recommend Manjaro; it looks like a simple way to try an Arch-based distro. Some problems stem from release upgrades, where Debian Testing could help. But I have not tried them.
I've tried Ubuntu; I don't like distribution upgrades, the default theme, or the constant experiments on users. Gentoo was my first distro; as a C++ dev at the time, I found the compilation fascinating. I still enjoy the three-commands joke [1], though it looks like it is possible to install binary packages these days. I've tried Alpine; I don't like apk, and systemd and glibc are good enough for me. I've tried OpenBSD; hardware support was not as good as on Linux. I'm playing with NixOS, but I am so used to Arch.
And I want to stress that each of these systems has something I enjoy. It is just that I am driven by minimizing the negatives.
> As a footnote, ALSA was just about the last time this was done right, with it replacing OSS while maintaining compatibility.
Nah, even ALSA was a needlessly overcomplicated mess. They should have just built on top of the super simple /dev/dsp. Instead we got new incompatible APIs while OSS apps were left requiring exclusive access to your sound card.
Computers aren't things I enjoy. They're things I stomach. I think I'd use OpenBSD if I were confident that after setting it up, it worked reliably, without changes, for the next few decades, with transparent system upgrades.
Part of that is working Bluetooth, power management, webcams, video conferencing, OBS, and similar. Historically, OpenBSD was behind Linux on working reliably.
Debian used to do this before:
1) It fell behind on hardware support, as laptops took over desktops; and
2) Ubuntu/Fedora started putting out massive numbers of half-baked technologies, and everything building atop those.
I switched to Ubuntu, without Gnome, but increasingly things don't work without Ubuntu's UX, and snap shows a path I don't want to head down anymore.
Webcams and power management work fine on OpenBSD (assuming there exist drivers for your hardware), but Bluetooth isn’t supported at all.
What I like about it is the simplicity of everything. If you want to change your mouse sensitivity, you add a line to a particular file in /etc. If you want to autojoin a particular WiFi network, you add the SSID and password to a particular file in /etc. If you want to start a daemon on boot... you get the idea. No massive complex configuration systems with giant blobs of XML that nobody understands.
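For example, WiFi autojoin really is a couple of lines in hostname.if(5) (the interface name and credentials here are made up):
# /etc/hostname.iwm0
join homenet wpakey secretpassphrase
inet autoconf

and a daemon is enabled at boot with a single command: rcctl enable ntpd.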
Unfortunately the flip side is that things move more slowly — they won’t support Bluetooth until someone writes code that meets the system’s bar for correctness, simplicity, reliability, documentation, etc. — which might be never. But the stuff that does work is fantastic.
If hardware/software support requirements tie you to more mainstream OSs, I do agree with other posters that Arch is the closest thing to what you want, but it’s still far from perfect.
3) How is video support with ATI/Nvidia? Will my multimonitor setup break?
The other thing which scares me is the manual upgrade process. On my systems, I've done apt-get update/dist-upgrade for a quarter century now, with never an issue.
1) There isn’t a standalone binary, but you might be able to get the web client working in Chromium. I’m not sure.
2) I don’t know.
3) AFAIK it should work.
Using the BSDs as a workstation, rather than a server, is a bit niche. Like Linux, but even more so. So unfortunately it’s still a labor of love and a lot of modern stuff won’t be supported.
Especially in the Linux world, where not asking too much of your own PC is seen as a virtue: people run tiling WMs from the nineties, spend their time in Emacs, and say everything is fine.
Meanwhile I want my Linux system to run VR, multiple 4K displays, very demanding games and bluetooth headphones. And Linux is the worst for it, because everything feels laggy and half polished there and on the proprietary OS my setup feels MUCH faster.
Sure, it's not Linux's fault, but let's stop saying everything is fine and dandy because Emacs is still running.
SGIs ran VR, multiple displays, and very demanding apps in the nineties too. They did all sorts of wonky, complex, 3d input devices too. So did DEC Alphas. This is the stuff Unix was built for.
My claim is that nineties Linux was much closer to having the right architecture for it than 2020 Linux for this sort of stuff. The reinventions and onion layers didn't help; they hurt.
I think the only piece nineties Linux didn't anticipate was the level of hot-swapping hardware (USB, Bluetooth, displays, etc.), and the level of power management. Modern Linux never got that architected or integrated quite right, because it was built with hack upon hack upon kludge. It's split up in bizarre ways between kernel and user space which would be really tough to clean up right now.
* My USB webcams wouldn't show up in a different order each time I reboot. This works fine under Windows and Mac.
* My monitor configuration wouldn't be hardcoded in my xorg config file, or swapped around manually with xrandr. I'd have a way to code up config options for whatever is plugged in, and if something unanticipated happens, it'd do something reasonable until I coded that config in too.
* I wouldn't need to reconfigure my drawing tablet to connect to the right monitor each time I plug it in.
* The system wouldn't get into an unrecoverable, unstable state with e.g. an unreliable USB cable.
.. and so on. It's designed for a fixed set of hardware, with layers on top of that to support hotswapping. I don't have "USB 4k Logitech Webcam" on the native level. I have /dev/video3. I then have layers to map names back.
Same thing with HDDs too, actually. I refer to them as /dev/sdc4, rather than by a GUID or name or similar. Layers with onions.
And so on. The /dev/sd_ is primary, with UUIDs as kind of an afterthought.
It ought to be the other way around, with UUIDs as the primary, proper, canonical name and interface, and a legacy backwards-compatibility layer for /dev/sd_ devices. It's even reflected in the directory structure. Yes, I CAN list disk "by-uuid," label, id, partuuid, or path, but those are special cases with sd_ as canonical.
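To be concrete, the opt-in layer looks like this today (the UUID is made up; udev maintains the symlinks):
$ ls -l /dev/disk/by-uuid/    # symlinks pointing back at /dev/sdXN
# /etc/fstab
UUID=2f6d02f5-0a4c-4f6e-9f0e-8c1b5a7d3e21  /  ext4  defaults  0 1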
It's kinda retrokludged in there. I never said USB/etc. didn't work. Just that it wasn't architected for it.
It sounds like the guy is way overexaggerating and basically didn't understand the library he was working with. What he describes isn't anything special: the "Evas_Object" (aside from the weird name) sounds like an HWND in Windows or a Window in X. The part about layouts sounds similar to sizers in wxWidgets, which are used so that you can avoid using heavyweight native widgets just for layout, as some other toolkits do. The event callback sounds like it uses a generic messaging system; again, not much different from something like WndProc in Windows.
EFL is written in C, so it does have to work within the limitations of C, but non-toy GUIs are inherently object oriented, so things will be a bit more complicated than in Qt, which can lean on C++'s support for object-oriented programming. Especially since EFL apparently also supports bindings to other languages, which may make things a bit more complicated still.
I ran Enlightenment for maybe a month or two to try it. I mostly ran twm, fvwm, and similar at the time. Those are obviously not satisfactory for a typical user.
I gave it as an example of something which ran rather sophisticated theming/compositing/effects on hardware at least 2 orders of magnitude slower than today, and something which modern Ubuntu/Fedora struggle with.
I'd encourage you to compare the specs of the 3dfx Voodoo (the "magic dust") to even bottom-barrel integrated graphics of 2020.
> 1) the old, mostly working thing is being abandoned in favor of
Except that X11 is NOT "mostly working" for a significant amount of modern usages.
Yes, if you happen to want to run your Terminal, Emacs, etc. remotely over a network connection, X11 is your huckleberry--but only because anything more complicated than a bitmap makes that a very difficult problem.
If you want HiDPI, subpixel anti-aliasing, color calibration, multiple resolutions on multiple displays, smooth video, no tearing, etc. X11 only works to a certain degree, some days, for some people, for some video cards. And it's not even clear how those should work over a network connection.
This is not new. After all, X11 critiques came in for a full chapter in the Unix Haters Handbook and that's 25 years old. Wayland is simply revisiting the same problems as X Server Extensions 30 years later (they aren't ubiquitous so nobody uses them so they aren't ubiquitous).
The problem is that people who use obscure features are VERY vocal about it while people who simply abandon your operating system because accelerated video doesn't work reliably are very quiet.
Maybe Wayland isn't the way forward, but X11 certainly hasn't been the way forward for decades.
The only advantage I can see is that if Wayland manages to tear out the X11-isms in everybody, what comes after Wayland will have a lot easier time of it.
None of that require throwing away the entirety of Xorg/X11, breaking every existing application while expecting everyone to rewrite their code in a completely different way, breaking a ton of existing workflows (often without alternatives) and splitting the already tiny Linux desktop in half.
Every single thing you've mentioned can be fixed with Xorg/X11.
(except subpixel antialiasing, which has already existed for a long time; I'm not sure what you're referring to)
> Every single thing you've mentioned can be fixed with Xorg/X11.
Then why hasn't it been fixed?
In open source, those who make the code get to make the decisions--for better or for worse.
My personal opinion as to why X11 gets so few contributors--the build system is so terrible that nobody wants to touch it anymore. I suspect they would get a lot more contributors if they changed to something like meson/ninja which doesn't need to recompile the universe to be correct.
>> that new thing which doesn't work in so many cases it's laughable, even after 11 years.
I've been using Wayland for years now and it works great. The one and only thing I use X for is OBS Studio, because they're still working out the window capture stuff, which BTW means poking the right hole through security that X doesn't even have.
Most of the things that don't work well on Wayland are old junk.
Do you use Chrome? Scrolling with a touchpad still doesn’t work on Wayland (it emulates discrete mouse wheel scrolling, rather than going pixel-by-pixel).
There is IIRC some chrome setting that fixes this, but it doesn’t help with chrome-based electron apps that don’t expose the internal settings (like Slack).
Another problematic area is input method editors, mostly because IMEs and applications using them have not implemented the respective protocols on Wayland yet.
> What looks like "typical Linux geeks being geeks" with this situation:
I call systems like this CADT-compliant after Jamie Zawinski's Cascade of Attention-Deficit Teenagers idea.
Wayland is a system for which CADT-compliance (and maybe security) trumps nearly all other concerns. No surprise, the primary use case for Wayland is and was always GNOME -- the very system for which Zawinski coined CADT.
I mean... having used both X and Wayland as daily drivers, you're fucking insane if you think you can EVER get me back on X.
On Wayland I get excellent multi-monitor support (mixed scaling ratios, much better automatic detection and configuration, much better plug-and-play). I also have a touchpad on my XPS that feels just as good as a Mac.
To boot, I haven't had to touch a config file related to input devices or output devices a single time using Arch/GDE/Wayland.
Honestly, I'd probably still be running linux in a VM on my laptop if it weren't for Wayland.
If X is your opinion of "stable and working" then I don't want any part of your systems.
Cannot say Wayland works smoothly on my triple-monitor setup. One of the monitors sometimes stops working randomly, and wakes up to display Plymouth screen when I reboot the thing.
Dell Precision 7520 with an AMD GPU. The degree of flakiness is different depending on whether you're on Plasma or GNOME, but it's there nonetheless.
I mean, my data point of one isn't all that helpful if you're having issues, but triple monitors do indeed work on my end.
I switch between a station that has a 4k display next to a standard 1920x1080 display as well as my laptop display (3200x1800) and my home setup with the laptop and 2 4k displays.
I had an issue on the 4k displays when I attempted to run two displays and a usb hub on a single thunderbolt line, but that wasn't Wayland, that was me being dumb: the thunderbolt protocol only supports 40Gb/s, each monitor uses 20Gb/s, and the hub eats another 10Gb/s. If the hub got detected last, it dropped to USB 2; if one of the monitors came online last, it would drop to a 30Hz refresh rate. Frankly, I was a little floored when I realized that I was the one being dumb and the system was mostly still just making things work. Just for shits and giggles I booted up an X session afterwards to see what it does. The answer is lots of black screen.
I'm guessing that's an AMD GPU thing and not a Wayland thing. I run a triple-monitor setup with an AMD GPU on X.org and I have a similar problem: every once in a while when I boot my PC one of the monitors just doesn't come to life. Restarting X usually fixes it.
On Kubuntu 20.04, and honestly ever since I first tried it from Kubuntu 16.04 on up, multi-monitor support has been terrific for me. Maybe Gnome has problems, but not KDE on X.
X works for me on systems with multi-month uptimes (dictated by software updates that have nothing to do with X) and networking windows over the Internet, something RDP (and, to the best of my knowledge, Wayland) is incapable of. Meanwhile, Wayland doesn't run the software I want to run and doesn't run on the hardware I own.
Well, not your typical end user routine I guess. I’d like that in mainstream distros please, without having to edit .desktop files to shove in all the commandline flags.
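For reference, the incantation in question looks something like this (Ozone flag names as of Chromium circa 2020; they have a habit of changing):
Exec=chromium --enable-features=UseOzonePlatform --ozone-platform=wayland %U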
I mean, could it help if we mercilessly threw away all the code from Xorg that is there to support all the Unixes from the 80s and 90s, all the code paths for kernel-bypassing direct hardware access, and reimplemented the protocol on the same stack the Wayland compositors live on? Wouldn't that have solved the problems that were not about the protocol design?
Sure if someone wrote it in Rust it would be also safe and secure /s
Sure, but that's not how hobbies work. You can't take someone's volunteer work and turn that into work on a thing that is not the thing they volunteered their work on and expect to get the same productivity.
It's like saying "Imagine if all the time you, a software engineer, spent on taking care of your child was spent on making a graphical server". Sure, but I don't want to make a graphical server, I want to turn people into dinosaurs.
I think you might look at it from this perspective: it will take them deserting X11 to get stuff for Wayland working, because otherwise everyone just says "well, X11 works well enough". Now people won't have much of a choice other than not supporting the Linux desktop at all.
This is the true answer. Nobody benefits from replacing x.org except large companies. I for one have already switched my day to day operations over to OpenBSD, as I've seen too much strange corporate agitation propaganda trying to get us to relinquish the things that make Linux great - C, X11, native non-containerized applications, and so on.
I am not necessarily against Wayland or new things in general.
But it bothers me when no clear upgrade path is defined ("drop your stuff" is not acceptable) and a half-assed, incomplete solution is proposed instead, and backwards compatibility is pretty much disregarded.
For what concerns my personal computing, I'll stay on Xorg until XFCE supports Wayland. Then I'll update.
I keep trying to use Wayland, but it never fully works. Everyone keeps saying how well the highdpi stuff works, but then it really only works for a subset of things. For the rest it's actually worse than Xorg.
Multiscreen in Xorg is kinda mushy, so I thought maybe Wayland fixes it, but no, it doesn't.
Wayland is now 12 years old and everything is still half-baked.
It quotes an Intel developer saying they don't want to do any more stuff on Xorg. But the reality is that, as much as I admire Intel's open source contributions, I don't remember a time when all the features in the Intel driver actually fully worked. But sure, maybe it's an Xorg issue, or maybe they don't know how to do release management.
Either way, Wayland doesn't seem to solve the problems it promised to fix.
Having used Intel open source drivers for 12 years, I've never lost the opinion I gained back with the X3100 GPU: Intel is the darling because technically open-sourcing the driver papered over the many, many faults of their code.
Whenever I had a chance to run on nvidia binary drivers, the only things I occasionally missed were some new features, or having to wait a bit longer to update the kernel. Stability was better, drivers more performant, and I don't remember daily fighting with memory leaks.
And X.Org's driver architecture could be replaced completely (in fact, it could be made to run on the same stack as Wayland); it wouldn't be the first compositing X server around, and it could use methods that deal with the lag, noticeable to many, involved in compositor-based UIs.
> Whenever I had a chance to run on nvidia binary drivers, the only things I occasionally missed were some new features, or having to wait a bit longer to update the kernel. Stability was better, drivers more performant, and I don't remember daily fighting with memory leaks.
I had a Nvidia Riva TNT2 and later on an Nvidia GeForce. I ran Linux on it.
I had stability issues with the driver, but I solved it the following way: ran one X server with a DE, and another X server with Nvidia's proprietary driver (mostly for games). This way, if I had to kill the X server using Nvidia's driver I didn't lose any work.
If that wasn't enough, all the bloody time there were massive security problems found in Nvidia's proprietary driver. I don't know if that is still the case, cause I switched away to ATi in the 00s, and Intel graphics cards + ThinkPad as laptop. ATi/AMD has come a long way ever since. Their FOSS drivers are stable, and they deliver (see various Phoronix benchmarks).
There's more to it: X.Org is based on lowest-common-denominator code from the early days of X11, and the internal driver system, despite upgrades, is a bit lacking.
There's glamor, but AFAIK it's not as well tested as it should be, and it is still shoehorned into the old model.
An example of not following the old model is Xsgi, which did (hw) compositing and was quite ingenious in many ways.
> I keep trying to use Wayland, but it never fully works. Everyone keeps saying how well the highdpi stuff works
I use Wayland daily and have for a few years. It’s clearly gotten better, and I rarely encounter problems. I do have my load of applications still running in XWayland though.
But yes. Support for varying DPI in my multi-monitor setup is handled much better on Wayland than on X11. I would say much better than on Windows too.
How's the forced v-sync? I assume all games run XWayland, which makes it a non-issue. (Otherwise, it'd presumably be an FPS hit in a world where adaptive sync [like G-SYNC but not really FreeSync since the latter doesn't really work in Linux lol] makes tearing a thing of the past and obviates v-sync entirely.)
Also, can you use xdotool for key input redirection or screen capture programs and stuff yet?
Wayland is a protocol. It has nothing to do with multi screen support. You're talking about the compositor you used. The one I use handles multi-screen setups quite well.
I never got to fully understand Wayland's model, but if this means that something that was previously handled by the display server for everybody now has to be solved over and over again by every single desktop environment (or at least by something like wlroots)... how isn't this a step back?
Wayland is like X11. Xorg implements the X11 protocol. There are other X11 server implementations — XWin32 is one example on Windows.
Nothing has changed with Wayland except that we have a new thing and lots of groups writing compositors. And this is great — Mutter, Kwin, wlroots, Mir — and they will all speak a common protocol for putting stuff on the screen and handling input events. And projects with similar use-cases ("desktops") are standardizing on common dbus interfaces for non-display stuff.
This is genuinely so much better than the Xorg monoculture. Wayland’s design has made it possible for lots of different groups to implement display servers and have interoperability because what we had before was “X11 actually means do what Xorg does.”
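You can even see the "common protocol" directly with weston's demo client, which lists the globals whatever compositor you are on advertises (output abridged and illustrative; versions vary by compositor):
$ weston-info
interface: 'wl_compositor', version: 4, name: 1
interface: 'wl_seat', version: 5, name: 2
interface: 'xdg_wm_base', version: 2, name: 3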
There’s lots of in-fighting about the scope of Wayland and people that want to make a protocol for putting pixels on the screen also handle “desktop stuff” like audio, screenshots, screen recording, keybindings, input automation, authentication. I think this is misguided because it would effectively turn Wayland into a generic message bus between “apps with windows” and the display server when we already have a generic message bus for every application — dbus.
New interfaces start out DE-specific and, after shaking out, get promoted to org.freedesktop.* once standardized. Notifications have been "DE specific" in exactly this way for years and years, and nobody seems to complain about org.freedesktop.Notifications.
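As a sketch of what "desktop stuff over the message bus" looks like from an application (dbus-python bindings; the method signature is from the org.freedesktop.Notifications spec):
import dbus

bus = dbus.SessionBus()
obj = bus.get_object("org.freedesktop.Notifications",
                     "/org/freedesktop/Notifications")
notifications = dbus.Interface(obj, "org.freedesktop.Notifications")
# Notify(app_name, replaces_id, app_icon, summary, body,
#        actions, hints, expire_timeout_ms)
notifications.Notify("demo", 0, "", "Hello",
                     "Delivered over D-Bus, not over X11 or Wayland",
                     [], {}, 5000)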
I agree. On one hand I can understand the need to shrink code, focus on the core functionality etc..., but on the other hand X has e.g. "xset" and "xbindkeys" which can be used for all X-desktops (or "Window Managers" or however they're called). With wayland each single desktop environment has to re-implement all that functionality => looks like wasted effort to me - the modularity (from the point of view of functionality) of X is lost in Wayland.
One of the innovations since the 1980s when X was designed are shared libraries, so you can have libweston and wlroots now https://github.com/swaywm/wlroots
While the first X11 release was in 1987, the fundamental architecture was designed already since 1984, and this architecture includes a heavyweight server that implements things that in most other window systems are done client-side.
A protocol definition could cover multi screen support, requiring implementors to do something sane. Of course one of the reasons that Wayland exists was to cut down the bloat X had accumulated over the years. Given that it is rather surprising that the Wayland spec isn't just an empty page.
> Wayland is a protocol. It has nothing to do with multi screen support. You're talking about the compositor you used. The one I use handles multi-screen setups quite well.
I use Wayfire [0], a customizable compositor based on wlroots, the same base as sway. It's quite involved and not absolutely perfect yet, but it has some features I haven't yet seen on other DEs like being able to swipe horizontally on your touchpad to smoothly switch workspaces (following your fingers) and the satisfaction of having it properly configured is pretty high.
There have been 3 issues I've had regarding it, 2 I'd call minor:
- I haven't found a way to rearrange external displays, though it is theoretically supported
- After a bug in my TV, switching the input via Home Assistant dropped it to the lowest possible resolution, and it would not work at 4K again until after a complete reboot (so it may not even be a wlroots issue)
- XWayland apps are unresponsive in the upper half of the second screen (4K at 1x scaling)
Using mostly native Wayland apps, none of these has been a deal breaker for me. Something under-discussed is that virtual desktops are per-screen, which I find quite cool.
So that's my adventure with Wayfire, but I would assume that Gnome and KDE have perfected multi-screen usage on and off of Wayland by now.
Wayfire's developer here, have you looked into output configuration on our wiki?
Also, is there a chance your 4K screen has negative coordinates? It is well known that Xwayland does not react if an output has negative coordinates (or at least partly negative coordinates).
I'm against Wayland because it forces compositing on all windowed applications. I'll stay on Xorg as long as possible because I'm not willing to sacrifice latency for no tearing.
I think our sight is all a little different. I can clearly see 50/60Hz strobing in light when others can't. Back in the old CRT days I would get eye strain and nausea if I worked at monitors running below 85Hz, unless they had long-lasting phosphors like the old monochrome CRTs.
Good LCDs are static unless you're running a dvi-vga-displayport-vga-hdmi dongle monster where the pixels are inevitably shifting a bit every instant. What you're more likely to be seeing is PWM backlight flickering at you.
If it's indeed the LCD and not the backlight, pray tell us the model so we don't end up buying it.
All your stuff is already composited, and if done well it doesn't add much latency at all.
You're still paying for it in X, just badly, and with none of the upsides (eg, no tearing). Even if you're trying to avoid a compositor, none of the UI toolkits are participating in that nonsense.
You're not. It's already there and already happening. The difference is just we all stop pretending anyone is doing anything else. As in, kill the terrible X vector crap, and add reliable compositing APIs so things like video playback in apps stops being so incredibly bad (see video in browsers sucking in linux and not anywhere else).
You know, catch up to what literally everything else is doing. Windows, MacOS, iOS, Android, etc.. are all exclusively compositor-based window management systems. And they don't have latency issues, as ways to avoid what little latency compositing adds can be avoided in cases when necessary, like front buffer rendering extensions on Android for VR. Literally only X remains stuck in the 90s.
I get it, all these features that used to just be supported by the one Xorg server now need to be supported by individual compositors - but still, it is simply misinformation that "Wayland" is broken because Mutter / Kwin / Sway are incomplete.
Maybe it's time for people to be against new things.
Open source, I know, is often a work of love, but it's a bit painful that everyone is chasing new things instead of keeping things that already work working. It's like how there are dozens of js frameworks with thousands of contributors, whereas OpenSSL had one, which led to the infamous Heartbleed bug. We need to talk about how the culture of open source is broken in this regard and figure out how to fix it.
> It’s important to remember that when you start from scratch there is absolutely no reason to believe that you are going to do a better job
This is ridiculous. Maybe that's applicable on a very short timescale, but definitely not to a protocol that appeared in 1984.
There is a thing called progress; people have invented quite a lot over the years: type systems, patterns, design ideas. Not to mention that hardware has changed quite a lot too: X11 was done in the age of terminals, and in the absence of hardware acceleration.
Your last comment is a true statement but it isn't an argument. Just because X11 was done in the age of terminals and before gpus existed doesn't seem to support your point unless "old = bad" in your view. I understand how type systems can help, some patterns are good, but some of those ideas existed before X11 actually, so no period of time has a monopoly on good ideas.
Technical debt i.e. "code that should be refactored, because it was originally implemented in a hurry"? Yes, technically you're right and deleting all of the code is a way to also get rid of the code that should be refactored.
The same way whole-limb amputation is a very effective way to remove nail polish.
Trying out something new and fun is what you do in your free time. Working with decade-old legacy code is what you get paid huge amounts for and the reason you want to start from scratch in your free time.
I agree that this is not good overall, but it's going to be very hard to convince people to work on not-fun things for free. Some might see it more akin to volunteer work, but the people willing to do this are far outnumbered by the people simply doing things for fun. To be fair, it is at least pretty great insofar as they're doing open-source work :)
I think the problem with Wayland is that it isn't half-assed: the insistence on being a generic protocol instead of a product fragments an already small dev community. And being a protocol slows velocity down incredibly.
Wayland was the last straw for me. It made me switch to Macs after 20 years of almost exclusive Linux on the desktop and a few dev stints. It was cool when my time was less expensive.
I check the state of the linux world about once a year still. And obviously keep using it on the servers.
Best of luck to everyone using it for desktop. I totally get why you do it, but it's just not for me right now.
But why? X11 works. I tried Wayland in 2010, and a second time today. If anything, today's story shows that developers do not want to work on X11, and how much we depend on Red Hat.
The X11 problems are explained by Daniel Stone [1]. As I understand it, there are two parallel architectures: one uses X11 server primitives (xfontsel), the other renders on the client (fontconfig, fc-list, etc.). It is very confusing.
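The split is easy to see from a terminal (both tools are standard; the sample output is abridged and illustrative):
$ xlsfonts | head -n 1            # server-side core X11 fonts, XLFD names
-misc-fixed-medium-r-normal--13-120-75-75-c-70-iso8859-1
$ fc-list : family | head -n 1    # client-side fontconfig fonts
DejaVu Sans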
I was provided with a Macbook Pro 15" by my work and I struggle to understand comments like this.
Upgrades often break things, I'm forced to use brew to install software which leaves files strewn around the file system and regularly seems to have cross package conflicts.
And don't get me started on the hardware - the laptop is excessively heavy and the keyboard is awful (and I don't just mean the touch bar gimmick).
My personal laptop is a Lenovo Carbon X1 and Fedora runs very nicely on it, requiring very little thought put into management if you're running the default desktop (Gnome). I can update the firmware and BIOS from inside Linux, and Lenovo have even started shipping newer versions of the laptop with Fedora pre-installed.
There's massive scope of tinkering with Linux if you want to, but as long as you're careful with the hardware you buy there's absolutely no need to tinker at all if you don't want to.
That's fine, there's no reason why everyone needs to understand why I do what I do :) I agree with you on FOSS software brittleness though, Linux is usually better on that front. Brew is a hack that works well enough in practice, however.
Except that, in the name of progress, various things are broken in the X11 world too on Fedora, as a direct consequence of the Wayland updates. Which is unfortunate.
We used to have a well-functioning display server that was robust and battle-tested.
The wayland people replaced that with a half-baked solution because they insisted on boiling the ocean - replacing the entire thing in one go, instead of working piecemeal (which the X protocol was explicitly designed to allow).
Which is a great pity, because now the day of the Linux Desktop is even further off.
X.Org people replaced it with Wayland. Are you going to maintain X.Org? Who is going to maintain it? Maybe you are going to hire developers to preserve purity?
> instead of working piecemeal
That's exactly what happened. Do you remember fonts without anti-aliasing? Run xfontsel; that's X11 font rendering. FreeType, Fontconfig, Cairo, Pango, and HarfBuzz work on the client side and push pixels to the X server. The entire rendering model changed, and X.Org became a compositor. They hit limits; they implemented DRI, then DRI2 [1].
Now the developers have decided to make a good compositor. And they've done it without disturbing the X11 ecosystem, with a clean way to port toolkits. Window managers can't be ported, but they can be reimplemented; just look how many compositors people have built [2]. It is a miracle.
Linux's future is bright. Video drivers moved from the X server into the kernel, the display configuration parts were replaced by KMS, we've got modern font rendering and text shaping, and we've got an open source AMD GPU driver!
I still use an Intel GPU, X.Org, and xmonad, but the times they are a-changin'.
Sounds like a Fedora problem then. Pick a different distro - problem solved.
There is nothing so special going on in Fedora that can’t be done in other distros. Try an Arch based Linux desktop. You’ll get newer packages and better package management. Manjaro has worked well for me.
Have you considered using a tiling window manager? I've been using bspwm for years and it just uses X. I don't really miss anything about floating window managers, and since switching everything feels more stable and portable
This. I'm not necessarily against moving to Wayland, but the display server and compositor are really important components, and Wayland has been a long time coming, 12 years now to be exact. Either we need a clear path to Wayland, or we should just keep maintaining Xorg.
This is not true. Developers have been working actively on Xwayland, and I believe it can run most of the applications now, even with hardware acceleration. Is that not a clear upgrade path for you?
>backwards compatibility is pretty much disregarded.
Wayland compositors provide backwards compatibility with most X11 apps via XWayland; I don't think it's fair to say that they completely disregard compatibility.
Last time I checked, remote desktop (a la VNC) was not really a thing, and opening remote apps on the local display (a la ssh -X) wasn't a thing either.
That's disregard for backwards compatibility to me.
edit: which is not to shit on wayland itself, it's to complain about the general attitude which is like "just don't do that" or "oh that's old, we don't support that"
I've actually used waypipe more often in the last year than I ever used ssh -X, thanks to a shift towards WFH :P It's sometimes useful to be able to run Firefox remotely from my workstation at the office.
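For anyone who hasn't tried it, the usage is pleasantly close to the old habit (host and app are placeholders, of course):
$ waypipe ssh user@host firefox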
Due to how XWayland works, a lot of applications (sometimes critical ones) won't ever work under XWayland (pretty much everything that relies on interaction between X clients fails hard).
>> pretty much everything that relies on interaction between X clients fails hard
It's supposed to fail hard. That isn't just a security hole in X; it's a total lack of security. The notion that any app should be able to monitor or manipulate any other app is archaic and wrong from a security perspective.
Maybe it's archaic and wrong to you. Maybe it's even a good idea to be able to configure this on a per-application basis. What isn't the case is that you get to know what is convenient and right in my workflow.
Fair point. I think denying by default and having some means to grant permission is a decent idea, but the implementation has to be right so people don't develop a habit of allowing it every time they're asked.
There go htop, netcat, kill, and the whole Unix pipe system. No, thanks. Programs monitoring other programs is the very essence of computing for us greybeards, and you will take this from our cold, dead hands!
> That isn't just a security hole in X; it's a total lack of security.
That "security" is useless on a Desktop where you run all programs as the same user. If you want a platform to run locked down "apps" that's fine, just don't expect others to be happy with the cost of that "security" when it provides no benefit for them.
Let me fill in with my perspective as a Mac user: I expect the vendor (i.e. Apple) to supply a decent desktop environment. They did quite a good job at it. I didn't need to pay extra; it was included in the sale of the computer.
But X.org isn’t a hardware vendor. It’s not owned by any hardware vendor. It’s not even owned by an OS vendor, or by an OS at all.
Apple is a multi-trillion dollar company. Microsoft and Google are close behind. Of course they can give away their desktop environment away for free. System76 (et al) can’t compete with that. X.org can’t compete with that.
I mean, this is exactly the reason I use Mac, because it ships with a good DE out of the box supported by a trillion dollar vendor. But that doesn’t help people who prefer Linux.
It's worth noting that when Apple first developed the (early versions of the) current desktop environment, they were not just not a trillion-dollar company, they were still considered to be "doomed" and on their way out by a large percentage of those who paid any attention to them.
If Wayland is the future, the future is grim. People often complain that Wayland is taking a long time to catch up to X11, but that actually stems from a deeper issue: Wayland has a horrible design for an X11 replacement, a design that leads to massive fragmentation across the graphical part of the Linux ecosystem. Implementing a Wayland compositor requires much more effort than implementing an X11 window manager, and each new compositor implementation reinvents the wheel many times, leaving users with fewer options for a desktop environment than on X11.
Even worse, Wayland does not standardize on, or is outright hostile to, some essential features, meaning that users need to rely on compositor-specific behavior for those features, if they are even available. E.g., an application that needs to grab the entire screen will need separate code for each compositor it supports screenshots on, or it must use a protocol outside Wayland to get the screenshot. Quoting Red Hat:
> Furthermore, there isn’t a standard API for getting screen shots from Wayland. It’s dependent on what compositor (window manager/shell) the user is running, and if they implemented a proprietary API to do so.
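To make that concrete: on GNOME, the compositor-specific API is a D-Bus interface, so a screenshot tool ends up with code like the following (dbus-python; the interface is the one GNOME Shell exposed at the time), plus entirely different paths for KDE, wlroots (grim), and so on:
import dbus

bus = dbus.SessionBus()
shell = bus.get_object("org.gnome.Shell.Screenshot",
                       "/org/gnome/Shell/Screenshot")
screenshot = dbus.Interface(shell, "org.gnome.Shell.Screenshot")
# Screenshot(include_cursor, flash, filename) -> (success, filename_used)
ok, path = screenshot.Screenshot(False, False, "/tmp/shot.png")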
An xdotool (an input event automation tool, imagine wanting to inject or intercept input events) replacement is not possible on Wayland (without having separate support for each compositor, of course). These seem to be intentional design decisions (marketed as being necessary for security, but really being power-user hostile), this[0] Reddit comment puts it nicely:
> "It has been almost a decade, why does Wayland not have a protocol definition for screenshots?" - answer - "Because security, dude!" Wayland is designed with the thought that users download random applications from the interwebz which are not trustworthy and run them. Wayland actually makes a lot of sense if you don't think of Linux desktop distributions and desktop systems, but of smartphones. But for some reason we absolutely need this technology on the desktop, like we had not enough pain and loose ends over here without it.
But the lack of these features AFAIK also causes big trouble for users with special accessibility needs. Wayland is also, with its forced composition, hostile to interactive applications requiring low latency, e.g. video games.
> These seem to be intentional design decisions (marketed as being necessary for security, but really being power-user hostile)
That's an unnecessarily abrasive view: the Wayland protocol designers do not hate power users. But allowing programs to constantly take in arbitrary input & output information in the background, as well as simulate arbitrary input to other programs, is an obvious and glaring security flaw. Unfortunately, that forbids general-purpose screenshotting and key-rebinding programs - but there is no sensible middle ground.
>Wayland is designed with the thought that users download random applications from the interwebz which are not trustworthy and run them.
I appreciate that the average HN poster, and to a lesser extent the average Linux user, does not do this - but many, many Windows users do. In order to actually break into a mass desktop market, there has to be consideration of the ways people who do not currently use Linux behave.
I would even argue that lots of current Linux users are guilty of this - how many Arch users are checking the contents of PKGBUILDs from the AUR? How many Linux users, when searching for the solution to some problem with their desktop, have blindly copy-pasted commands from some support post - or worse still, just downloaded a "fix it" script to run?
Wayland is built on sane assumptions, because it also aims to cater to a not-insignificant part of the desktop market. That the response of an X11 supporter is "simply run the correct programs" shows how little they have understood the goals and successes of Wayland as a project. It is not X12, and some of us are grateful for that.
>Unfortunately, that forbids general-purpose screenshotting and key-rebinding programs - but there is no sensible middle ground... In order to actually break into a mass desktop market, there has to be consideration of the ways people who do not currently use Linux behave.
Oddly, the actual mass market desktops have (and have had for a long time) solutions for the 'general-purpose screenshotting and key-rebinding programs'.
I know how bad X11 was technically, and I sympathize with replacing it - but that by itself does not make Wayland a success. It took Wayland over a decade to almost get basic capabilities, and it's going to take another decade until all the compositor-based protocols are standardized (probably by eventually only having a single compositor implementation). By the time Linux finishes reimplementing its desktop, Windows and macOS will be in an entirely different place.
Maybe this can't be helped - Open Source desktop development was always extremely underfunded and undersupported.
> by eventually only having a single compositor implementation
I'd just like to note explicitly that this would probably be a compositor that tries to cater to only 80% of users (if that), with the rest of us being told to fuck off.
> That's an unnecessarily abrasive view: the Wayland protocol designers do not hate power users. But allowing programs to constantly take in arbitrary input & output information in the background, as well as simulate arbitrary input to other programs, is an obvious and glaring security flaw. Unfortunately, that forbids general-purpose screenshotting and key-rebinding programs - but there is no sensible middle ground.
Of course there's a middle ground: require special permission for programs that wish to do those things. For example, macOS has permission prompts for "[Application] would like to record this computer's screen." and "[Application] would like to control this computer using accessibility features."
> Unfortunately, that forbids general-purpose screenshotting and key-rebinding programs
Without this, you will forever be incalculably behind the proprietary OSes and the original X server. Perhaps you're happy in that corner, great! That puts you and whoever else exists there in the same conceptual space where everyone else with impractical and unreasonable restraints on their software lives.
That's a fine space to be in, but you don't get to say that it's the "correct" choice for the average user. It's the wrong choice, because it puts the Linux ecosystem at a permanent usability disadvantage. Instead of going with this "no you can't have it" approach, it would have been entirely reasonable to go with something permissions-based (perhaps even with a default that says it won't happen). Instead, we're stuck with one part of the community yelling that this is what everyone should want and everyone else trying their hardest to ignore them. It's an unhealthy situation for everyone.
> These seem to be intentional design decisions (marketed as being necessary for security, but really being power-user hostile)
>> That's an unnecessarily abrasive view: the Wayland protocol designers do not hate power users.
For the sake of civility and discourse I just want to point out the quote you're responding to does not talk about hating power users. The OP clearly just says that the "decision [is] power-user hostile." That's an important difference, as one is talking about substance (i.e. the decisions) and the other is veering into personal attacks. It's easy to conflate the two and I've certainly done it, but I just wanted to point it out to try and de-escalate the conversation a bit.
I also think it's reasonable to say that many security decisions in many contexts are "user hostile." Hostile design is an actual thing, where security and order are prioritized above convenience and functionality, and not simply an attack on "bad design"[0]. This is not to say all user hostile decisions are bad, but it's a trade-off. Asking for your password before every single interaction is user hostile (the user will never get in the flow[1]), but could be the right decision in certain contexts.
> Unfortunately, that forbids general-purpose screenshotting and key-rebinding programs - but there is no sensible middle ground.
Unfortunately, this makes it unusable for 98% of the population. Wayland also breaks screen sharing programs, which are essential for most, especially now during COVID.
> I appreciate that the average HN poster, and to a lesser extent the average Linux user, does not do this - but many, many Windows users do. In order to actually break into a mass desktop market, there has to be consideration of the ways people who do not currently use Linux behave.
Restricting Wayland features makes sense if other permissions are also restricted, like on Android. But this is the Linux desktop, which can easily run/install anything with `sudo`. Wayland should support advanced features.
OK then, if Wayland is meant for closed and user-hostile platforms, why is it being marketed as an X11 replacement? Why is there a constant FUD-laden push to dismiss Xorg in favor of Wayland compositors?
When you dismiss any reasoning as "user-hostile" and "FUD", it will be difficult to understand any change.
For the life of me I can't understand why so many people are always convinced that people who develop replacements for decades old frameworks do it only to spite users.
But the answer is: X sucks. It's 36-year-old software with dozens of extensions. No one wants to write software that uses X, and apparently, per this HN submission, no one wants to maintain X.
OK, X sucks in a way (although your argumentation for that is wrong), but replacing X with Wayland is like trying to fit a square peg in a round hole. (Also, I don't think Wayland fits in any hole nicely, with it being hostile to even such basic and universally expected features such as taking a screenshot.)
I don't understand where you are going with your first paragraph, except that you are assuming things about others that you should not. Similarly with the second paragraph.
Less than a minute. And it is the second time in a decade that I've tried Wayland.
>>> why is it being marketed as a X11 replacement? Why is there a constant FUD-included push to dissmis Xorg in favor of Wayland compositors?
>> "FUD"
> Wayland is intended as a simpler replacement for X, easier to develop and maintain. [1]
"Intended" is not "ready". Could please cite your claims?
> replacing X with Wayland
Would you prefer if X.Org developers just abandoned it? We would have "X.Org is Abandonware" ten years ago.
And it is not a zero-sum game. These are different projects. People tried to fix X and failed. No one in this thread is going to maintain X.Org. But hope is not lost - another project has risen to maturity in the last ten years. It covers some use cases, and now you are bashing it because it is not perfect.
All of it while X.Org still works and I use it every day.
It is an X11 replacement. Just like CDs were a replacement for vinyl records, and MP3 files were a replacement for CDs. But nobody expects to play records in a CD player.
Considering you're this deep in the discussion with your comment, you've managed to overlook quite a few expected and needed features that Wayland misses (or is even hostile to) compared to X, and which thus make your analogy obviously invalid. In fact I have to wonder if you're just trolling me.
The problems Wayland solve are those of yesteryear, sprinkled with the broken dreams that we'd all be running it on Linux phones by now. In that context a strict security model makes sense.
Trouble is, that security model makes no sense on today's desktop. In 2020, people aren't downloading and running native applications on desktops, not even (especially?) on Linux. The desktop is now solely a manager of browser windows. Everything that normal folks do is done through the browser, from email to office collaboration. Maybe a few Electron apps sprinkled in (mostly targeted for developers, ironically). Maybe the calculator? That's pretty much it.
The most important desktop apps today? Chrome and Zoom. Both of which barely work in Wayland. But at least all that non-existent native desktop software can't now spy on my web mail? Too bad screen sharing is now a complete shitshow.
Actually worthwhile problems to solve on the desktop are for example high DPI and fractional scaling, rock-solid multi-monitor support, dynamically plugging in and removing displays, mixing displays of varying DPIs, high refresh rates and variable sync etc. The desktop will increasingly become a niche for high-end developer setups.
You must realise that many people use desktop applications for their work. The Office suite, the Adobe suite, AutoCAD, ArcGIS: whatever program is in use in the industry you are in.
>> In 2020, people aren't downloading and running native applications on desktops, not even (especially?) on Linux.
No, we download and run them in our web browsers because the operating systems (and X11) failed to implement security models that fit the modern world. Wayland is a move in the right direction here.
One more thing: the Wayland security model doesn't make much sense anyway, considering that nowadays running untrusted code over a large attack surface, without at least virtualization, is probably a bad idea (and even virtual machines can be escaped from).
> Trouble is, that security model makes no sense on today's desktop.
At least some users disagree. Flatpak and snap run applications in a sandbox. I am too used to open source software; I fear installing closed source like Opera, MS Edge (not supported on Linux yet), Steam. And Steam with Proton is the next thing to bring Linux to the desktop.
Why are you comparing X11 and Wayland?
If something does not work on X.Org we are better with another project than with nothing. If Chrome and Zoom work on X.Org stick to it.
> massive fragmentation issues across the graphical part of the Linux ecosystem
This is a fair criticism. It is true that we see each of the compositors rolling its own extensions to the protocol. But I believe wayland is trying to standardize a lot of the protocols which should cover common use cases.
> each new compositor implementation reinvents the wheel many times
This is not true. There are libraries like wlroots[0] which you can build on top of. You don't have to redo the work that is already done in other compositors.
> application that needs to grab the entire screen will need separate code for each compositor it supports screenshots
There is the PipeWire[1] effort to support screenshot and screen capture on wayland in a unified way.
> Wayland is also, with its forced composition, hostile to interactive applications requiring low latency, e.g. video games.
Also not true. On a properly implemented compositor, video game frames shouldn't take longer to reach the screen than on Xorg. The X server has to do composition too, whether you run a compositor or not.
You're missing the point, which is that all of this should have been a part of Wayland, because "the desktops" will never agree on a common protocol.
Also, you're wrong that this isn't related to security, I can't be bothered to dig up some quotes right now, but it is/was actually very common to explain the lack of a screenshot feature on Wayland with security, and even to dismiss the feature altogether as a security issue.
Making it a part of Wayland would not have accomplished much. If the desktops really couldn't agree on a common protocol then you would end up with a bad standard that no desktop implements. See wl_shell for another reason why doing this in the core protocol would have likely been a mistake.
I don't know who was painting it as a security issue but they're mostly wrong. The security issue is in restricting use of those capabilities to only privileged applications. Maybe other (embedded?) compositors left that out entirely for security reasons, but GNOME and KDE always had plans to include screenshots, and wlroots has it too. Weston even had its screenshooter protocol for a while now but the other desktops decided not to use it and went their own way, for various reasons. (Full-featured screen capture is actually not as simple as you'd think, and it gets messier when pipewire and zero-copy capturing are on the table)
It likely won't ever be a drop-in replacement because a lot of X operations don't really make sense in Wayland. (Even in X, xdotool isn't perfect and a lot of its commands won't work if your window manager doesn't support the right hints) Unless you feel like volunteering to make a standard for this, you'll get the best mileage out of using an API specific to your compositor rather than waiting for someone else to come up with a standard. I know GNOME exports its internals with a javascript interface, and Sway has a way to run various commands using a JSON interface too.
I've added this comment to my favorites; a very succinct description of the problems with Wayland. At this point I wonder if starting from scratch was the right call vs. spending the 12 intervening years trying to (yes) dig into X11 and fix existing issues.
And it doesn't even mention the valuable features of X11 that were considered out of scope for wayland, such as network transparency.
It's really super useful being able to just fire up a program on a remote SSH session and get the window on my local computer. Without having to set up VNC and a window manager etc etc on the remote computer.
Of course this feature also needs a big review. It needs proper security (though tunneling over SSH fixes a lot of that), and there are too many back-and-forths in the protocol, leading to it being really sluggish over high-latency connections. Makes sense, as it was mainly invented for X terminals on a local network. Also, more and more features like fonts are now rendered remotely instead of locally on the user's computer (the server, in X terminology). NX and X2Go mostly fix that, but it would be great to have this in the actual protocol. As well as provisions for smooth video streaming.
As well as that, the whole computing industry is moving back from powerful endpoints (PCs) to powerful central computing (now cloud, the mainframes/powerful unix servers in the early days of X). So really, this feature will become more important again.
But yeah I would really prefer to see X11 being brought up to date rather than Wayland. Wayland is focused way too much on the local desktop.
X (at least nowadays) isn't very network transparent. Most things are done via shared buffers (it's faster than shoving bitmaps down a socket), with special fallbacks implemented by the clients, so that x forwarding doesn't break.
I know, also font rendering with anti-aliasing is usually done on the client side now (though the old way with font servers was far from ideal, it was much more bandwidth efficient!).. I know it's not perfect but they do this because the protocol lacks support for smooth video.
I'm just advocating a modernised X over moving to Wayland altogether, like the poster I replied to.
> > "It has been almost a decade, why does Wayland not have a protocol definition for screenshots?" - answer - "Because security, dude! Wayland is designed with the thought that users download random applications from the interwebz which are not trustworthy and run them. Wayland actually makes a lot of sense if you don't think of Linux desktop distributions and desktop systems, but of smartphones."
The real problem of this "security" nonsense isn't even existing use cases like screenshot or xdotool but rather the cost it imposes on any future power tool that someone might think of but can't implement witout getting every compositor on board.
> Implementing a Wayland compositor requires much more effort than implementing an X11 window manager and each new compositor implementation reinvents the wheel many times, leaving users with less options for a desktop environment than on X11
If this at least helps reduce fragmentation, so that we can have one decent desktop environment instead of 4,000 half-baked ones, it could be something positive for Linux.
X works perfectly for me, and there is nothing I would want it to do that it doesn't do now. Why should it change?
I have many programs I wrote years ago that I don't change and I use every day. Constant changes are not a measure of utility.
But again and again, you'll find users looking at repositories and deciding that something is "dead" because there isn't any recent commit, often blaming developers for not doing more free work for them. This is a toxic attitude. When we have a software that works well and solves our problems, we should celebrate it, not complain it doesn't find new problems to solve.
> X works perfectly for me, and there is nothing I would want it to do that it doesn't do now. Why should it change?
I'm not sure whether a program that works for you is a good indication that it no longer needs to change.
> When we have a software that works well and solves our problems, we should celebrate it, not complain it doesn't find new problems to solve.
I think anyone can agree that, at the very least, screen tearing and proper support for mixed DPI setups are problems that fall squarely in the responsibilities of X and yet it still didn't manage to solve them after so many years.
So it's hardly the case that X is just so good that users nowadays have to try really hard to find new problems for it to solve.
We do need some form of signal that indicates a project has a maintainer though. It doesn’t matter that he has been inactive for 4 years (on that project), but if I submit a PR, it’s nice if there’s someone at the end of the line.
Repos could really have exactly that. A dead man's switch that asks you every, I don't know, three to six months - via email even - "you good for this repo still?". You answer with a click - "yup" - and that's it: a signal on a repo on GitHub or whatever that says "still alive". Otherwise "uh oh - we need help", and then a mechanism there to immediately offer alternative forks with a good enough signal "strength". It's like a pinky promise instead of actual repo activity.
You wouldn’t even necessarily need git/github to implement a new system! Agree on a standard file name like .githeartbeat containing a timestamp. Every few months (or w/e), active maintainers could push a commit to update the timestamp.
It sounds like a good idea, but I'm afraid it may be a nightmare for packagers (like the ones providing packages for GNU/Linux distros), as they'd see updates to upstream only to realize they are just pings and don't need to be repackaged.
It wouldn't be that often, though. And maybe they would actually love to have such a heartbeat. I would love to hear from a packager on that.
Personally I'm a fan of zero-touch, where I as a developer submit my code repo to an app store (Play Store, Apple App Store, Flathub or something) and they just build it using a standard definition that the store defines and make it available on the store. It kind of feels like a lot of effort for every distro to look at every change in every application...
Repos could also have a notice like "It's been X days since last interaction" which would track the last commit, merge or even just comment in the issue tracker made by the maintainers.
On github? I can't do that. Process has to be as frictionless as possible - hence not in a repo in files itself. A simple email with a button, not to bother maintainers too much / at all.
If somebody discovers a security bug, what are the chances that somebody can cut a high-quality release with the fix in it, if it hasn't been done for two years?
While not wrong, ignores that this isn't the norm.
I think we could use the terms "releasing" and "maintaining". Constant releasing is not the same as constant maintenance, and it is hard to argue that our industry sees the difference.
By way of analogy, we seem to think we can improve the roads by building new bridges every year.
Okay so Wayland is a no go and X.org is too crufty to attract contributors.... Where is the "rewrite it in rust" crowd when you need it? :)
The X11 protocol is the surface you need to maintain but swapping the internals should be do-able. Maybe we should have some call to action or reverse-auction or something. I'd love to support a viable path forward (I feel this effort would be a bit like neovim).
Personally I think it could start as a Xephyr or Xnest type project (to allow you to run rootless X) and then extend it with a from-scratch protocol that slowly replaces X (starting with support for simple but useful applications and going from there).
But clearly I only barely know what I'm talking about. Probably the reason things are the way they are is because of how the whole OpenGL / Vulkan etc. thing is not resolved, so any potential replacement has no foundation to build on (but this is something I don't know anything about).
The problem is that if you were going to rewrite a display server from scratch you would probably want to implement Wayland over X11. X11 is barely a protocol anymore because Xorg is so dominant, you either do what Xorg does bug-for-bug and with all the extensions or apps will break.
This is a rather worrying stance to be pushing on something which is relied upon day in and day out. The phoronix author has lost credibility with me by the opinion expressed in this article.
With this kind of FUD, it is no wonder Linux has a hard time being accepted on the desktop. What enterprises -- and everyday developers like me -- need, is a stable desktop to run IDEs and the like. As an example, Debian with Xorg has been fantastic for me for several years for JetBrains tools and GSuite for mail and docs, which is a pretty complete setup, and in the WFH era, Zoom and Teams just work. This is what we should be striving for -- boring, predictable, reliability, not juggling with chainsaws on the bleeding edge.
X11 comes from a different time, but any successor must be worthy, not just have a different approach. It's also worth remembering that much of Windows' practical longevity is due to its backwards compatibility. It's not shiny, but it works, and that begets loyalty.
Apologies, but I don't understand your comment. The original article said X.org is unmaintained. Your comment basically replied "it works for me and I like it, Wayland doesn't cut it".
Sure - great opinion - but doesn't change the state of the world, which is that X.org is unmaintained and all the real development firepower has moved on to Wayland.
We can all have opinions and I kind of agree with yours, but the article is about the realities of life and facts which we need to accept.
2% market share after 30 years in the making, plus the hatred for the GNOME and KDE communities, always pushing for the PDP-11 experience with multiple xterms, like I was doing in 1994 on IBM X Window terminals on the university campus with DG/UX. I already gave up around the Windows 7 release.
It doesn't matter one jot that the "development firepower" has moved on.
We have a 30+ year legacy of using X11. I know the CADT development model doesn't care about that. But all the people using X11-based applications do care. We aren't going to move off X11 on the whim of people who care more about "the new shiny" than they do about the real-world needs of people who use this stuff to run their businesses.
These people have done more to destroy the viability of Linux as a desktop platform over the last 15 years than anything else.
But most of this stuff is basically irrelevant in a post-KMS/DRM linux world.
So few apps were ever written targeting directfb and libggi it's as if they never existed.
SVGAlib apps frequently performed direct hardware access requiring root and disrupting graphics hardware state WRT other graphical apps like X or fb. Unfortunately we have a significant collection of old demos and games targeting SVGAlib, but at this point it's probably best to just run them in a virtualized linux environment lacking any graphics drivers so SVGAlib can run the show on a faked VGA. For such apps where source is available, it's better to just port to something like SDL.
MPlayer on the fbdev2 driver works perfectly. So does DirectFB Links.
>Unfortunately we have a significant collection of old demos and games targeting SVGAlib, but at this point it's probably best to just run them in a virtualized linux environment lacking any graphics drivers so SVGAlib can run the show on a faked VGA.
> Or a wrapper trapping SVGAlib calls to SDL/SDL2.
That works for programs limiting their operations to SVGAlib calls.
As I mentioned, but you omitted in the citation:
> SVGAlib apps frequently performed direct hardware access requiring root ...
How do you trap those directly accessing VGA IO ports via inb/outb instructions? I clearly recall writing modex routines in assembly for SVGAlib demos, and I'm pretty sure I wasn't the only ex-DOS graphics coder doing that to make things happen on Linux in the 90s.
Sure X.org is abandonware but there are serious problems Wayland still hasn't solved that X has.
1) Wayland is really slow. I don't know if it's the compositing or what but it's unusable on lighter hardware that X ran fine on.
2) Widget toolkits handling window decoration is awful. Before the large number of toolkits just meant some controls were a little different but now basic behavior changes based on how programmers decided to build an app. And if you don't like the window decorations (say, they take up too much screen space) your choices are suck it up, or if you're lucky and willing to spend a bunch of time reconfigure every different toolkit your apps use.
3) basic stuff that worked fine on X11 doesn't work on wayland in the name of "security" (screenshots are a big one, there are extensions but isn't that the complaint about X? And if there's a security problem with something isn't hacking around with it because people need it a really strong indication that the idea is broken and probably making the situation worse?)
I think you don't mean that the Wayland protocol forces slowness but that the compositor you used was slow. The one I use is fast.
> And if you don't like the window decorations (say, they take up too much screen space) your choices are suck it up, or if you're lucky and willing to spend a bunch of time reconfigure every different toolkit your apps use.
No, there are protocols to negotiate whether an app has server side decoration or not and the compositor has the last say.
> basic stuff that worked fine on X11 doesn't work on wayland in the name of "security"
On only the core Wayland protocols you're absolutely correct, you can't even implement desktop shells with those. This means there must be extensions and there are 3 different classes: KDE, GNOME, wlroots, with tools for and incompatibility between each. The wlroots extensions are designed to be accepted into the standard and for wider Wayland acceptance they probably should be.
Screen sharing is still a particular issue that we are starting to see solved with the advent of Pipewire.
Wayland desktops are definitely usable. I've been happy with a wlroots based one (Wayfire) for a few months and KDE before that (from what I see GNOME has been perfect for a while).
One of the last standing real issues I'm facing is kerfuffle with Nvidia drivers, especially the ones paired with Intel GPUs, but I think (surprisingly) Nvidia are actually working on fixing that.
> No, there are protocols to negotiate whether an app has server side decoration or not and the compositor has the last say.
The compositor can implement that protocol and just respond with "this compositor doesn't support SSDs". GNOME does this, so all toolkits must use CSDs if they wanna work on the most popular Wayland compositor.
Incidentally, that's hell for all the simple libraries out there which just exist to get a GL window on the screen. GLFW, GLEW, SDL, etc. all have to implement CSDs now if they wanna work with Wayland. It also means that it's no longer feasible to just make a Linux application which uses the windowing system directly; everything must use a huge toolkit now.
EDIT: To clarify, I don't think this is an issue with Wayland, but with GNOME. There's no reason it couldn't have supported SSDs like every other Wayland compositor. But as it stands, it hurts the Wayland ecosystem.
GLFW and SDL are looking at using libdecoration to draw CSDs. Yes it's an additional dependency but most applications won't need to worry about it unless they need it.
The way GNOME is built it's not really possible architecturally for them to support SSDs in Wayland. Maybe that will change if they ever get around to redesigning mutter and gnome-shell, but I wouldn't wait for it.
I've seen libdecoration, and yeah, that seems like their intended solution to this. It doesn't seem like a terrible solution, but last time I looked at it at least, libdecoration was a long way off being production ready, and GNOME with Wayland is shipping _right now_ and must be supported.
You can use a library like that or you can draw your own decorations. There unfortunately is no other option. I don't see it as being likely that GNOME will support SSD any time soon, it just isn't designed for that.
What would it take for something not to qualify as being "unreasonably obstructionist" to you? If there's an open source desktop project that has infinite resources to redesign and rewrite everything at will at a moment's notice, then please let me know. I'll gladly use it. Hell, I'll sign up as a developer.
Have GNOME devs yet acknowledged that XFCE exists? In one now infamous exchange with Transmission developers a GNOME dev claimed that he didn't even know what XFCE was and, furthermore, that cross platform applications like Transmission should choose whether they would be "GNOME apps or XFCE apps", suggesting that they can't be both. The context of this exchange was that GNOME developer asking that a feature be removed from Transmission because GNOME would no longer support it.
That's just one example, but GNOME has a reputation for not respecting Linux desktop diversity for a reason.
I had to dig up the comment you're talking about but just to put this in perspective: You are forming your current opinion about entire communities based on one comment made by one developer 10 years ago. I'm fascinated by all this open source that's out there but if you told me that a developer had to know every single open source desktop out there in order to contribute to one of them then I can't really get behind that. It's very hard to keep track of what everyone's doing all the time.
But in any case I don't understand what is contentious about that. Some pieces are shared between GNOME and XFCE, but a lot of pieces are different and applications have always needed to choose. If they weren't trying to be different they wouldn't have made a separate desktop environment with their own separate libxfce libraries and components. (To GNOME's credit, they have gotten rid of most of the "libgnome" things since then, but now an application that wants to speak to certain GNOME-specific pieces is usually expected to use their private dbus protocols, things that XFCE would never implement anyway)
> You are forming your current opinion about entire communities based on one comment made by one developer 10 years ago
Incorrect. I am using a single example to illustrate a trend. A single example to illustrate what "unreasonably obstructionist" looks like. That particular incident made everybody roll their eyes but it hardly surprised anybody because GNOME has earned this reputation. Even back then nobody was particularly surprised by the GNOME arrogance, and I've not seen this change.
(Also, maybe I'm getting old but 10 years ago really isn't that long ago.)
Please don't use this kind of hyperbole. It's really not an interesting conversation when it leads into platitudes like "everybody did this" and "nobody did that" and "an entire group of people is arrogant and obstructionist" which neither of us have any way of proving or disproving. I wish this open-source-holy-war type of comment was not so common on Reddit and HN, it's about as constructive as the endless Emacs and Vim flame wars.
If you're trying to illustrate the trend that GNOME and XFCE (and KDE, and LXQT, and MATE, and Budgie, and Mint...) all have different goals and ideas for how applications should behave, then yes, I would say that much is obvious by now, and it has only gotten more obvious over the last few years. If you have a technical solution to this then I'd love to hear it, but aside from that I don't care to bicker about whose fault it is that your applications broke, because honestly no answer is going to be pleasing for you to hear. Unless I'm mistaken, we aren't paying customers; this is all just random no-warranty open source and there's no support hotline that's going to care about what's broken.
Transmission never broke. GNOME devs requested that Transmission break itself on non-GNOME platforms. Falling back on 'well you're not paying anything' really does nothing to dispel the perception that GNOME devs don't play nice with others. If anything, it cements it.
If nothing broke then I don't see what the problem is. And, your assertion doesn't even seem to be correct. I re-read the bug and it's one GNOME developer suggesting for changes to be made only in GNOME 3, because support for a particular feature was deprecated upstream. The developer is apologetic and later goes on to make suggestions about what can be done in Ubuntu and XFCE. I see nothing there about "GNOME devs requesting that Transmission break itself." https://trac.transmissionbt.com/ticket/3685
I'm not trying to dispel any perceptions either, if you have a deep seated belief that GNOME (or XFCE, or anything really) are "not playing nice" because they removed features from upstream, there's nothing I can really say to you to change that view. If you want to challenge your own perceptions, I'd suggest you start developing a new open source desktop and application platform yourself over a period of many years just to see how much work it is, and how it's practically impossible to support everything that every app developer asks for when none of them are paying you a cent.
I was trying to be clear here: There is a reason and it's that it's not really possible architecturally for them to support SSDs in Wayland; doing so would require a redesign of significant parts of the compositor's code. I'm sorry if that was misunderstood. If you're trying to say they should have anticipated this and made a different architectural decision years ago, maybe that's true, but that also offers no practical solution to the applications that need to support it right now.
I am trying to say that maybe they should have made the architectural decisions which wouldn't prevent non-GTK applications from working in GNOME. That's all.
I don't know what you're trying to get at. We can sit here and argue and say they should have done this and they should have done that, but that doesn't help anybody who wants support for them at this current moment.
And please don't exaggerate, non-GTK applications do work in GNOME. Qt5 has native CSDs that will show up in GNOME. If the application is a game or something that doesn't support CSD then the experience is somewhat degraded but they still work. You can resize and move any window by holding Super and Left/Middle clicking. I won't pretend the situation is ideal but I also don't believe it's totally unusable or on a bad trajectory at the moment—like I said the other libraries are working on getting CSD support too.
> GLFW, GLEW, SDL, etc. all have to implement CSDs now if they wanna work with Wayland.
I honestly don't understand what you're talking about here? If you want to get a window on the screen with OpenGL or Vulkan you don't implement anything special for Wayland.
In fact with Vulkan+Wayland, it's fairly easy to do it without any third-party dependencies, with OpenGL you'll probably need EGL which is a Khronos dependency. Here's some slightly outdated example code [1].
If you just create a Wayland window, GNOME will draw _only_ that window, with no decorations. The window can't be moved, minimized, resized, etc. because there's no decorations. The window's content shows up, but that's it.
I don't believe that's correct. Move, minimize, full-screen and resize seem to be part of the xdg_shell protocol (which has no extra methods for decorations and is only for window roles etc...) under the configure event [1]. I suspect that's how tiling window managers like sway do it without extra server-side decorations.
Possibly this is a subtle distinction, but I do think it matters.
I believe you misread my comment; I said that the full-screen, minimize, re-size isn't part of the server-side decoration extension, but was part of a more basic xdg_shell protocol through the configure event.
I was objecting to the fact that you said that windows did not have this capability without the server-side extensions, whereas I think they can be through configure. I did not dispute whether sway had server-side decorations or not.
> The [compositor] I use is fast. [...] I've been happy with a wlroots based one (Wayfire) for a few months
wlroots performed very poorly on my i5-3427U with 4000 series iGPU. Very poorly. I ended up using neither X nor Wayland and instead having mpv render straight to the frame buffer (--vo=gpu --gpu-context=drm)
I'm a wlroots developer, and I'm using an old Sandybridge mobile i5 CPU daily. So I'm pretty surprised about this. I'd be interested in a bug report if you were to try again.
I'll agree with this post. Wayland is not pleasant on my core i5 machine either. I should point out that the GPU it has is very weak, I don't know what it is but most 3d games are unplayable.
X.org, Firefox, and Xterm all work very well on it though. As do other apps like gschem and openscad.
"Wayland" is just the protocol, can you give more details about your setup? For instance, GNOME is more heavyweight than say Sway (and gives a more feature-ful and eye-candy user experience in return).
I am forced to use ms teams for work. Every time I attempted to share my desktop, teams would crash. I thought it was just because teams sucks. As it turns out, it seems to be that running under wayland was actually causing it. Everything works fine under x.org. After I was running with x.org for a couple days, I realized there are quite a few little oddities that I just lived with and didn't realize it was because wayland was actually messing things up.
Teams is an Electron app (like Skype, Signal, VS Code, Atom, etc), which is based on Chromium. Chromium doesn't have Wayland support yet so those run via XWayland. Things should get better once Chromium supports Wayland properly (soon, as it finally entered beta in 2020-09):
Electron running on Wayland natively isn't relevant to its ability to screen-share. Even if it were running on Wayland it wouldn't be able to screen-share just based on that alone, because having a Wayland window doesn't have anything to do with screen sharing.
Native applications these days have the option to use pipewire and xdg-desktop-portal for screen-sharing, and it doesn't matter whether they use that from a Wayland window or an Xwayland window. Both Firefox (under Wayland) and Chromium (under Xwayland) use this method today.
Unfortunately Teams does not use that method and instead calls an X function directly, which crashes, as I wrote in the sibling comment.
Yes, Teams uses a function that is unsupported by Xwayland (XGetImage, if I remember the backtrace correctly), so it crashes. I worked around it with a rather convoluted setup that involves running Xephyr under Xwayland, then running Teams under Xephyr (via setting `DISPLAY`), and running vncviewer under Xephyr connected to a VNC server (wayvnc) sharing my screen on the parent wayland instance. Then I tell Teams to share the "whole screen" (ie the entire Xephyr window) and quickly maximize the vncviewer window (within the Xephyr window).
To be able to maximize vncviewer you do need a compositor running under Xephyr too. I picked i3 to be the most minimal. Again, i3 can be told to run in the Xephyr window by setting `DISPLAY`.
One caveat is that I still use Teams under Wayland to join calls for the audio, so this setup means I need two Teams instances to join the same call. This works for meetings but not for individual calls. Of course if you want to use the Xephyr'd Teams to do audio too there shouldn't be any problem; I just prefer having easy access to the Teams window to mute myself, etc instead of having to reach into the Xephyr window and unmaximize the vncviewer window.
People on the #sway IRC channel on Freenode suggested an alternative might be to use v4l2loopback to create a "camera" device that is sourced from the screen and then have Teams use it as a webcam, but I couldn't get v4l2 to work on my distro (it kept insisting my distro's ffmpeg couldn't encode video even though it could) so I didn't investigate further.
I salute your creativity and tenacity, but what the actual fuck - the need for a Rube Goldberg setup in order to make an application work is... unfortunate. If I did such a thing I would take one last look to take pride in my work before deleting the whole affair and installing an OS that isn't broken.
Well, I knew what I was signing up for when I decided to switch from X to Wayland, so I'm personally okay with it. It's the same with doing anything that isn't mainstream, such as my decision to use Linux in a workplace where a lot of stuff is Windows-first, and sometimes Windows-only. The best thing the iPhone did to the world was to make web applications more popular, and as long as Safari lags on web standards it pushes websites to not use Chromium-only features and give Firefox users like me a chance.
Anyway, a less convoluted setup might be to use the browser version of Teams in a browser where it allows you to screen-share, ie Chromium and not Firefox. IIRC when I tried it a few months ago people on the call said they could only see me broadcasting a black screen, but I know the browser is fine so it had to have been a Teams issue. Maybe it's fixed now.
Hopefully these applications will catch up soon. As another example, Zoom's native Electron application uses a GNOME-specific method of screen-sharing which doesn't work in non-GNOME DEs, but I've heard the browser version works fine.
Re: black screen when using Teams from browser.
Check you have environment variables set correctly: XDG_SESSION_TYPE=wayland and XDG_CURRENT_DESKTOP=sway.
I set them up in my .zshrc before starting sway. This helped me fix the black screen issue. Hope it helps you too!
If those env vars weren't set screen-sharing with xdpw would be broken as a whole because pipewire wouldn't invoke it, which wasn't the case. Other applications like that one python gstreamer script could access xdpw just fine, and Firefox and Chromium could screen-share in other websites (like Mozilla's gUM test page) just fine; the only one broken was Teams.
The v4l2 trick "works", but usually the application will use a lossy video codec optimized for faces, not screens. wf-recorder to v4l2 as a poor-man's screenshare to Discord ends in a blurry mess. YMMV with Teams.
- Use teams in qutebrowser: the dev (The-Compiler) added support for screensharing and it works great. Just like the unofficial app above, it also has native notifications, not the weird popup notifications of the official app.
as a counterpoint: my sway (Wayland) system is the first one I've ever had where I can watch youtube, or vlc/mpv at 60fps without flickering, dropping frames or tearing
I've used nvidia and radeon cards, plus intel onboard in this machine with X and none of them allowed all of the above
the display server also remains responsive even if one client starts going nuts
generally sway is considerably more responsive on the same system than the Xorg it replaced
> And if there's a security problem with something isn't hacking around with it because people need it a really strong indication that the idea is broken and probably making the situation worse?
If there's a security problem and people are hacking around it, that's an indication that either those people have bad needs, or that the security model is flawed - but it's not an indication that it's wrong to attempt to secure that resource in the first place.
If a janitor needs the nuclear-missile launch codes to clean the missile silo, you’ve probably fucked something up — but that “something” isn’t the fact that the missile silo doors are code-locked. (Instead, it’s probably 1. the fact that you’re using the same credentials for physical missile access as you are for missile launch, and 2. the fact that your regular janitor is expected to clean the missile silo.)
In this case, the security model of Wayland is the same kind that Windows and Android already have: preventing “low-integrity” apps from screen-scraping “high-integrity” apps. In other words, preventing a random webpage running in Chrome from stealing your credit card details sitting visibly in a sibling text-editor window.
(Or, of course, preventing you from Twitch-streaming your playback of DRMed Netflix video. That’s a use-case that “needs supporting” too, given that the alternative is that type of video not playing back on the platform at all.)
I guess my question is: Why shouldn't "take a screenshot, either of a window, the whole screen, or a portion of the screen" be something that Wayland itself be expected to handle?
I mean, if you're worried about security of apps from other apps, don't make it something in the apps' domain. Only allow Wayland to take the screenshots, but support that, make it easy, and you won't need to worry about either the security model or users complaining.
I mean, this is the way other OSes do it...to the best of my knowledge (which is certainly not comprehensive) neither Windows nor macOS allow arbitrary apps to screenshot arbitrary portions of the screen; that's handled by the OS itself.
I think that (the compositor doing the screenshotting) is already the way it works. But "screenshots" (i.e. fed to the user as files / on the pasteboard) are the exception, not the rule, of the use of this API.
The normal, non-edge-case use of the relevant screen-capturing API is for screen-sharing ala Zoom, or for remote desktop using RDP et al. These are use-cases where one app wants to see what's going on in other apps — exactly the thing you wouldn't want a malicious app to be able to do, but also exactly the thing that you do want these apps to be able to do.
I have to wonder if this is distribution dependent? On Manjaro my experience with Wayland has been fantastic. I cannot perceive any speed differences vs X.org; Wayland just flies.
It would be interesting to poll users and see just what factors are contributing to speed issues. GPU? CPU? Distribution? Wlroots vs Gnome? etc. Learning more about what users experience could be helpful in making Wayland better. Has anyone done this?
This is a great idea. So many Linux issues go undiagnosed because the response is so often “it works fine for me, on my <insert insanely overpowered system specs here>.”
Yeah, I think there are a lot of distro bugs, and a lot of old or mismatched packages, outside of the Arch/Manjaro ecosystem. This hurts the impression of Mesa, the Kernel, and basically anything else like this where there's no good reason conventional distros shouldn't ship new feature releases at least a bit more frequently than they do.
> Wayland is really slow. I don't know if it's the compositing or what but it's unusable on lighter hardware that X ran fine on.
This is crazy when you think about it. I remember running an X server, Hummingbird I think it was called, on 386 and 486 machines connecting to Suns and it was fine, this was a perfectly acceptable way to work. Couple of xterms, an Emacs, xbiff for email, maybe some xeyes just for fun. Developing with Tcl/Tk and running those applications. Now we have several orders of magnitude more CPU, memory, network and Wayland doesn't even perform as well as that! My mind is truly boggled.
Yeah - I have no clue why you're being downvoted. I have the exact same professional experience where Wayland is slow out-of-box vs X. I also share the experience of getting X to work on ancient hardware without much difficulty.
Let's be real here - if you're needing something to "just work" you're going to install X. Sorry Wayland, you're just not there yet.
> Yeah - I have no clue why you're being downvoted.
I didn't downvote, but it's because the parent comments are anecdotal, not providing further data one might be able to engage with/confirm/refute... and therefore I learned nothing from reading them.
I'm also running a dual setup of i3/sway, and the only reason I still keep i3 around is screen-sharing in Jitsi and similar. And my experience is that Wayland has lower CPU/memory use than when running X (i3), but that's not why I prefer sway. (I'm using sway with "xwayland disable", so maybe this is where a lot of resources are saved.) But the whole discussion is pointless without verifiable benchmarks.
Well, the anecdotes are kind of real, and vast in numbers. I suspect many readers here never used X on a 90s PC or workstation, so they could be forgiven for not realizing it was up to the task on hardware that is now pitiful.
Another example I like is the Nokia N900, which ran X on a phone no less, phone hardware from 2009, and it was pretty good there.
Part of the problem is surely software bloat over time on higher parts of the stack, rather than X itself. You couldn't get the 486 in the comment above to run recent gnome or a recent browser. But you could run software of the era well.
Maybe someone could try to install Wayland on an old device, where latest Xorg works fast, and see if Wayland also does. Comparison video would be nice.
For what it's worth, whenever I tunneled X over a non-local SSH connection, it was slow as molasses. That's because almost all contemporary GUI applications render to a bitmap anyway. Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995. Frankly, I'd rather my applications are at least somewhat aesthetically pleasing.
Of course it does, just like video-based remote desktop systems work great these days.
That's the point, X's claimed advantages here long since stopped existing, and nobody noticed because the "inferior" approach is perfectly fine with modern internet connectivity.
Honestly, X forwarding doesn't work that well in my experience, unless you have a very stable connection, with quite a bit of bandwidth (~1Mbps at the very least). I've had more success using xpra for forwarding, as I'm often connecting over Wi-Fi (hostel, campus rooms...).
It's also rather complicated to set up on the server side (xauth, magic cookie, etc.).
waypipe, on the other hand, was a breeze to use, even though it's very young. I tried with Firefox and 500Mbps of upload capacity, it worked fine as long as the window wasn't too large.
No, that is the client side. I went back and looked at the documentation, I was wrong and conflated two ways of doing it:
- X forwarding over SSH: this only requires changing X11Forwarding in OpenSSH sshd's config
- Plain X over the network, which is "secured" with `xhost` (insecure) or needs transferring the magic cookie or other authentication information
So, not nearly as complex to set up as I recalled, though it's much simpler to run a nested wayland compositor (which waypipe does) than an X11 server (which xpra does). The difference between X11 and Wayland remote access thins when xpra is involved.
Outside of major metros, in the US a lot of towns only offer up to 5Mbps down, and only then if you pay out the nose. Not sure if it matters for X forwarding, but upload caps are also ridiculously low even on otherwise reasonable connections.
> Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995. Frankly, I'd rather my applications are at least somewhat aesthetically pleasing.
That is entirely subjective, no? Personally I think Motif is one of the pinnacles of GUI design.
I also agree, and even if I did think that the modern stuff, with the impossible-to-see borders and the buttons where you can't tell up from down, looked better (I don't), I'd still prefer smooth remote operation.
Alas, that isn't the way the world has gone, and it's extremely expensive to be weird.
> Those that actually use the outdated X vector graphics operations look like utter garbage compared to anything post-1995
Modern UIs could do with being a bit more like 1995. Most of my work is performance sensitive so the first thing that goes are all the desktop effects that try to barf pointless rainbows and glitter in my direction.
For what it's worth, Exceed was never fine in my experience, and you were better off with the Cygwin server. At one time, the first thing to ask about certain sorts of Emacs problems that were reported was "Are you running Exceed?", with high probability the answer would be "Yes". I never understood why "we" paid for it. But, yes, X did always run on relatively low-resource machines. (I can't comment on how Wayland compares.)
Part of it is toolkits throwing sometimes multi-megabyte bitmaps to draw, which starts to cause problems the moment you don't have a zero-copy, GPU-accelerated way to draw them.
The old core protocol approach extensively used optimized graphics operations on the server side, with clients sending things like "draw me a rectangle / fill a rectangle / draw a bunch of lines" etc. - today you're going to send a pretty big bitmap instead (especially with high DPI).
It's the same problem that mobile devices faced, and it's related to a lot of the reasons Android devices were "janky" (and to the various hacks Apple did to make sure your application couldn't overstress the early iPhones - because just displaying basic UI was close to doing that).
I personally don’t know, but most likely when you are not forced to think about memory, efficiency, etc... most people just don’t. So if you aren’t actively developing on those lightweight systems your code won’t run efficiently on them.
Technically, what Wayland is doing - using the 3D GPU for everything - is the best way forward. Windows has been doing it since Vista. When done right, the gradients mentioned in other comments are free: GPUs have hardware to interpolate values (such as colors) across the vertices of a triangle, at no extra cost. Many other effects are either free or very cheap.
Engineering-wise it's really hard.
Microsoft reworked GPU driver model introducing WDDM. They invented a new user-facing API for that, introducing Direct3D 10. They did that in close collaboration with all 3 GPU vendors. They made user-mode components like desktop compositor itself, dwm.exe, and higher-level libraries to benefit from all that stuff. Initially they were optional things like WPF, Direct2D, DirectWrite, then with Win8 they introduced WinRT later rebranded to UWP. That one is no longer optional and is the only practical way to render "hello world, GUI edition" in modern Windows (possible to do with DirectWrite or legacy GDI but neither of them is practical).
The problem "render nice high-resolution graphics, fast" affects everything, the entire stack. Modern Linux has decent kernel infrastructure (DRM/KMS), but even so, remaining challenges are hard. Linux has less luck with user-facing GPU APIs (Vulkan is not yet universally available, neither is GLES3+ or OpenGL 4.3+). For some GPUs, quality of drivers is less than ideal. OS maintainers oppose stabilizing kernel ABI for drivers. There's no high level GPU-centric graphics libraries, I tried once with moderate success https://github.com/Const-me/Vrmac but that only supports one specific Debian Linux on one specific computer which happens to support GLES 3.1, and some important features are missing e.g. no gradient brushes or stroked pens.
I don't see any large party interested in making that happen. At least not for desktop Linux. Valve started to do relevant things when they thought Windows 10 was going to kill their Steam business model; then it became apparent Microsoft wasn't going to turn Win10 into an iOS-style walled garden, and they no longer have much motivation.
> Technically what Wayland is doing, using 3D GPU for everything, is the best way forward.
It's great for the common desktop case.
It's not as great for some other cases in which Linux is the preferred platform (headless servers, repurposed old hardware, etc).
The problem isn't, of course, that there exists a solution for this on Linux. It's that that solution is being pushed as the only one that should be maintained—and thus, exist—going forward.
Also phones and tablets. Also embedded devices that have a GPU + LCD; I have personally shipped Linux firmware where I used NanoVG on top of DRM/KMS to render a touch-screen GUI. Also kiosks, cars, smart watches and many other applications.
It's great everywhere you have a high-resolution screen. And it's mission-critical for ARM devices that don't have the CPU power to render that screen in software, at least not at 60Hz.
> headless servers
Why would you want a GUI there? Even Microsoft has console-only "core" editions of their Windows Server, since 2008. They made them because of competition from Linux, which had that from the very beginning and was thus far more suitable for cloud use cases. It still is, due to other reasons, but that's another story.
> It's that that solution is being pushed as the only one that should be maintained—and thus, exist—going forward.
I get why some people would want X to be maintained, but the thing is, it's very expensive, and not fun.
Developing game console emulators is expensive, but fun and people do that in their free time with great results. Moving forward GPU-targeted Linux GUI is expensive, not too fun, but there’re commercial applications with healthy profit margins (automotive, embedded, etc) so people from these areas are working on that tech. Patching x.org for repurposed old hardware, on the other hand…
Wayland's mistake is assuming every application has a local GPU. In reality the user has just one GPU attached to their monitor, which is fine for email and gaming, but serious tools run miles away in datacenters. We wouldn't be rewriting everything in javascript if only we hadn't forgotten how cool remote X was.
I work on a serious tool, specifically CAM/CAE stuff. Despite Google, Amazon and MS salespeople applying pressure to upper management (they want us to move to their clouds and are offering gazillions of free compute credits), our software works on desktops and workstations, and I have reasons to believe it's going to stay this way. With the recent progress of HPC-targeted CPUs, and the steady downward trend of RAM prices, I believe our customers are happier running our software on their own computers, as opposed to someone else's computers.
> We wouldn’t be rewriting everything in javascript if only we hadn’t forgotten how cool remote X was.
It was cool in the epoch of OpenGL 2. By the time Metal, Direct3D 12, and finally Vulkan arrived, it stopped being cool. Essentially, these APIs were designed to let apps saturate PCIe, which is on the order of 16 GB/s for a PCIe 3.0 x16 slot; even 10 Gb Ethernet tops out around 1.25 GB/s. You can't transfer that bandwidth over any reasonable network.
What could be the reason behind this? Asking as a noob
Abstractions piled on top of abstractions. Something that might have been 5 function calls deep on either side with a carefully crafted packet in the middle is now 100s on each side.
We have a culture that prizes programmer happiness above all, and this means everyone thinks "this is a mess, I'll put my own layer on top to make it nice, then work above that layer". Repeat 100 times and now you have processors literally 2000 times faster that struggle to even keep up with keypresses. But what no one wants to admit is that it's messy because the problem domain is messy, and sometimes you just have to live with the mess and get some real work done. The programmers of old understood this.
Anecdotally, I ran X11 on several Sun workstations (Motorola 680x0), a DEC Alpha, HP X terminals, etc. All of them were reasonably fast, or at least not any slower than the Windows and Mac boxes of the time (1990-95).
We played videogames on them. Does anybody remember Netrek, Xtanks, and an F16-vs-MiG flight simulator whose name I can't recall?
I think much of the problem is that today's systems have vast amounts of eye candy that was all but nonexistent back in the 80s and 90s. X terminals and 486s didn't have the resources to throw fancy visual effects on the screen, and sometimes didn't even have color displays.
It took about a decade for NetworkManager to gain a CLI; all that time it tried to eat more and more control over the network stack. It was also opinionated in the worst possible way, like responding to requests to support ad-hoc wifi mode (back when NM was only about WiFi) with "your request is dumb, you're dumb, and we will never do that".
It got usable as a somewhat general thing within the last few years (after having already wrested control over the network from you first); of course, by the time it got useful, work had started on replacing it with a new thing.
Wish that 2005-me had been interested in keeping the citation, but I strongly remember it because I was dealing with my first WiFi-enabled device at home at the time - and ad-hoc was the only form of network connectivity for me for a long time.
At the time, NetworkManager was gaining steam as "the" solution to wifi woes, and well, I tasted dirt ;)
NetworkManager was literally just a GUI initially, wasn't it? IIRC in 2006, connecting to wifi from the CLI on a system with NetworkManager involved running the wpa_supplicant commands that NetworkManager wrapped.
It already had the beginnings of its current architecture, but at the time it only wrapped iwconfig and wpa_supplicant, yes.
But the decision to turn away the people asking for ad-hoc support (especially when, outside of the USA and possibly a few other countries, access points were still not common equipment) was made by design, not because it would have required any significant increase in code (IMO).
This isn't true. It was never as good as Intel's Connman, which was designed to be modular from the start. NetworkManager started out as a UI app and was then evolved into what it is now. It's still not as fast at connecting to WiFi as Connman.
I guess it was adopted instead of Connman because RedHat.
Maybe Intel was not cooperative about Connman. Trying to contribute a patch to Connman was the worst patch-contribution experience I've ever had. On the official IRC channel, over several days and times of day, there were only people who could tell me about the developers, but the developers were not there. The bug tracker required writing an e-mail to someone at Intel to open an account. I don't remember details about the mailing list, but if I did write to it I was ignored as well. In the end the patch never went in; I'd had enough. And yes, the patch made sense. Years later I saw one that seemed to fix my problem in a similar way.
Connman is better at least insofar that it is less code than NetworkManager and that it connects to a Wifi network in under a second instead of several seconds. But I believe it can also do less, for example regarding VPNs and such.
NM doesn't connect to wifi by itself, it uses wpa_supplicant for that. If something is slow, it is wpa_supplicant. Fortunately, NM backends are modular and you can use Intel's iwd instead.
A while ago, maybe about a year ago, after upgrading Debian I discovered that I was no longer able to connect to my 5GHz wifi network, but could connect to 2.4GHz networks. Switching from NM to wicd fixed the problem. Same kernel, same wifi card and driver, same wifi AP with the same configuration. As baffling as it seems, getting rid of NM was the only thing I had to do, or could do, to make it work again. NM is now dead to me.
Or systemd? I've read about complaints regarding it but never had any issues myself on the many dozens of servers that I've managed. On the contrary I find systemd very easy to work with.
That's a workaround, not a fix, and it might break applications which legitimately need more time to stop!
A proper fix is to integrate startup and shutdown of all applications with systemd. That's something not properly supported everywhere yet. For example for KDE that's currently in the works: https://blog.davidedmundson.co.uk/blog/plasma-and-the-system...
It took me about 10-15 minutes to turn my eyes towards SELinux, after some initial debugging.
I turned off SELinux temporarily, activated the connection successfully, and determined that it was indeed SELinux that was preventing NetworkManager from doing its job.
Then I re-enabled SELinux and went to look at /var/log/audit/audit.log to see what it had to complain about, and indeed some files created by NetworkManager in /root/.cert had bad contexts.
I set the proper contexts (semanage fcontext -a -t <context> <pathregex>), applied them (restorecon -Rv /root) and all was well.
I had to study this stuff in order to get Red Hat certified (RHCSA, passed with 300/300).
Getting certified is absolutely worth it. Getting certified is the difference between "10-15 minutes to get a diagnosis" and "I gave up on SELinux about 20 years ago".
You can have SELinux in a learning mode where it gives you a notification when it blocks something, and a command you can run to make it not block that action any more.
SELinux is, at heart, just about labels. If something tries to do something but doesn't have the right label, SELinux will block it.
I agree working with SELinux is a bit of a PITA, but if you learn sealert, ausearch, and/or audit2allow it can severely reduce the pain and allow you to keep SELinux enabled. I really like this page personally: https://wiki.centos.org/HowTos/SELinux
So, for example, I applied updates to a set of Fedora systems recently.
The updates to NetworkManager decided to change network interface names from looking like enP51p1s0f0 to enP51p1s0f0np0. The rename broke the (also NetworkManager) channel-bonding configuration, resulting in the machines being unreachable and requiring a physical visit to get them back online.
NetworkManager adds a lot of automagic, but outside of simple, widely used configurations ("laptop with wifi") it causes unpredictable and unreliable behaviour.
I especially like the standard Fedora server install, where the NICs present during the install all get DHCP enabled on them, but only those NICs. So if you move a network card to another PCI slot after the install, it will mysteriously not work. ... I see nothing wrong with not automatically bringing up interfaces on a server, but mysteriously bringing up some and not others makes for mystifying and difficult-to-diagnose issues that no one seems to know how to fix.
If you are cable-connected and on the edge of your wifi range it will periodically drop your cable connection to check if wifi is good. Not great.
Connecting to wifi takes way longer than without it.
Simple dhcpcd for ethernet, plus wpa_supplicant with a hand-coded config, works way better.
> it will periodically drop your cable connection to check if wifi is good
I don't know when you encountered that behavior, but NetworkManager has for a long time handled wired and wireless as two independent connections; it just sets routing priority to prefer wired if available. I believe you that you observed that behavior, but to the best of my knowledge this does not match any current behavior of NM.
As long as Red Hat continues to contribute to the community, and the majority of users consider their output superior to the alternatives, the Linux ecosystem will be dominated by Red Hat.
The entire Linux userland is pretty much a Red Hat thing at this point. Deal with it or find another OS.
Today I've updated to the newest Kubuntu release 20.10 and considered switching to Wayland, but then abandoned the idea when I found out that middle click copy paste is only implemented in Plasma 5.20, and I only have Plasma 5.19. I've seen they've fixed the screenshot program though (which I remember to have been an issue on older versions).
I'm looking forward to 21.04 for my next attempt to switch, maybe by then Firefox's native wayland support will have progressed as well. After all what's the point of Wayland when most of your software uses xwayland :).
Firefox native support is pretty much here already (I've been using it for quite a while now and it works really well). The only missing piece that is not merged yet is screen sharing, unfortunately. It's available in Fedora's Firefox build.
All GTK3 apps run well on Wayland too.
Qt apps run well but don't feel as polished (I mainly have issues when I use two screens with different scaling).
The last big things are chromium and electron. Once those have ozone merged and enabled in stable, it's gonna be a massive step forward. Those are the last apps I can't run natively on wayland. Ozone is in the beta branches now so, hopefully, it will happen next year.
That said, it's not that big a deal to run those on XWayland as long as there's no scaling. As soon as you're on a HiDPI screen or just use scaling, they become blurry, which is quite annoying.
I personally still prefer wayland over x because the experience is just better. No flickering, it's smooth and feels more snappy.
> The only missing piece that is not merged yet is screen sharing unfortunately. It's available on Fedora's Firefox build.
In my experience, it doesn't work yet; Firefox can select the IDE window (though you have to select it twice for some reason), and I can see the shared screen on my side (so the Wayland part seems to be working fine, since Firefox can get the window contents), but to my coworkers it appears frozen (they don't see any changes I make to that IDE window). I don't know if it's a bug in Firefox or a bug in Google Meet.
It is blurry only if you have fractional scaling enabled (enabled is enough, even if you use integer scale). With fractional scaling disabled, even xwayland apps are sharp.
Yup, I've been running the ozone AUR release of chromium for a while, and outside of a couple remaining rough spots (mainly menus used for extension development/debugging) it's been great.
I run a 4k display and a 1920x1080 display side by side, and X is utter garbage at handling it.
> I run a 4k display and a 1920x1080 display side by side, and X is utter garbage at handling it.
I'm running 3 physical monitors, split to 4 "virtual" monitors with overall 3 different resolutions and I have no problem whatsoever (minus dealing with nouveau on 1060).
Funnily enough, earlier today I checked the status of Wayland on my setup (Arch, with KDE), and my main issue is still mouse gestures. I'm currently using easystroke, which is itself abandonware, but until I find something that works under Wayland I can't change yet, no matter the quality of the DEs etc.
Yes and something akin to easystroke would have to be implemented in each compositor under Wayland. You are thus dependent on the window manager you want to use to implement it. I asked the maintainer of the window manager I would want to use whether they would accept a PR for something like that and they said no, they would not.
So, I would have to maintain a fork of my window manager and compositor to keep using something like easystroke under Wayland, instead of using a finished tool that hasn't needed any significant maintenance in 7 years. All in the name of ‘security’.
I don't think we'll ever get into that situation. Mir is gone now and Arcan is a pretty niche solution, so I think that Wayland will continue replacing X.org unabated now.
Happy to see Ubuntu make the switch, it'll pull in a good number of daily users and we can iron out the last few remaining issues.
Frankly, Wayland has been excellent to me. It's hard to describe how nice it is that I can't remember the last time I had to open an xorg conf file to try to get monitors working, or get even basic functionality from my touchpad.
Oh, I totally agree—Sway is my favorite window manager on Linux by far. systemd and Wayland and other similar projects are controversial, but they're making huge improvements in the Linux desktop space and the lack of fragmentation is refreshing.
I have been using sway on Fedora for a year. It's been really good so far. It's a smaller niche than X.org's, but I get to do everything I need. HiDPI support is just right: just throw output eDP1-1 scale 2 in the config and you're set.
Clipboard works perfectly splendid, screen-sharing works (not as perfectly splendid as clipboard does), input works, chromium/electron is getting support for native wayland. Qt and GTK Wayland support's quite good.
I have had no problems whatsoever and I invite you to try it. I have no hard-proof evidence or numbers to support my opinion, just try it.
This is sway, i3 on Wayland, a tiling window manager. Not some commercial piece of software that has to target the lowest common denominator to survive in the market. It's meant for you to configure, and it's very configurable. So it doesn't set any defaults bar what's needed to launch the manager. The rest is up to you.
Configurability is great, but that isn't an excuse for having poor defaults. A child comment to yours mentions that Sway now uses a heuristic to find a "reasonable" default scaling factor, so it appears that Sway currently does the right thing.
The "standard" for anything today should be the physical display size divided by the pixel count to give the exact resolution for X and Y in DPI. Scaling should be relative to that physically correct resolution.
We've inherited a lot of baggage and odd conventions, some of which were wrong to begin with. I don't think we should be carrying on with it if we can do better. Having these scaling factors directly correspond to physical reality would be a start.
I'd rather just have the configuration file be simple and well documented and let me make the decision. My monitor running at maximum resolution is about 163 DPI, so an automated system could guess both ways. 200% scaling works well for me, and it's a single line of configuration, done once, and I don't have to worry about heuristics changing behind my back.
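For what it's worth, the physically-correct arithmetic under discussion is tiny; a sketch in C (the 27-inch 4K panel numbers are my own example, chosen to match the ~163 DPI mentioned above):

    /* DPI = pixels / physical size in inches; 25.4 mm per inch. */
    static double dpi(double pixels, double millimetres) {
        return pixels / (millimetres / 25.4);
    }

    /* A 27" 3840x2160 panel is ~597 mm wide: dpi(3840, 597) ~ 163.
       A heuristic default could be round(163 / 96.0) = 2, i.e. the
       same 200% the parent comment configured by hand. */

Whether the compositor guesses this or the user writes one config line is exactly the trade-off being argued here.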
Why do you think it needs any of that? For that matter why do you think windows, mac, android, chromeOS etc. all need "explicit configuration"? They all simply set a reasonable default scale and then give you an easy way to pick a different one if you want.
Just out of curiosity, how does screen sharing work for you? I've been trying to set it up for work (Slack/Teams), but to this day it just doesn't work. I'm using Debian sid, so pretty up-to-date packages tbh.
I just start new xorg session for screenshare, which, frankly, sucks.
Well, it works on FF and certain builds of Chromium (those built with the ENABLE_PIPEWIRE flag on). I am using xdg-desktop-portal-wlr and it works quite well. I was able to present my desktop to others without significant hiccups. Once Electron enables Ozone (and builds with PIPEWIRE on) we will be able to use screen sharing also on Teams and other Electron-packaged apps. This obviously works for my workflow and I understand it won't work for everyone. I don't know about Zoom, but I heard (and thus am not entirely sure) it works only on specific distros with some specific GNOME versions.
Yup, this is sort of here with pipewire at this point, but it hasn't propagated through to the electron apps.
Slack is still a pain point for me as well, but mainly because Slack continues to demand that I install the desktop app for calls/screen sharing.
Zoom's desktop app also doesn't work, but I can use zoom-redirector and have the calls immediately open in my browser (you can get the same thing without the extension, but it requires you pretend that you can't install their desktop app and several button presses for every meeting).
My guess is that we're about 12 months away from having it work by default in most places.
The Electron apps are not there yet. I too have to use Slack ATM, and my workaround is to have Pipewire working with the browser (Chromium and Firefox in my case) and use Slack in the browser when I need to share my screen. If the target you want to share is an X window, you won't even need Pipewire, since WebRTC will just work without it.
I fear you are right. While there might be commits in the Wayland repos, feature-wise progress has ceased. Wayland's broken architecture has made progress hard to impossible. Porting of popular window managers is extremely slow since there is just no thought given to X compatibility. Input handling by each application on its own is insane and broken. Feature-consistency across compositors on things like screenshots is a pipe dream. A promised easy ssh -X replacement doesn't work right after a decade.
Wayland's broken architecture makes progress slow through unnecessary duplication, incompatibility and the lack of a smooth migration path for many software packages (usually it's rewrite time). Wayland should be abandoned and the design redone.
The design overview slides are their own critique: Wayland does almost nothing besides render buffer handling. Input? The application's job. Window decorations? The compositor's job. Application talking to the compositor? Somebody else's job. Clipboard? Maybe the compositor or the toolkit. Screenshots and remoting? Somebody else's job, but only after Wayland has bored the appropriate holes in its security model. This all leads to a ton of incompatibilities between compositors, toolkits and applications. And beyond Gnome, the full "featureset" is still not implemented, where "featureset" is barely adequate as an X11 replacement.
But the buffer handling is great, no more flickering...
... so long as you don't care about latency and don't mind a $3000 top of the line 64 core desktop feeling slightly slower than a machine from 20 years ago.
This depends on the user. Personally I never minded the extra frame of latency.
And some of the newer Gnome desktops have even removed that, thanks to some tricky work by one guy, as I understand it.
I may have misunderstood the explanation but it seems to involve some nice timing getting all of the application buffers swapped just before the main GPU screen buffer swap. This gives applications long enough to draw updates, for the most part, and gets all updates into the next screen buffer update instead of the update after that.
As I understand it the extra latency is relatively hardware independent, and caused by extra whole frames of delay from additional compositing layers. I expect an anaemic SOC to be slow, it's less fun when extremely high end machines are also slow.
Every time I pull an old system out of mothballs and start it up, I'm disappointed at how much less responsive the modern systems sitting right next to it feel.
This comment is mostly correct (as a daily Wayland user), with a few exceptions.
> Wayland does almost nothing besides render buffer handling. Input? Applications job.
Applications don't do more work to handle input on Wayland as opposed to e.g. X11. It's still event-based, and the compositor feeds input events to applications that can process them as normal. Keyboard, mouse and touch input are part of the core Wayland protocol, and tablet input is part of an extension that all major compositors fully support.
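A minimal sketch of that event flow using libwayland's C client API, assuming a wl_pointer obtained from a wl_seat bound at version 1 (registry setup elided; all callbacks must be non-NULL, hence the stubs):

    #include <stdio.h>
    #include <wayland-client.h>

    static void enter(void *d, struct wl_pointer *p, uint32_t serial,
                      struct wl_surface *s, wl_fixed_t x, wl_fixed_t y) {}
    static void leave(void *d, struct wl_pointer *p, uint32_t serial,
                      struct wl_surface *s) {}
    static void motion(void *d, struct wl_pointer *p, uint32_t time,
                       wl_fixed_t x, wl_fixed_t y) {
        /* The compositor delivers coordinates as events, X11-style;
           the app never touches the hardware itself. */
        printf("pointer at %.1f, %.1f\n",
               wl_fixed_to_double(x), wl_fixed_to_double(y));
    }
    static void button(void *d, struct wl_pointer *p, uint32_t serial,
                       uint32_t time, uint32_t btn, uint32_t state) {
        printf("button %u %s\n", btn, state ? "pressed" : "released");
    }
    static void axis(void *d, struct wl_pointer *p, uint32_t time,
                     uint32_t ax, wl_fixed_t value) {}

    static const struct wl_pointer_listener pointer_listener = {
        .enter = enter, .leave = leave, .motion = motion,
        .button = button, .axis = axis,
    };

    /* After wl_seat_get_pointer(seat):
       wl_pointer_add_listener(pointer, &pointer_listener, NULL); */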
> Window decorations? Compositors job.
Kind of, it's the job of the application (client-side decorations) or compositor (server-side decorations). The compositor can choose which to use. CSDs give more custom look-and-feels to applications that have them (think Firefox or Chrome); SSDs provide consistent looks across all apps. GNOME only supports CSDs, but is an exception in that regard.
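For the curious, SSD negotiation is itself an (optional) extension, xdg-decoration; a sketch, assuming deco_manager and toplevel were already bound/created elsewhere. GNOME's compositor simply doesn't advertise this interface, which is the CSD-only situation described above.

    #include "xdg-decoration-unstable-v1-client-protocol.h"

    /* Ask the compositor to draw the title bar and borders for us. */
    static void request_ssd(struct zxdg_decoration_manager_v1 *deco_manager,
                            struct xdg_toplevel *toplevel) {
        struct zxdg_toplevel_decoration_v1 *deco =
            zxdg_decoration_manager_v1_get_toplevel_decoration(deco_manager,
                                                               toplevel);
        zxdg_toplevel_decoration_v1_set_mode(
            deco, ZXDG_TOPLEVEL_DECORATION_V1_MODE_SERVER_SIDE);
        /* The compositor answers with a configure event carrying the
           mode it actually chose; it may still insist on CSD. */
    }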
Canonical announced Mir out of nowhere in an attempt to gain control just like they are trying now with Snap. After the announcement of Mir their developers went to IRC and made abundantly clear they had no idea how Wayland works and that Mir was useless. Where is Mir now? Using Wayland.
People who were on the fence about supporting Wayland were now even more convinced they should ignore it. That's one of the reasons a decade later you still have this much FUD.
Nice, ignoring the part where they realized Mir was useless. They even went on an edit spree on their wiki page.
If Mir was so good and superior they would just keep developing it. Isn't that obvious? If the company who already spent all this dev time aka money on Mir doesn't believe in it, why would anybody else? They were even eating their own dog food and had some major industry pull at their disposal. The answer is they fucked it up.
> If Mir was so good and superior they would just keep developing it. Isn't that obvious? If the company who already spent all this dev time aka money on Mir doesn't believe in it, why would anybody else?
Not that I care or know a lot about Mir, but do you seriously believe that it was always the best technology that became successful and triumphed over its competitors?
> Nice ignore the part where they realized Mir was useless.
> If Mir was so good and superior they would just keep developing it. Isn't that obvious?
No, that's an arbitrary conclusion you've made. There are multiple reasons why a project might be killed, even if it's a good one. Canonical killed Ubuntu Touch but now it's gaining traction again because we have things like the Librem5 and PinePhone. Premature if you ask me but it makes sense as a business decision.
If we look across industry, would you say killing Google Reader was because it was inferior to others? I wouldn't.
The problem was that the "whole" open source community, sponsored by RedHat, Intel etc., bashed the project into oblivion, left Canonical as the only contributor, and put all the effort into Wayland. This is just one of the reasons why Linux never reaches highs as a desktop OS.
Kind of. The Chrome browser itself doesn't run on Wayland, it runs on a custom compositor (I believe Aura?). Sommelier[1] is a Wayland compositor used for Linux apps on CrOS (Crostini), but Chrome doesn't use it. There is an ongoing effort (Lacros) to make the Chrome browser itself run under Wayland on CrOS, but it's not public outside of development builds (and not yet on par with the "native" version).
> This should hardly be surprising but a prominent Intel open-source developer has conceded that the X.Org Server is pretty much "abandonware" with Wayland being the future.
I have been using Fedora with Wayland daily for over 7 months now and it works pretty well.
I see that there are a lot of complaints about Wayland here on HN - about input, screenshots and other stuff. But I have not experienced any of that. Input works perfectly and I have no problems with screenshots or screencasts.
Maybe it's that I have well supported hardware (Thinkpad X1C7) or is it something that I'm missing?
"Works for me" is a risky defence; if you are a slightly demanding Wayland user then it is fine, if you have unusually simple needs it isn't a useful contribution.
It isn't anything to do with the hardware, it is the design assumption that isolating application's input and output should be mandatory.
In hindsight; that was a design mistake. The correct design is probably something like isolation by default but optional (ie, allowing sharing). The current design means further protocols and de-facto standards are required to support, eg, streaming and screenshots. That is bad for an ecosystem that relies on low barriers to entry to get good software written.
Basically, there needed to be a security model but the developers skipped it because it seemed like it shouldn't be the compositor's job. And after a very painful couple of years, seems quite likely that it was the compositor's job.
>In hindsight; that was a design mistake. The correct design is probably something like isolation by default but optional (ie, allowing sharing).
I'm not really convinced it was. Frankly, Wayland does a great job handling the tasks I want my display server to handle. I don't have to wade into config files every time I plug in a new HID, or a new monitor, and my touch pad is a joy to use.
I think how screen sharing works is actually very dependent on the system in question ( I want a different set of prompts on my desktop from my laptop from my server), and that leaving that complexity out of the display server was a rough, but correct, decision.
That said, I'm with you - I held off on Wayland for a long time because screen sharing and screen recording just weren't there. At least for me, Pipewire is now a working solution. I won't go back to X.
You are probably only using Gnome and GTK3 applications. Everything else lags behind (because in Wayland you need to reinvent everything for each WM and toolkit). Everything is also incompatible because of all those reimplementations, so if you don't just stick to the one true Gnome way, it will be broken. If you just do what Fedora is designed for, I agree that it can be fine.
Well, I am using sway, and I have found that not true in my experience. Qt has also good support for Wayland and so do SDL-based apps. I don't use any GNOME native applications and I manage just fine.
It depends on your desktop environment and the applications, because the only compatible parts of Wayland are the dumbest "draw a rectangular window" and simplest input support (assuming your "WM" implemented the input right).
Essentially, what could depend on shared standards and implementations in X11 can't do so in Wayland, and there are two major forks when it comes to protocol extensions, as well as a major fork between GNOME and everyone else on the topic of server-side decorations.
It's an evolution of the stance that started in early GNOME 2.x times and crystallised with GNOME 3. Similar to how GNOME 3.8 was used to push systemd on everyone, similar to how they tried to push their own ideas about input methods on everyone (since I don't use IBus, I don't know if they finally succeeded - fortunately UIM and XIM still work).
And yet, they have built the most popular desktop environment for Linux. To me, GNOME (on Fedora) feels more polished and visually consistent than Windows 10, which is impressive considering the massive imbalance of resources between those two projects. Would that have been possible if they listened to the zealots online who complain if they don’t support every possible configuration under the sun? I don’t think so.
It's not about supporting every possible configuration under the sun. It's often about not supporting the bare minimum that would make it a good environment, based on "know better" from people who have no relevant experience.
The IBus case is a classic example - there was a high-handed declaration that having one single global IME state is "easier" for users. The problem is when you regularly have to use languages that are incompatible in writing systems and input methods. Whether it's one of the CJK languages, or switching between one of the cyrillic variants and latin, life is much easier when you can have separate input state between, let's say, an instant messenger and your IDE.
For me, I recall the "canary in the coal mine" was when they refused (despite earlier promises and roadmaps) to re-implement certain things related to printing, again in a way that probably didn't bother the developers.
A similar case involves all the very deep integration with systemd, where they essentially declared that there's one Operating System under the Sun and its name is Fedora.
And it might feel more polished than Windows 10 on surface, yes. But then it's much less capable and the resources in Windows go towards things like not breaking people's software and behaviours.
Being better than Windows 10 is a low bar to clear, and doing so doesn't make your software not shit.
Honestly, the current state of mainstream desktop environments -- open source or proprietary -- is pretty awful with the exception of perhaps KDE and little ones like XFCE and LXDE. It kind of makes me glad I didn't hop on the GNOME train in the late 90s -- I could see the awful coming even back then -- and just stuck with a bare WM.
My personal conspiracy theory: the GNOME/Freedesktop/Red Hat crew is pushing Wayland so hard to prevent people from using 30 years' worth of lightweight X window managers that exist and are better than GNOME.
I last tried Wayland on Ubuntu 19.10, but quickly went back to Xorg after discovering some issues trying to share my screen on Zoom. I don’t remember what the issues were specifically, but given that Xorg was working perfectly fine, it wasn’t something I was willing to spend much effort troubleshooting.
It sounds like screen sharing is a known problem area? Does anyone know if they have fixed these issues in later versions of Zoom or Ubuntu?
Screen sharing and recording are a pain point because the default security model of Wayland doesn't allow applications to see what other applications are rendering.
That said, Pipewire is a working solution for Zoom today on Wayland. I'm not on Ubuntu, so I don't know if the Chromium package they ship has Pipewire enabled by default, but my guess is that they do.
I use a small extension to automatically default Zoom to opening calls in the browser (https://chrome.google.com/webstore/detail/zoom-redirector/fm...) and there I can share screens just fine (Full desktop, application window only, etc - For the most part, things work fine).
I'd give it another 12 months if you don't want to have to think about it at all, but I'll be honest, Arch/Gnome/Wayland is the happiest I've EVER been on desktop linux.
I think the solution is not Wayland but an X12, that is, a protocol that solves the problems of the aging X11 protocol while also not completely breaking compatibility and requiring the use of a separate Xorg server within Wayland. I wish people had gone that way instead of fully dismissing Xorg.
I disagree and think a clean break was the way to go. X had acquired a whole lot of legacy baggage and compatibility with that should be provided by a separate piece of code.
Wayland should have been a completely new thing built with the lessons learned from X but vastly simplified for how modern display systems are actually used. Unfortunately it was built with no intent to handle many of the common use cases X already handled just fine, leaving third parties to develop their own ways of handling them, leading to some fragmentation (Linux really needed more fragmentation!) and very slow adoption.
A lot of this is due to a design culture that seems common around the Gnome project.
Victims of this memetic disease have caught on to the idea that 80% of usage needs 20% of the features.
This might well be true in a literal sense, but it ignores that 99% of the users need one or two items from the remaining 80% of the features and its just a different one or two items for each user.
The result is something that isn't completely functional for all but a tiny portion of the user base. :( Workarounds exist to expand that somewhat, though they're often extremely poorly maintained.
For example, I had a gnome3 using system that suddenly started insta-crashing anytime a GTK dialog was opened on it. I eventually had to blow away all its gconf to recover it.
It turned out that at some point someone decided that 300% and 400% scaling had no purpose and caused issues because in some cases they messed up UI layout. They removed them and the removal was just shipped along with security & bugfix updates in fedora. The way it was removed caused instant crashing for people that previously had them enabled!
I'm fixed now, though with the display at 200% I have difficulty reading it (It's a 4k TV that I need to read from a long distance away) ... but since I can't use gtk interface stuff on it at all now I guess I won't be opening any bugs on minor layout issues that might be caused by increased scaling. PROBLEM SOLVED :(
The big mistake with W was leaving so much of function specification to the implementor. W was a spec, but totally incomplete for what was needed to build a usable power user desktop. The Linux DE landscape was already fractured to the degree of inefficiency for such a small user base, and with W this fracture actually deepened due to the Great Unsharing of implementation details. Nothing global. Everything local, from decorations to whatnot. Now, besides competing implementations of an entire display server stack, you have the huge communications & politics overhead between the camps that is required to agree on such simple "protocols" like "inhibit screensaver start" (the "idle-inhibit wars") --- not a good use of resources.
I think the future of X11 will be that if a vendor --- likely Nvidia --- sees any point in it down the road, they'll fork Xorg and provide, complete with their own driver bundle, the display server.
For now, no vendor of drivers like Nvidia is likely to be concerned about X11 stabilizing because that's less toil for them to keep their drivers stable on Linux. They are busy enough with keeping up with the Linux kernel breaking their stuff every release <--- not a great advertisement for vendors to even support Linux; looked the same with X11 to me during the 2008-2015 period. Changing X11 was not economical to support without a great justification.
Some software is finished --- maybe it's time to call X11 finished.
With W I assume you mean Wayland? Your comment is confusing because the W Window System [1] is in fact the predecessor of X11, and that's what I thought you were referring to.
Those standards need to exist, but those standards do not need to exist as part of the display protocol.
People make this mistake over and over and over. Nothing prevents the compositor writers from deciding on a common API for screenshots and similar things (and there has been movement in this direction).
Wayland is in many ways the exact opposite of X design principles. It's a giant rebellious-teenager "fuck you, daddy!" to X11, not really a successor. A successor technology would be great.
If Wayland is the future, then the future was 12 years ago.
Since it hasn't really caught on or solved the same problems that X.Org accomplished a long time ago, it seems kind of pointless to continue pursuing it at this point. In my opinion, the best thing about X.Org is that it's no longer changing. I remember installing updates for X.Org all the time and booting to a black screen on multiple occasions.
The issue is trying to implement a radical change in the userspace Linux ecosystem. It's not possible without a ton of effort, so it takes an incredible amount of time, sweat and tears. That's the reason it takes 12 years and counting.
The utopian philosophy of "Linux is about choice" has doomed any idea of a Linux desktop.
> The utopian philosophy of "Linux is about choice" has doomed any idea of a Linux desktop.
If it isn't about "choice" - i.e. user control and freedom - then what is the point of using Linux in the first place, instead of sticking with Windows, where things are already chosen for you and, more often than not, work out of the box because it is by far the most tested-against desktop environment?
Libre/open source is the point. Not being restricted by proprietary software and walled gardens.
The only truly successful open source software running on a Linux system is the kernel, because there's NO choice. No talented teenager can write their own Linux kernel that does Y instead. Imagine what would the world look like if there were 50 half compatible, community-managed forks of the Linux kernel.
The year of the Linux desktop won't come because apart from the kernel the ecosystem is incredibly fragmented and reaching consensus is pretty much impossible, so in 2020 we're still deciding whether to do client side or server side decorations.
> Libre/open source is the point. Not being restricted by proprietary software and walled gardens.
And the point of libre/open is the word from FLOSS you forgot to add: freedom, ie. being in a position to decide and control your software.
Libre/open/free software isn't an end goal by itself; it is the means to be in control.
> The year of the Linux desktop won't come because apart from the kernel the ecosystem is incredibly fragmented and reaching consensus is pretty much impossible
Until Wayland came along, X11 was the de facto window system for Linux - if you wrote an application targeting X11, it would work on every Linux desktop system.
Wayland fragmented the window system landscape.
> so in 2020 we're still deciding whether to do client side or server side decorations.
This wasn't a question in the past; everyone agreed that server-side decorations are better because they allow users more control through their window managers - with exceptions for special cases, of course (the WMs didn't forbid it, after all; applications could do both).
It wasn't until some GNOME "designer" saw the iPad, got jealous they didn't think of it, and then got mad that people could actually have a choice in how their Linux systems looked and behaved, that we got client-side decorations.
I'm kinda annoyed this is the case before the GBM/EGLStreams argument got resolved...
I've done some programming against GBM directly (wanted an OpenGL ES application to be in a "kiosk mode," didn't want to have to install X / a Wayland compositor + configure it), and the whole DRM+GBM stack is kinda _terrible_. Generously, one could call it barely documented; the majority of the useful and correct documentation I found was on Mesa contributors' blogs, and there were still edge cases in the API that were getting ironed out in the 5.9 kernel release.
I haven't needed to write against EGLStreams, but I might give it a try to see if it's as much of a pain or not; from the 1-page overview on the nvidia docs, I suspect not -- it sounds quite similar to the VK_KHR_swapchain extension.
Yeah, the KMS docs need some love. I'm trying to help with that. But note that even with EGLStreams you'd still be using KMS. And GBM's API is pretty small, basically just gbm_bo_import/gbm_bo_create/gbm_bo_get_*.
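Agreed that the surface area is small. A sketch of the whole allocation path for anyone curious (the device node path varies per machine, and error handling is elided):

    #include <fcntl.h>
    #include <stdint.h>
    #include <gbm.h>

    int main(void) {
        int fd = open("/dev/dri/card0", O_RDWR | O_CLOEXEC);
        struct gbm_device *gbm = gbm_create_device(fd);

        /* One scanout-capable, renderable buffer on the GPU. */
        struct gbm_bo *bo =
            gbm_bo_create(gbm, 1920, 1080, GBM_FORMAT_XRGB8888,
                          GBM_BO_USE_SCANOUT | GBM_BO_USE_RENDERING);

        /* These two values are what KMS wants next, via
           drmModeAddFB2() and then drmModeSetCrtc(). */
        uint32_t handle = gbm_bo_get_handle(bo).u32;
        uint32_t stride = gbm_bo_get_stride(bo);
        (void)handle; (void)stride;
        return 0;
    }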
Interesting, probably depends on hardware then. On Pi4, that same workflow (drm/kms without desktop managers, with GLES on top) delivers superior image with no tearing (tearing does happen in windowed mode under load), while consuming less resources than windowed mode. This library: https://github.com/Const-me/Vrmac
NVIDIA's EGLDevice / EGLOutput / EGLStream API isn't too bad in and of itself. The annoying thing is trying to have backends for both it and DRM / GBM in the same application. The programming models are just so different.
The fact that both X.org and Wayland are unusable and have horrible architectures for various different reasons, plus both seem to be abandonware, does not give me hope that desktop Linux will ever become a meaningful thing within the next two decades.
Another proof that open source is only half of it, it doesn't matter if the code is available when there isn't anyone around to actually do something with it.
Nvidia doesn't care about proper support of Wayland, many devs don't care about proper support of Wayland, Wayland's configuration itself is very limited, for example, forced software compositing. As much as I want to switch from X.org, I simply can't due to these limitations
One important point to note in these discussions is that X.Org is a specific implementation of the X11 protocol (the canonical implementation as it happens).
Wayland is a protocol and is being compared to X11 in this context. There are multiple implementations including:
1) Weston (the reference implementation)
2) Mutter (Gnome)
3) Kwin (KDE, also implements X11)
It's important to draw the distinction as many/most of the limitations people come across are in the implementation not with the protocol. People using different implementations will come across different issues too.
> People using different implementations will come across different issues too.
...which is the biggest problem of Wayland in my opinion.
By defining protocols only, we now have the development fragmentation problem. The desktop experience will be more inconsistent between DEs than in the era of X11, and users of minor DEs will eventually be forced to switch to major DEs like Gnome, because the other DEs won't have enough devs to maintain their low-level implementations.
Wlroots as a library for implementing a Wayland compositor has done wonders to help develop small-scale Wayland "desktop environments". Sure, you pull in the whole wlroots thing, but under X, you pull in xlib plus some X11 server.
Is there anything to admit, though? X.org has been in so-called "maintenance mode" for years and any development on the current main project is dead in its tracks; this is a well-known fact. But no drop-in replacement exists as of now, nor does it seem there will be one in the foreseeable future. Wayland is an alternative, if you limit yourself to anything that is properly supported, with fundamental design differences (necessary for a more "modern", efficient approach to current technologies), but I personally can't help thinking it's dead on arrival. There was some quote from the original developers of X.org about only a few people in the world being able to grasp it in its entirety, and I believe it's got to be true.
You can thank Zoom for using the proprietary GNOME D-Bus API instead of xdg-desktop-portal. (And you can also thank Zoom for being bad in general, I guess.)
Yep. It works ok with some apps, but Bluejeans and Webex refuse to play nice. Google Meet seems to work ok. I'm sure these are complicated issues and that Linux desktop users are a tiny minority, but dang it's painful.
I try to steer everyone toward Google Meet if possible, and unfortunately for the others, I have a decent amount of sway :-D
X.org is one of my longest-used pieces of software. I've used it for 25 years (at the time it was called XFree86 and you had to calculate your own modelines to get hi res), and it has worked incredibly well for me. I've written software some 20 years ago that still runs just fine (some xscreensavers) and I still use it today.
Oh, man. "It this a Sony Trinitron 19in CRT running at 65Hz, or is this a Gateway 17in monitor running at 60Hz? Why can I only see the left half of my desktop on the right side of the screen?", etc.
Most work on X.org happens on its extensions and drivers. Just because the main server is stable (i.e., no new features are introduced) doesn't mean that the project is dead.
If Wayland would work out the box, I would love to switch. Last time I checked (a few months ago), it didn't work too well for me and after a week or so I returned to my working X.org setup.
The beauty of the software world is that "abandonware" can live on for decades.
Let the impatient get on with beta-testing today's developments, and I'll get around to using them 20 years from now, when only the good stuff remains.
I still do most of my writing and publishing work from Windows 95 and Me, and I love it, because everything is a solved problem, and no new patches to break things are coming out.
Keeping it within NAT and VM is plenty secure enough for my purposes, and IE6 is plenty enough modern for me.
Yes, I agree. X11 is a historic artifact that should be honored in a museum now.
The client/server architecture of X11 hit the spot when terminals and thin clients were the norm - that is, 20-30 years ago - but today we all have dedicated graphics display devices (GPUs, monitors), even in our pocket smartphones, and the way X11 works is holding the Linux desktop scene back.
But without X11 around, you can't show how much of an improvement Wayland is. We shall not forget X11.
Counterpoint: while I would've jumped ship from X to Wayland or anything else 15 years ago because I kept having to mess with my Xorg.conf every other week to resolve some breakage or tweak anything, I haven't really had any significant issue with Xorg in the past 5 years at the very least. It just works for me.
I think X11 not seeing a lot of development doesn't necessarily mean that it's abandonware, it's probably more that people like me who still use it feel like it's effectively feature-complete.
And I won't take the word of some graphics hardware vendor that it is abandoned. Over the years they've always done the bare minimum to support the Linux desktop, so of course they'll take the first opportunity to claim that X is "abandonware" so that they have a plausible excuse for dropping support.
Where is that supposed Wayland progress after a decade? Input is broken and inconsistent, screenshots don't work, remoting is broken, every WM has to be rewritten or abandoned, trivialities like c&p handling are not yet there. Wayland is still in the early phase of catching up to X11, for any progress it will take another decade or maybe even a Wayland replacement. We should face it, Wayland is a dead end.
It’s this kind of endless stream of broken and incompatible technologies that makes me use Linux in server and embedded scenarios only. Every time I’ve had to use desktop Linux it’s just been one problem after another.
Not sure what you're trying to use it for, but I've been on Mint for the past 4 years and I've had no problem with it, other than the fact that every time I turn on the computer I have to run a shell script that fixes my resolution. I'm using it for development. I also have a personal Mac, my work PC is Windows, and I can say the Linux box is miles ahead of both of them. The Mac and Windows computers were $1500 and $700 respectively, while the one that runs Mint cost me $300. Linux is #1 for me for development just because of how fast and non-intrusive it is: no resources spent on user tracking, no unwanted updates shoved down your throat, and so on.
> I've had no problem with it, other than the fact that every time I turn on the computer I have to run a shell script that fixes my resolution
I can’t tell if this post is a parody or not. The fact that it’s 2020 and minor annoyances like this are still fairly common in desktop Linux is telling.
macOS SSHing into a Linux VM (either locally hosted or on the cloud) is the sweet spot for me.
I have far more issues with my OSX laptop than having to run a script once every few months (presumably that script is called automatically)
I've just had another popup from OSX wanting a password for google, sophos pops up saying it's upset a fair bit, on occasion the entire machine just hangs, and wireguard doesn't set my search domain. There are other niggles but those are the ones that have affected me in the last 30 minutes.
On the other hand the biggest hassle from my desktop is ssh connections time out if I suspend the machine overnight.
(I've used ubuntu LTS on the desktop since 2006, before then it was debian testing since 1999)
Following is just an anecdote, I am aware that I may not be a representative sample, so read accordingly.
My current desktop PC has been on Debian (mostly Stable, sometimes Testing) for about fifteen years now, and apart from some minor bug here and there, everything works. Including gaming (Steam, as well as some standalone games), work, software development, multimedia.
From where I'm standing, I find Linux desktop much less bothersome than Windows or Mac these days. Every other week, there is an outcry about some new Bad Thing that Apple or Microsoft has done to their OS and half the tech community is up in arms about how all of their workflows are broken.
Or random Twitch streamers often having to fight against Windows more often than I thought reasonable, in order to get their streaming setup back under control.
Or work colleagues annoyed every other month about some VPN app not playing nice with Windows TCP/IP stack and locking them out of company network until they reboot.
Meanwhile, I'm in my little Linux corner, quietly doing my thing and not really having to fix anything other than mistakes I make, and bugs I cause.
And Debian is ironically hard mode! Packages are so outdated and Debian is so unfriendly and clunky. Like a worse Ubuntu.
IME, Manjaro/Arch unironically provide a better, less buggy experience. Maybe the bugfixes come in faster than the bugs and the devs pay most of their attention to current versions.
And more anecdata: I wouldn't say scripts to fix your resolution are "common" pains on Linux. (Though crashes on Cinnamon are ;p -- stick to KDE or GNOME if you want polish.) The closest I've come to that lately was having to reset the sound daemon due to a Manjaro bug, but that's the only thing in two years I've had to do. Meanwhile, on Windows, the internet dies when I turn my VPN off (the same VPN I use on Linux, at that). And, for that, the scriptable solution's more elusive. "Reinstall and pray" is the only way to go.
> I've been on Mint for the past 4 years and I've had no problem with it, other than the fact that every time I turn on the computer I have to run a shell script that fixes my resolution.
That's funny. I was on Linux Mint (Windows PC with dual boot) for a few years, then I bought a Dell XPS13 with Ubuntu installed from the factory. I really wanted to like the XPS13, but both the hardware and the OS were just so poor compared to my Mac (which I used at work) that after a few broken things (the power plug broke, the fan was very noisy when I was coding in an IDE, the trackpad was not nearly as advanced as the Macs', shortcuts broke when I upgraded to Ubuntu 19 and then to 20, language switching suddenly started taking 2 seconds for no reason, etc. etc. etc. - I hope you get the point) I decided to finally hit the bank and get a little MacBook Air... what a life-changing experience: even though the specs of the MacBook Air are a lot lower than the XPS13's, it's just an incredibly superior UX. No fan noise even when getting the most out of my IDE... the trackpad is awesome... even the keyboard is excellent (after the fiasco of the previous Macs, they did get it right), quite superior to the XPS13's. The OS itself is just much prettier in all aspects. I feel a small amount of delay sometimes when putting pressure on the processor, but that's still not something I would call remotely annoying (as opposed to the incredibly annoying Linux UX).
As much as I don't like using Apple stuff due to price and their closed-garden policies, I just can't pass on the superior UX.
Even though my Windows and Linux machines are still available in my closet, I just never had the desire to touch them again since I got the Mac. Unfortunately!
Wayland's design is even more broken and leads to common functionality being duplicated and broken all over the place. Each app has to do input handling on its own? Come on, a 5-year-old could tell you that that is a huge design flaw. There are equally problematic design flaws in X11, just fewer of them and in different areas. Where Wayland tries to get rendering right and botches all the rest, X11 is weird for rendering, but at least has kind-of-OK answers for remoting, input, clipboard, screenshots, etc.
There is no reason why window managers can't share functionality via libraries. In fact, some do (wlroots).
X11 and its separation of graphic server and window manager encouraged code reuse by placing it in the server, but with Wayland that separation (and the extra context switches) are gone so there is less incentive to share code.
Which would be fine if we had something like COM to enable portable interfacing between libraries and a certain level of separation, especially when a library plonks something that breaks your runtime by messing with global resources (for example, threads and signals).
But we don't, and the libraries push other issues into your design, as you are often forced to follow their specific idiosyncrasies.
Not sure what you're referring to with regard to screenshots, as they're working fine for me. I've been using Wayland for a few months now (wayfire, which I switched to from bspwm) and overall it's seemed like a huge improvement in terms of smoothness, and I have yet to run into any issues. Input seems to work fine, even with things like multitouch gestures. I've never tried remoting into my machine graphically, but there seem to be working VNC servers for Wayland.
While technically correct, this misses the point. Very few features are part of the core Wayland protocol; off the top of my head, there's input handling, a few ways to describe shared memory with the compositor, and some callbacks to handle device registration. That's it.
For example, top level windows and popups themselves are an extension in Wayland (xdg_shell protocol rather than the defunct wl_shell), and so is the rather basic feature of compositing on the GPU (dma_buf) rather than going through some shared CPU memory.
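To make that concrete: even "give me a plain window" goes through the xdg_shell extension rather than core. A sketch, assuming compositor (wl_compositor) and wm_base (xdg_wm_base) were already bound from the registry, with the configure/ack handshake elided:

    #include <wayland-client.h>
    #include "xdg-shell-client-protocol.h"

    static struct xdg_toplevel *make_toplevel(struct wl_compositor *compositor,
                                              struct xdg_wm_base *wm_base) {
        struct wl_surface *surface = wl_compositor_create_surface(compositor);
        struct xdg_surface *xs = xdg_wm_base_get_xdg_surface(wm_base, surface);
        struct xdg_toplevel *top = xdg_surface_get_toplevel(xs);
        xdg_toplevel_set_title(top, "hello");
        wl_surface_commit(surface);  /* triggers the first configure event */
        return top;
    }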
> Very few features are part of the core Wayland protocol;
but that is the main critique. Most things not being part of the core means there is a lot more fragmentation of the Linux desktop than there was with X, which is unequivocally a bad thing.
The fact that the system provides a choice is exactly what leads to fragmentation (which is the main problem).
Saying "people could just do / not do X" absolutely never, ever works: not in politics, not in programming, not in "not being an asshole to each other", not in "not using firearms", etc. Things have to be enforced and inescapable at some point if we want sanity.
I’m not entirely sure that I understand your point. Here are the facts which I think we can agree on.
* X11 is a protocol
* Wayland is a protocol
* X11 and Wayland are not compatible protocols
* Wayland protocols are all public
* XOrg is an implementation of the server side of the X11 protocol
* wlroots is a library used for creating Wayland compositors
From this, it follows that:
* Anyone can theoretically write another X11 server which implements a subset of the functionality
* Anyone can write a Wayland compositor which implements a subset of the functionality
I really don’t understand where this supposed extra fragmentation is coming from — unless your objection is that we have more than one Wayland compositor? I don’t see that as particularly bad; in the same way I don’t see having GNOME, i3 and XFCE existing as necessarily problematic.
The wayland protocol (with the very few standard common extensions) does far less than the X11 protocol. There are extensions to Wayland that add missing functionality, but those are compositor-specific. Meaning effectively that each compositor has its own incompatible variant of the Wayland protocol. In that sense, it is not one Wayland protocol but a whole cesspool of them...
It will also get worse, because the architecture of wayland forces an implementer of a compositor (which replaces an X11 window manager) to implement a lot of the display functionality all over again. Wayland itself is just a lib that helps a little with it. In X11 terms, just imagine every window manager developer doing development against their own fork of X.org or a reimplementation of it. Wayland is designed in such a way that it causes incompatibility and fragmentation.
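To put "just a lib" in perspective: the scaffolding libwayland-server hands a compositor author is essentially a socket plus object dispatch. The sketch below is a buildable "server" that advertises wl_compositor without implementing anything; rendering, input and window management would all still be on the author.

    #include <stdio.h>
    #include <wayland-server.h>

    /* called when a client binds wl_compositor; a real compositor would
     * create a resource here and implement the interface's requests */
    static void bind_compositor(struct wl_client *client, void *data,
                                uint32_t version, uint32_t id) {
        (void)client; (void)data; (void)version; (void)id;
    }

    int main(void) {
        struct wl_display *display = wl_display_create();
        const char *socket = wl_display_add_socket_auto(display);
        if (socket == NULL)
            return 1;
        /* advertise wl_compositor -- nothing is actually implemented */
        wl_global_create(display, &wl_compositor_interface, 4,
                         NULL, bind_compositor);
        printf("listening on %s\n", socket);
        wl_display_run(display);  /* rendering, input, windows: all on you */
        return 0;
    }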
> There are extensions to Wayland that add missing functionality, but those are compositor-specific. Meaning effectively that each compositor has its own incompatible variant of the Wayland protocol.
Well no — this is where I disagree. The extensions are standard and hosted within the wayland-protocols repository [1]. A compositor may support its own proprietary protocol, after all it’s just an XML file, but practically speaking, without distributing it through the repository you won’t get any clients to actually use it. It is wrong to suggest that there is a huge proliferation of interfaces.
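For reference, a protocol definition in that repository is nothing more exotic than this. The snippet below is schematic and the protocol and interface names are made up:

    <protocol name="hyp_screenshot_v1">
      <interface name="hyp_screenshot_manager_v1" version="1">
        <!-- ask the compositor to copy an output's contents into a buffer -->
        <request name="capture_output">
          <arg name="output" type="object" interface="wl_output"/>
          <arg name="buffer" type="object" interface="wl_buffer"/>
        </request>
        <!-- signalled once the copy has completed -->
        <event name="ready"/>
      </interface>
    </protocol>

wayland-scanner generates the C glue for both the client and the compositor side from a file like this.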
It is true that the core Wayland protocols support less functionality than X11. It’s also true that developers implementing window managers need to do more work — however, again I point you towards wlroots, libinput etc. as examples showing that you don’t need to implement anything from scratch unless you want to.
I don’t deny that this particular design has trade-offs, but to pretend that it has no benefits is also incorrect. The fact that there are fewer core protocols means that you can implement a simpler compositor if you should so wish. There are comments in this thread pointing to the usage of Wayland in different devices as an application of this.
If you go to r/unixporn there are a ton of custom X11 WMs with a very, very small userbase, sometimes a dozen individuals. Screenshotting works with all of them.
Yes, that's the issue: writing a different desktop metaphor like the various tiling WMs used to take as little as a couple hundred lines of C (see the sketch below). Now with Wayland, someone who wants to write their own desktop environment has to rewrite much more to get to something that doesn't even provide half of what Xorg gives.
Also, compositors are not mandatory anyway on X (I don't use one personally and prefer it like that), so it's a weird remark to make.
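For a sense of scale, the classic tinywm really is on that order of magnitude. A stripped-down variant of the same idea, alt+drag to move windows, fits in a few dozen lines of Xlib (a sketch in the spirit of tinywm; compile with -lX11):

    #include <X11/Xlib.h>

    int main(void) {
        Display *d = XOpenDisplay(NULL);
        if (d == NULL)
            return 1;
        /* alt+button1 anywhere starts dragging the window under the pointer */
        XGrabButton(d, 1, Mod1Mask, DefaultRootWindow(d), True,
                    ButtonPressMask | ButtonReleaseMask | PointerMotionMask,
                    GrabModeAsync, GrabModeAsync, None, None);
        Window target = None;
        int start_x = 0, start_y = 0, win_x = 0, win_y = 0;
        for (;;) {
            XEvent ev;
            XNextEvent(d, &ev);
            if (ev.type == ButtonPress && ev.xbutton.subwindow != None) {
                XWindowAttributes attr;
                target = ev.xbutton.subwindow;
                XGetWindowAttributes(d, target, &attr);
                start_x = ev.xbutton.x_root;
                start_y = ev.xbutton.y_root;
                win_x = attr.x;
                win_y = attr.y;
            } else if (ev.type == MotionNotify && target != None) {
                XMoveWindow(d, target,
                            win_x + ev.xmotion.x_root - start_x,
                            win_y + ev.xmotion.y_root - start_y);
            } else if (ev.type == ButtonRelease) {
                target = None;
            }
        }
    }

The X server does the rendering, input routing and screen capture; the "window manager" only pushes windows around. Under Wayland those server-side services are the compositor author's problem.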
Wayland is a protocol, like X. X.org is an implementation, like a compositor.
Screenshots require giving an application access to the whole screen. The core X protocol doesn't provide any privilege separation there (any client can read the screen), though some implementations might restrict it.
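That's the crux: under core X11, screen capture isn't a privileged operation at all. Any connected client can read the root window with plain Xlib (a sketch; compile with -lX11):

    #include <stdio.h>
    #include <X11/Xlib.h>
    #include <X11/Xutil.h>

    int main(void) {
        Display *dpy = XOpenDisplay(NULL);
        if (dpy == NULL)
            return 1;
        Window root = DefaultRootWindow(dpy);
        XWindowAttributes attr;
        XGetWindowAttributes(dpy, root, &attr);
        /* no permission check: the server hands the pixels to any client */
        XImage *img = XGetImage(dpy, root, 0, 0, attr.width, attr.height,
                                AllPlanes, ZPixmap);
        if (img == NULL)
            return 1;
        printf("grabbed %dx%d screenshot, %d bits per pixel\n",
               attr.width, attr.height, img->bits_per_pixel);
        XDestroyImage(img);
        XCloseDisplay(dpy);
        return 0;
    }

Convenient for screenshot tools, but it's also exactly the class of behaviour (any app spying on any other) that Wayland's design deliberately forbids without an explicit protocol.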
> The C/S architecture of X11 hits the spot when terminals and thin client are the norm, that means 20-30 years ago, but today, we all have dedicated graphics display devices (GPU, monitors) even in our pocket smartphone, and the way X11 works is holding Linux desktop scene back.
I used X11 forwarding just yesterday to open & control my linux desktop's music player from my mac - which other 2020 technology allows me to just run
$ ssh -Y my_desktop
> my_music_player&
and be able to do that without lag (scrolling through the list views was much more fluid than my experiences with e.g. RDP or VNC, even though it's a Qt 5 app, strawberry, which likely does most of its drawing client-side) or blurry JPEG-compressed pixmaps, and with the ability to resize, minimize, etc. this individual window without any issue?
What about, e.g., being able to run GUIs like PyCharm/CLion on Linux in corporate environments where you only have direct access to a Windows box?
I'm all for Wayland - I agree with its direction and focus.
However, Wayland will take a long time to reach the maturity that X.org has had for decades, including remote support.
As a result, having both X.org and Wayland available on your distro is probably going to be the norm (and should be the norm) for a while. X.org is less "abandonware" and more "transitionware" in my opinion.
> Wayland will take a long time to reach the maturity that X.org has had for decades
Xorg (the server) is from 2004 (16 years ago), the X Consortium was founded in 1988, and Wikipedia puts X11 itself at 1984 (36 years ago). Wayland's initial release was 2008 (12 years ago). Wayland is either 3/4ths the age of Xorg, or 1/3rd the age of X11 - bluntly, if they don't have their act together now, why should I expect them to ever get it together?
Developers willing/able to work on X11/Wayland plumbing are a (very) finite number. A single developer can have a large impact. With more of them switching their primary efforts to Wayland, the past few years have seen large improvements in the landscape, and this trend is bound to continue.
Granted, there are rough edges, and I wouldn't claim any Wayland compositor is as polished as an X11 one -- but we're not that far off, and for many people the benefits of running a Wayland session today outweigh the cons.
The thing is, that's also been true since at least 2015 - the claim is that Xorg is dead and all the devs who were working on it are on Wayland now and it'll be working any day now. And that's been the claim for at least the 5 years that I've been following it. And in fairness, they have made progress in that time - by the time Wayland is as old as Xorg, it might even reach feature parity!
To make the comparison closer to apples-to-apples, the Wayland analog to the Xorg server would be something like GNOME's mutter compositor, which had its first Wayland support out in 2013[1] -- 7 years ago. And the rate of progress has only sped up since then -- take a Wayland compositor from a year ago and compare it to the same one today, and things tend to be much more polished.
> Then again, that coming from an Intel Linux developer isn't too surprising considering it's been more than six years since the last xf86-video-intel DDX release
I don't quite follow? modesetting was supposed to replace xf86-video-intel, so it shouldn't be surprising if the latter isn't getting updated.
If the maintainers do not want to maintain it anymore... why not just fork it? I've heard people say that it is big and complex, but some time ago I downloaded the code of the X server itself and it didn't seem that big (I've worked in much bigger codebases myself).
Ah, I read "fork" as in the ffmpeg-libav case; that is not what you meant. I would say "contribute".
I am surprised by the critique too. Anyone can contribute to X.Org [1]; maybe there are no stable releases, but it works, has active contributors [2] and recent commits [3] [4].
Wayland lives too: ten years ago it was a demo, now it has a lot of compositors [5], with wlroots shared among many projects.
I dunno. I use Fedora, and do a clean install of Fedora every new release. That's often. I try Wayland every time. And every time, I end up implementing several workarounds because things don't work, and when I hit 5 workarounds (the magic number), I disable Wayland with a script I wrote that kills it at multiple levels, so it doesn't even rear its ugly head in gdm. Then it's back to Xorg, and no more workarounds; everything works. I support several thousand servers. I only have so much time to dick around in Wayland.
The fact that the Windows desktop works reliably with basically zero issues and the Linux desktop is a complete mess is quite a compelling reason to use Windows surely? I can't see Microsoft giving away that advantage, even if it were technically possible, which it probably isn't.
I tried Wayland last year. It was nowhere near ready for prime time. Perceptibly slower, and I had video tearing when watching video in the browser. This is with bog standard Intel iGPU in a laptop a couple years old.
In its present state it can't beat "abandonware" I'm afraid. X.org works. Wayland does not. And that's all there is to it at the moment.
What's the tiling WM landscape like on Wayland these days? I know of Sway (i3-like), but is that the best game in town? I started with i3 but now love StumpWM (Common Lisp) and XMonad (Haskell), and can't imagine being without something very similar.
Rather, I believe the X.org server is really stable. I think it's at a stage where it can be left without updates for years. For me it's stable, works fine and as expected. I don't mind long gaps between updates for stable systems.
The long term plan was to abandon X.org and move to Wayland.
Of course Wayland is still not there, but X.org is mature and stable enough to keep users happy for the time being, until the whole ecosystem catches up with Wayland.
As a matter of fact, abandoning X.org (except for security patches) would be a good strategy to incentivize the ecosystem not to build on top of it anymore.
Maybe X.org should do what request.js and moment.js did and call it done at some point.
'[...] but a prominent Intel open-source developer has conceded that the X.Org Server is pretty much "abandonware" with Wayland being the future.'
So? A "prominent US politician" recently conceded that global warming is a hoax by the Chinese, with coal being the future. Should I be doubling over myself to dismantle my solar panels?
Very weasel-wordy article if you ask me.
X11 is fine.
It's fine if you're stuck in time. No more releases, no changes, no fixes. It's great if you seek stability and don't want to upgrade anything for the rest of your life.
I was just thinking that, if X.org truly were abandonware (I have no basis to accept or reject the opinion reported in this article), OpenBSD becomes a bit of a haven.
My understanding is that Wayland does address some security issues in X11, but that Xenocara (OpenBSD’s branch of Xorg) also attempts to address the security of X11 in a way that integrates with the rest of OpenBSD’s security mitigations.
OpenBSD is a great example of actively developed software that has exceptionally good taste when it comes to change. It’s not that OpenBSD never changes; it’s that OpenBSD only makes changes that feel organic 20 seconds after you experience them.
That's not correct - OpenBSD is being actively maintained. Although they're using a lot of old, stable and proven software, they're also doing security and bug fixes and even creating new functionality.
So x.org is abandoned, Wayland does not work/is not mature, Linux for desktop is dead? Adopting the upper-half of Android AOSP UI would be the escape hatch?
Most of X.Org is horrible and the entire system is unmaintainable. It is hard to even get maintenance releases for it released, as the article alludes to.
Development hasn't stalled because people think X is good. Development has stalled because the fundamental design is so misaligned from the modern graphics stack that improvements are not worth attempting.
The thing is, X.Org server is not X11 the protocol.
X11 could easily be implemented on a new driver framework supporting the new graphics stack. Instead we got a piece of sh*t that is fitting for a custom embedded device or some closed environment, but not for an X11 replacement.
Nobody in particular wants to use the X11 protocol. It is complicated and supports a bunch of weird functionality that is rather niche. The trend seems to be to use OpenGL and call that done.
I haven't seen many serious complaints that Wayland doesn't support the X11 protocol since people can run an X server directly using XWayland.
As I mentioned elsewhere, XWayland isn't actually compatible if you want to support modern applications, for example ones that might expect a systray. ICCCM is, IIRC, broken (I haven't spent time checking after finding out some apps critical to me didn't work at all).
X11 could be updated a lot. But some of the "weird functionality" is stuff that is slowly becoming available to normal people and was thrown out with the bathwater by GTK3, like the ability to use more than 8 bits per colour channel.
> As I mentioned elsewhere, XWayland isn't actually compatible if you want to support modern applications
Yeah, Wayland is a bad protocol. It isn't flexible enough to do what X11 does. But if it was a good protocol, and capable of implementing X11, then it would get X11 for free through XWayland.
They don't need to reimplement X11. They can use X.org or whatever for that.
My point is that XWayland, for various reasons, does not replicate X11 fully. So yeah, I can open something like Xnest/Xephyr. At that point, the utility of Wayland drops ridiculously hard from my point of view.
It is theoretical until you've done it. People who worked on XOrg decided to abandon it and create Wayland. X11 is not good enough for them but is good for you...
Wayland is not just focused on performance but on addressing the security flaws of X11.
Those benchmarks were benchmarks of GNOME 3.36; they don't represent "Wayland" but GNOME 3.36's implementation of it.
If your concern is performance, it's still pretty much one of the points of focus. GNOME 3.38 already brought some good improvements there, and there are still a few things to come in the area.
The best Linux desktop environment is Windows 10 (with WSL2)
That's why some journalists have already called 2020 "Year of Linux On Desktop".
Let's be honest, Linux has a good (but not great) kernel, good-to-great apps on the server side, and a crappy UI side (Wayland, window managers, desktop environments).
The best way to interact with Linux is through an API or the command line. Leave the UI stuff to more competent folks (Microsoft or Apple).
Please don't take HN threads into flamewar. However wrong others may be, it's not ok to break the site guidelines by attacking them personally or impugning bad faith. If you have evidence of abuse, that's different—but the bar for 'evidence' obviously has to be higher than other commenters simply holding a different view.
Believe me I understand how frustrating it is when the community or a large subset of commenters seem to be repeatedly and perversely wrong about something. But that is the internet doing its thing. We all run into it on some topic that we know a lot about and/or feel strongly about. It can't be stopped or fixed. All you can do is share some of what you know, and yes, it's Sisyphean because the whole thing is stateless and has to be repeated every time. It won't stop because you ask it to or want it to; "it" is impersonal and doesn't have consciousness to begin with. One needs to accept that for one's own sanity (I hope it's clear that I'm talking from personal experience about this) and then patiently supply corrective information wherever you can and have the energy. Telling people to "fucking knock it off" (https://news.ycombinator.com/item?id=24889242) is only going to hurt both yourself and your cause. We also can't allow it in comments here for obvious reasons (https://news.ycombinator.com/newsguidelines.html) and we've been pleading with you for years already not to do it.
> we've been pleading with you for years already not to do it.
The same years during which you've done nothing to prevent the spread of seditious misinformation on Hacker News. If you were better moderators I might take your guidelines more seriously.
My thread with counter-arguments is detached and languishing at the bottom of the page, meanwhile all of the misinformation dominates the conversation comfortably from the top. This is a gross failure of the moderation on HN. Are you a human script, enforcing the guidelines with blinders on? Or are you a moderator, helping to craft thoughtful and good faith discussions, and to combat misinformation and propaganda?
Detaching flamewars so that they languish at the bottom of the page is standard HN moderation and the reason we do it should be obvious. If you don't want that to happen, nothing is easier to avoid (from a moderation point of view): simply provide corrective information respectfully. Of course that is not so easy from a personal frustration point of view—that is something every HN user (certainly including me) has to work at.
I definitely don't want to penalize your counterarguments, but if you can't or won't decouple them from guidelines-breakage, what choice do we have? We do just the same with users and threads that are arguing the opposite.
There are two very different issues here: (1) Wayland; (2) protecting the commons. Important as the first one is, the second has to take precedence because it affects every topic, every thread, and the survival of the community.
The commons are full of propaganda and misinformation at the hands of your policies to "protect" them. Your guidelines are not holy writ, and are unfit for this task.
If you had responded with rebuttals to the misinformation in the article and/or comment thread, plus a link to your previous article, without the snarky tone, your comment thread probably wouldn't be languishing at the bottom of the page. It's not enough to be right; tone matters too.
So, there are plugins somewhere that support all kinds of basic functionality. I can already see how nice it will be to keep a system like this up to date.
Do the developers expect to incorporate those plugins into the main code at some point?
Wayland is both a code base and a protocol unless you are being unreasonably technical about terminology (distinguishing between wayland and libwayland) to the point where you are deceiving casual readers.
It is, for all practical purposes, not possible to implement a useful wayland compositor without relying on libwayland, because mesa links to libwayland and expects to be passed pointers to data structures defined in libwayland.
I do believe that the parent comment was unfair; it is possible to implement a compositor without libwayland — but I do think there is some truth in the C structures being the de-facto protocol from the client side. For example, graphics drivers seem to expect wl_display and wl_surface [1] rather than, say, expecting the object id. It’s not as though this is different under X, though, so it isn’t a criticism of Wayland.
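For example (a sketch, assuming the usual wayland-egl path in mesa): EGL's Wayland platform takes the libwayland structs directly, so the driver ends up dereferencing real wl_display/wl_surface pointers rather than speaking the wire protocol.

    #include <wayland-client.h>
    #include <wayland-egl.h>
    #include <EGL/egl.h>

    /* 'surface' is a wl_surface obtained earlier from wl_compositor */
    static EGLSurface make_egl_surface(struct wl_display *dpy,
                                       struct wl_surface *surface) {
        /* mesa dereferences these libwayland structs internally; handing it
         * raw object ids from a hand-rolled wire implementation won't work */
        EGLDisplay egl_dpy = eglGetDisplay((EGLNativeDisplayType)dpy);
        eglInitialize(egl_dpy, NULL, NULL);

        static const EGLint attrs[] = { EGL_SURFACE_TYPE, EGL_WINDOW_BIT, EGL_NONE };
        EGLConfig config;
        EGLint n;
        eglChooseConfig(egl_dpy, attrs, &config, 1, &n);

        struct wl_egl_window *native = wl_egl_window_create(surface, 640, 480);
        return eglCreateWindowSurface(egl_dpy, config,
                                      (EGLNativeWindowType)native, NULL);
    }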
Indeed, my comment is merely observing that wayland is both a protocol and a code base for all practical purposes, I'm trying to leave the criticism to other people ;)
Maybe a more concrete example will answer your question. The Wayland protocols are found here [1]. Your compositor will implement some of these protocols and not others. If I’m writing an application I will, possibly through some toolkit, ask the compositor what features it supports and configure how my application works accordingly.
This isn’t particularly different from, say, a toolkit not supporting a particular form of input, such as GLEW not supporting tablet devices, or certain GPUs having more extension methods — which you can see if you look at the Khronos specification. Or, as an even more drastic example, an application having both a terminal display and an X11 display, like Emacs.
So, it's quite possible that one gets a distro or a machine set up so everything works locally, but you cannot tunnel a GUI application or run a remote desktop there? Or that you get a machine where you simply cannot take a screenshot to report a bug?
So... I was one of those people rooting for anybody to replace X11 when Wayland launched. But given the current situation, I'm just hoping that Red Hat finishes dying off before I have to work with somebody stupid enough to set up some GUI system that I need to access, but that only runs on a computer where I cannot run a GUI.
Well yes — in the same way that you could have had a half-baked implementation of X on a remote desktop; but it’s unlikely, because most GUI-based interfaces will bundle some level of support. For example, I don’t expect Canonical’s Wayland compositor not to support screenshots; so while I don’t think your worry is unfounded, I do think it’s disproportionate.
Yes, they're optional, and no, they're not universally adopted, but this doesn't damn Wayland. An example of where the screenshot protocol isn't supported (and this is fine) is where Wayland is used as the driver for the dashboard display in a vehicle (which is one of the major places where Wayland adoption is strong in industry). Wayland is designed to accommodate a broader variety of use-cases than X11: it's not just for desktop systems. That's why these protocols are optional and separate from the core Wayland protocol: it gives us greater flexibility, by design.
Among desktop systems, GNOME is really the only one who maintains a concrete objection to these protocols. KDE supports most of the wlroots-sponsored protocols in theory, and a handful in practice - patches welcome for the rest. The remainder of the major Wayland implementations for desktops, and many for mobile, support most or all of the necessary protocols.
I am really getting tired of explaining this stuff, over and over and over again. Can we please just stop spreading FUD for technologies that we don't understand? I'm just so sick of it.
Why do people do this? What can be done to stop it? Obviously nothing I've done so far has been working. This feels like talking to conservatives about climate change.
From the linux-on-the-desktop perspective, I have the impression there are 2 major groups today: Red Hat, going all-in on both GNOME and Wayland, and most others on X11. So GNOME having an objection to these protocols basically means Wayland does not have them. All the other Wayland implementations together are mostly background noise.
Stated from a programmer's perspective: what can I count on being available? What is the baseline?
Desktop Linux is a mess compared to Windows or OSX, with GNOME/KDE as major frameworks and a ton of minor but still relevant ones. The one thing they have in common, the one thing that makes GUI applications work more or less together, is X11. Wayland causes a split here: yet another painful technology reset that will probably cost us a decade before everyone has migrated. Now if Wayland itself is fractured between its major player and everybody else, there is a 3-way split.
It seems X11 will die, GNOME as the 800-pound gorilla will dictate the technological baseline, and end users will lose basic functionality after suffering through Wayland's maturation.
I can understand your frustration. You've probably built something great, X11 seems a dead end, and I presume you can't do much about GNOME. But I'll either have to live with this mess or run back to Windows. I've dealt with pulseaudio and systemd, and both were arrogant low-quality projects that took years to stabilize back to their original levels. I can live without beeps on my desktop for a week, or with the occasional service weirding out. I can't live without a usable UI.
Maybe you deal with climate change conservatives by demonstrating they still can get their groceries without their CO2 spewing SUV.
The main code of what? Wayland doesn't have a codebase. There are protocols (just like X11), and different compositors support different extensions. The point is that it's much more modular this way.
GNOME is not a good representative of Wayland. Wayland is just a protocol - it's up to the compositors to have good performance to distinguish it from Xorg, and GNOME does not do well in this regard. Other compositors, particularly wlroots, enjoy excellent performance.
Wayland also opens the door to many performance improvements which are not possible on Xorg, and which take advantage of newer GPU features, especially on embedded systems but moreso every year on desktop and laptop GPUs as well.
Drew, if someone posted a comment saying "there's this Linux feature that's incredibly slow on x86-64", it wouldn't be a refutation to say "it's fast on RISC-V". That would rightfully just produce a response of "that's nice for you, it's still slow on x86-64 and that's what I'm using, so from my perspective the feature is slow in an environment many people use".
If you want this to stop, make GNOME's performance better in the ways you envision, because it is the experience most Linux users get. (Quite a lot of work has been going into GNOME Wayland performance lately.) Arguments about Wayland protocol feature politics don't necessarily make the out-of-the-box experience worse; for many people it's a good "Just Works" experience. Stop telling people they need to switch desktop environments to get a better experience; some fraction of people will go "if I have to switch, I'll switch back to Windows/macOS/etc". A vanishingly small fraction of people will go "oh, sure, I should switch to a different window manager and environment that isn't what I'm used to and doesn't necessarily have the integration I'm used to, and switch my apps to match too; this makes me happy", and those people know who they are already.
"there's this Linux feature that's incredibly slow on x86-64"
This isn't the appropriate comparison. What they said is "x86-64 is incredibly slow", and then when you teased them for details, what they meant was "this Linux feature is incredibly slow on x86-64". To which the answer isn't "it's fast on RISC-V", but rather, "that's a problem with Linux, not x86-64".
If you have a beef with GNOME, then bring it to GNOME. Don't pin it on a tangentially related technology which bears none of the fault, and which has had hundreds of thousands of hours of work invested in it by volunteers, all to make something nice for you to use.
Is your goal to fix people's terminology or to fix the actual problem?
It's not an end user's job to tease apart what specific thing is the root cause. If something changes, and their system feels slower, they're going to reasonably assume the thing that changed is at fault. That's doubly true if there's an easy switch to turn that thing on and off (which there often is, by picking a Wayland or non-Wayland session at login), and they can easily evaluate the difference in isolation. It might be cathartic to spend time yelling at people about who is actually at fault, but fixing the root cause would make there not be a fault to seek blame for.
People aren't going to stop running GNOME en-masse. Distributions are not going to abruptly abandon GNOME. If (and I do mean "if") there's some issue with GNOME's Wayland implementation, that's going to be many people's primary exposure to Wayland as a technology.
People will continue working on optimizations, to many places in the stack. It doesn't matter where the fault lies or where the fixes need to happen, the net result is people saying things like "I switched to Wayland and things got slower / less smooth / etc", and they're going to continue saying things like that. It'd be nice if people phrased it more that way (slowness associated with switching to Wayland, rather than Wayland being slow), and provided more details about their environment, rather than implying that "Wayland" is a single piece of software which should incur their ire. It'd be even nicer if there were less ire to go around because more things Just Work.
It's also entirely possible that some of the people in these various threads have issues with some other piece of software in the stack.
> If you have a beef with GNOME, then bring it to GNOME.
I'm not the one with a beef with GNOME; you seem to be. If you have a problem with GNOME, take it to GNOME. GNOME works great for me, and I don't care which Wayland protocols it does or doesn't choose to implement. You haven't even specified what precise change you think ought to happen there, just some general complaints about protocol extensions.
Regardless of whether GNOME "does well" with regard to performance, you must admit that for some of us, choosing a competitor like wlroots over GNOME introduces a whole new adventure in replacing all the features of GNOME with standalone applications to use with wlroots. Plus a tiling window manager is a totally different workflow.
Not everyone has the time to curate a desktop environment out of individual utilities, and not everyone likes tiling WMs. i3 gave me an RSI.
It's not a particularly arduous "adventure". We maintain a list[0] of programs which use these protocols. And wlroots != tiling window manager: there are multiple wlroots-based compositors which do not use the tiling paradigm, the most developed of which is probably Wayfire[1].
Apologies; I misunderstood one of your links and thought sway had been renamed to wlroots.
To be clear, the "adventure" I'm talking about is replacing desktop environment niceties like the ones provided by gnome-settings-daemon, and the tight integration between devices and the system management tools offered in full-blown desktop environments like GNOME and KDE. I used i3 for six or seven years and it was never as nicely integrated as GNOME, and I spent a LOT of time yak shaving to get it nice and keep it that way. Eventually I gave up. GNOME is really nice these days.
But maybe I missed the point of your original comment.
Gnome hasn't provided a good desktop experience since 2011. By the time they figure out how to stop sucking insofar as Wayland performance they will surely have discerned new ways to err.
There are other environments that have worked well for a decade and will work well for the next decade.
Which embedded systems actually run GLES on Linux stably enough to do anything with Wayland? My experience is the choices are:
* Use a binary blob that only works with a 3-year-old vendor kernel, and works most of the time (but you can't fix it when it doesn't work, and if you are compositing, most of the time isn't good enough)
* Use a mainline Linux kernel that is so buggy that fixing the bugs in it is a full-time job
I am not aware of any up-to-date benchmarks which make a fair comparison (i.e. not just benchmarking GNOME), but I am intimately familiar with the technology and its performance characteristics.
It's not a fair comparison. You're benchmarking GNOME, not Wayland, and making generalizations about Wayland based on GNOME benchmarks is a false equivalency.
Is GNOME on Wayland worse than GNOME on X11? Perhaps. Is Wayland worse than X11, based on that answer? No.
Protocol limits what is possible. Legacy defines performance. GNOME is X11-first. wlroots is Wayland-first. It should be possible to optimize GNOME.
Wayland was created because of horrible X11 performance [1]. It is not Wayland's prime time yet, but X.Org still works and is maintained. Phoronix.com should have checked contributions [2].
Most of the pragmatic solutions were built by contributing to existing things, not starting over.
Think RISC vs. CISC. It isn't that there are no points to be gained from the alternatives. It is that leaving the past behind is not necessarily the best way to make progress. And even when enough time passes that the alternative gains ground, it often ends up looking more like what it was replacing, not less.
You clearly have not watched the video by the X.Org developer, have you? I expect you are better informed, have worked a lot with the X.Org codebase and can show some links to your commits.
Or, if you truly believe there is nothing wrong with X.Org, you could become a maintainer.
> My (often incorrect) views and opinions are my own and not those of anyone I currently or have ever worked for. Please help me make them more informed (and hopefully more correct) whenever you can!
Goal shift much? My point here has been that GNOME is the benchmark that matters most. That one side seems bent on ignoring that is baffling to me.
Sadly, for the most part I have been discouraged from Linux desktop usage in recent years. Shame, as I have been on Linux for a couple of decades now. That said, I confess this is opening my interest. Would love to get myself and my children contributing, and I will start looking for ways to make that possible.
> I only really care about performance and Wayland hasn’t been very convincing [0] with no discernible improvement over X11.
there would be no controversy with
"I only really care about GNOME performance and GNOME Wayland hasn’t been very convincing [0] with no discernible improvement over GNOME X11"
GNOME matters to you; it does not matter to me (xmonad, xterm, browsers). But if all a user sees is GNOME, they may decide it is Linux that is broken as well.
"The real story behind Wayland and X" by Daniel Stone (link above) specifically shows X11 performance problems; it is a developer's view of what is wrong with X, the story which we, as users, do not know. We can't blame developers for trying to implement something sane.
I have no contributions to core projects but I don't blame them either.
Ah, fair. Tracing this back to the root, I see the connection. Since you directed it specifically at me, I took it back just to my own entrance into the thread.
Continuing in that vein: I stand by pointing out that the choice of benchmark matters. I've been burned by my own choices and my peers' choices too often to agree that hypothetical benchmarks will see improvements for everyone.
I also find it dubious that there are many use cases that are better served today than in the past. I want to believe you, but the evidence is coming in weak, with a ton of argument from authority. You don't get a pass to tell users they are wrong just for being a developer.
It would be nice to be able to reproduce the Phoronix results. Do they run in GNOME Shell? What if I run in Sway? What exactly do they expect of Selenium? I've heard the gedit startup example.
Sorry, I can't continue until you've watched the presentation [1].
You are asking us to literally draw conclusions from hypothetical benchmarks where the opposite results will exist.
I am sympathetic to the idea that things needed to start over. I'm annoyed with the lack of honesty and self-critical approach. As framed by you, Wayland is above criticism. Which immediately raises my suspicions.
I have not said that Wayland is above criticism. I have said that the criticism which has been raised thus far is largely invalid, and that the benchmark you pointed to is flawed. If you insist on using flawed benchmarks as evidence for the inferiority of a technology simply because no less-flawed benchmark exists to provide a counterpoint, you are wrong.
How is it wrong? You felt it was just unfair, earlier. :)
To an extent, I actually agree. I just don't care, though. Pointing at comparisons that are not real-world use cases is... annoying. And it feels ridiculously bad-faith as an argument.
More so when it has been a prominent argument in this space for a long time.
So what? What percentage of Wayland desktops are on Linux? Do comments on Wayland generalize to comments on Linux? What percentage of Linux installations are on x86_64? Do comments about Linux generalize to x86_64?
If the majority of new users are exposed to Linux via Wayland, and exposed to Wayland via GNOME, and the GNOME experience sucks, people will perceive that Linux, Wayland and GNOME all suck, regardless of who is at fault.
Has anyone ever bought a car or piece of electronics that was bad because of a particular component and thought "wow, <component OEM nobody has heard of> really sucks" instead of "<name on the box> sucks"?
Desktop Linux may be a niche, but window managers are a niche among niches. Interesting window managers implemented via Wayland are a niche in a niche in a niche.
On the small point wrt "stuff not working" for a user:
There is a future blog post that will point out all the "support myths" about Wayland: e.g., how, contrary to popular misconception, OBS and other standard GNU/Linux applications do work out of the box with Wayland at the time of that future post's writing.
I imagine the impossibility of writing that post today is the reason Ubuntu 20.04 ships with X11 as the default.
No, it is not; that's the whole point of the article.
There's been no release for the past 2 years, even though there are bug fixes and updates waiting to be released. There's no release planned.
If another company steps in and decides to take over support from Red Hat, it could come back to life, though.
Abandonware means "not actively supported" / "no people involved in the project anymore". It does not mean "the maintainers do not adhere to a regular version-bumpy release format that I approve of".
The major X11 implementations have always had phases of stalled development and disagreement. The last big phase led to X.org taking over. Maybe this new big stall is the end, maybe it is just a signal for another change of direction. But there have been doom-and-gloom announcements about X11 before, yet we are still stuck with it.
Yes, it's quite important: the fact that there's no official release means that the project is stalled.
It doesn't prevent creating custom builds by applying patches on top of the last release, but that will only work for small bug fixes, customization and security fixes. Bigger pieces of work will never be tackled this way, such as good handling of hidpi screens, or fixing security flaws such as any client being able to grab the output of any graphical application and read all inputs.
I wonder why we don't see anything out of China for X.org?
They want to be self-reliant in hardware, so why not start with software? Then I wonder if the US government will come around and ban open source like they want to ban encryption in the name of protecting the economy...
Not that I want that last thing to happen, of course.
Update: this toxic development culture in a nutshell is exactly what https://news.ycombinator.com/item?id=24165445 is about. Well, we know it's not limited to Google.