I wrote a (userland) general purpose and driver/hardware-agnostic multitouch daemon w/ gesture support for Linux that works with the existing input stack (i.e. doesn’t require switching to libinput but also supports it), if anyone is interested:
The biggest benefit is that you can use drivers with actually correct acceleration curves like xf86-input-synaptics (if you’re on X11) instead of the offensively bad, NIH reimplementation that ships with libinput.
Oh wait, I’m on HN so I shouldn’t neglect to mention my project is written in rust!
Not OP, but when I tried Wayland way back when, I discovered that libinput didn't let you remap tap buttons, something Xorg does support. That feature is incredibly useful for me, since my touchpad has no physical buttons but does have a dedicated right-click zone, so it's useful to remap two-finger tap to middle-click, since I use that all the time for opening links in new tabs and pasting from the X buffer.
The libinput dev's response was a combination of incredulity that someone wouldn't have physical buttons (in like 2014) and a statement that the project would never add this feature. I haven't gone back to Wayland since.
That "dev" you're criticizing here has been developing libinput solo since the beginning. He asked for help numerous times and never received any. So either step up and do the work, or stop complaining.
libinput is yet another of those fundamental libraries that receive almost no developer attention, but get all the blame when something goes wrong: https://xkcd.com/2347/
He could also not have written libinput to begin with. libinput became the default because of political pressure from Red Hat and now we can’t complain if it’s worse than the project it is replacing?
There isn't any political pressure from Red Hat. They're often the only company that does any work on the input stack in userspace, without them it would probably not even exist or be stuck in a very old state. You can complain but it's unlikely to be of much use when the real issue is lack of manpower and there just aren't enough people to respond to all the complaints users have. A surefire way to solve that would be to start contributing, Red Hat won't stand in your way.
> libinput became the default because of political pressure from Red Hat
I don't know if this is true, but when comparing the KDE configuration screens for both, it sure seems like it must be true. libinput is missing much that synaptics has and doesn't seem to do anything better as far as I can tell. Unfortunately, just because a developer is working hard and pouring his heart and soul into something, doesn't mean the result is actually worthwhile.
It's not really true at all. The libinput developer blogged about why having more configuration options has historically not actually been good for the project, or for users of the synaptics driver:
Particularly, once you add a configuration option, you're now on the hook to support that option indefinitely as long as the project exists and users expect that option to be there. With more maintainers and testers, it may become feasible to have more configuration options, but it's still not a good idea to just keep piling them in. It might be satisfying to see a big configuration panel with a lot of settings but it's a lot less satisfying when you figure out a lot of the configuration options don't work correctly because the underlying system has bugs or is under-maintained or was just never tested with your specific configuration because input is hard and requires near constant testing against an extremely large number of hardware devices.
One-size-fits-all doesn't fit me. synaptics gets the job done and libinput doesn't, it's really that simple as far as I'm concerned.
An example from elsewhere in this thread: "In libinput the edge scrolling is hard coded to 7mm" Somebody please tell this dude that the size of a human finger varies dramatically from person to person. Not everybody has his hands.
As for developer workload, the libinput developer could save himself a lot of work by not starting his project in the first place. It doesn't seem to do anything better than synaptics so I don't see why it even exists.
>synaptics gets the job done and libinput doesn't, it's really that simple as far as I'm concerned.
From a maintenance perspective it is unfortunately not that simple. I sympathize with your frustration and personally I too wish it was that simple but it is not. The synaptics driver might be able to do some things better but those things can also cause bugs to manifest in other areas, and have historically done so.
>An example from elsewhere in this thread: "In libinput the edge scrolling is hard coded to 7mm" Somebody please tell this dude that the size of a human finger varies dramatically from person to person. Not everybody has his hands.
Well this is an open source project so you (or anyone else) can tell him if you really want. Have you checked if there is an open feature request on the tracker for this? Or better, can you propose a configuration API that would work well here, and can you help maintain it and test it on the hundreds of devices that it might potentially affect? I checked and I couldn't find any feature requests or proposals for this but maybe I missed something. If you don't want to do this then you may have to wait until somebody else makes the proposal and until the maintainer gets bandwidth to do that non-trivial amount of work, which is prioritized against all the other feature requests that might have been received.
>As for developer workload, the libinput developer could save himself a lot of work by not starting his project in the first place. It doesn't seem to do anything better than synaptics so I don't see why it even exists.
But that's not the case at all. I'm sure you understand, there are a lot of other devices out there besides your particular touchpad and synaptics touchpads in general. You may want to read the rest of the libinput developer's blog about some of the motivations behind libinput, it solves very real problems that were directly caused by the synaptics driver. If you never encountered those bugs, that's great for you and you can continue to use it, but this was not how it was for a lot of other users. Remember we're still in this area where a critical project like this is only really being maintained by one developer, if they quit then you're left with no developers working on the input stack at all.
> "In libinput the edge scrolling is hard coded to 7mm and previous attempts at making it configurable were rejected."
I guess I should have quoted that full sentence. If somebody else does the work for him he'll still reject it. I've read the blog post you linked, he's most concerned about keeping the software simple, not making it actually work for real people. I think I have better things to do with my time than try to bring this guy around to my way of thinking. When I said "somebody should tell him" I was being facetious; that different people have different size fingers should have been obvious to him from the start, so he's clearly a lost cause.
> The synaptics driver might be able to do some things better but those things can also cause bugs to manifest in other areas, and have historically done so.
Well hypothetical or historical bugs don't concern me. Synaptics is working for me and always has. And I'm certainly not the only one.
> But that's not the case at all. I'm sure you understand, there are a lot of other devices out there besides your particular touchpad and synaptics touchpads in general.
It's a common misconception that the synaptics driver is only for synaptics touchpads. From the manpage:
> The name "synaptics" is historical and the driver still provides the synaptics protocol parsing code. Under Linux however, the hardware-specifics are handled by the kernel and this driver will work for any touchpad that has a working kernel driver.
Edit: I realize I probably sound unreasonably annoyed by software I don't even use, so let me explain: because of Red Hat, distros are now defaulting to libinput and I now have to work around this to continue using synaptics. It's caused me inconvenience despite me never wanting to use it in the first place. I earnestly wish it would just go away.
> he's most concerned about keeping the software simple, not making it actually work for real people.
You think your use case represents "real people"? Your problem is so niche it's not unreasonable for the developer to recommend you fork it and maintain your change as a fork for as long as you wish to use it. Real people don't fiddle with their touchpad settings to that extent. It's quite alright for you to prefer more complex solutions as well, but the vitriol towards libinput, a fantastic solution for 99.999% of users, is undeserved.
This niche isn't "real people" because "real people" don't use Linux. At the same time there are a lot of unreasonable requests, and the developer is giving this away for free and shouldn't be burdened for such little reward. The rudimentary functionality in Linux is quite good; maybe it would be a real issue in daily usage, but "real Linux users" use the keyboard. ;)
I just saw your edit and I'll respond to it separately. I don't understand what you mean by "because of Red Hat". Maybe Fedora and RHEL are shipping libinput but other distros don't have to do that, it's their decision to use it and not default to synaptics. You could probably find one that doesn't use libinput, or you could ask your distro not to use libinput by default, but you may have limited success with that because of the other issues with the synaptics driver that weren't really ever fixed. It's an unfortunate decision that distro developers have to make and it's solely their decision, not Red Hat's.
Although it's not really a coincidence if they make a lot of the same decisions that Red Hat does as they also have to field the same type of bug reports in these components and generally they will make similar decisions focused around minimizing bugs, sometimes at the expense of no longer getting to say that they support a giant feature matrix. I'm using debian for example and I don't think there are any other efforts or desire from downstream to focus on fixing the long standing issues in these old xf86 drivers for hardware that a lot of distro developers may not even have access to so they can test their changes. Wishing libinput to go away isn't really an effective problem solving strategy as that won't solve any of the issues with the old drivers that caused them to get removed as the default.
>If somebody else does the work for him he'll still reject it.
I would like to see this statement where he said he would reject it, I searched and couldn't find any statements to that effect. If it was rejected for some technical reasons then a way to proceed would be to address those reasons and then make a proposal from there.
>I think I have better things to do with my time than try to bring this guy around to my way of thinking. When I said "somebody should tell him" I was being facetious; that different people have different size fingers should have been obvious to him from the start, so he's clearly a lost cause.
I don't think you are operating in good faith here. It seems a bit ridiculous to suggest the person who maintained the very synaptics driver you're using doesn't know that finger sizes can vary. Again, if you want to change minds, the best way to do that would be to make a proposal that will actually work and offer to help shoulder the maintenance burden. That goes a lot farther towards convincing people toward your way of thinking than anything else, if you haven't done that I don't think you can honestly say you've made a complete effort. At least that is my view on how these things go when somebody says they're overwhelmed and they need help. This is pretty much exactly what happened with the recent touchpad changes, somebody else raised some money and offered to take up a lot of the work, and that's why we're commenting on this article :)
>Well hypothetical or historical bugs don't concern me. Synaptics is working for me and always has. And I'm certainly not the only one.
Sure, but for some others it hasn't worked, and for the maintainer the historical bugs always concern them. If you intend to work on this and contribute in a meaningful way, you can't ignore those. A good way to start contributing might actually be to go and look at some of those historical bugs so you don't cause a regression.
>It's a common misconception that the synaptics driver is only for synaptics touchpads.
I am aware of this; the scope of libinput is still much larger than the scope of that driver. That was my point.
> I would like to see this statement where he said he would reject it, I searched and couldn't find any statements to that effect.
It follows from his reasoning that allowing the user to configure the driver to their personal preference would make his job harder because it increases the number of possible configurations. And also from the fact that he has already rejected such a proposal.
Can you please share the email or issue where the proposal was made and where it was rejected? I still can't find it. I'd like to continue this conversation and discuss what can be done about it, but we can't discuss this much further without that because I don't really know what you're talking about. I will even look at the proposal and tell you what can be improved about it so maybe it can be made again in a better way. If the reason it was rejected is that it made his job harder, a way to solve that would be to offer to contribute so you can make his job easier and take some of the maintenance load off him.
I exclusively use the trackpoint for mouse input and have configured edge scrolling to occupy the entire touchpad, meaning that I can use my thumb to scroll without moving my hand away from the trackpoint.
In libinput the edge scrolling is hard coded to 7mm and previous attempts at making it configurable were rejected.
Some HP laptops (EliteBook 8xx) have a track point but only have two physical buttons.
They're also quite large and flimsy, so I expect them to develop enough play that reliably pressing both at the same time becomes frustrating after a while.
You’re most welcome. Please read the article before downloading or cloning from GitHub: this “solves” the acceleration issue by not breaking it in the first place; imho this is what libinput should have done, i.e. develop gesture support on top of the existing, working driver/hardware stack instead of throwing away the userland/kernel separation and all the layers of abstraction.
The linked project is only the userland daemon that uses the raw Linux Multitouch Protocol events to interpret gestures; to get the acceleration curves working again I recommend using this in conjunction with xf86-input-synaptics as your actual input driver (which provides the correct acceleration curves for compatible hardware - basically all touchpads since they all cloned synaptics’ hardware once upon a time - but doesn’t by itself have gesture support).
Side note: Syngesture (this project) isn’t dependent on X11 or Wayland, but I don’t know if anyone has ported or will port xf86-input-synaptics to Wayland, which is/was pretty much developed along the same lines as libinput: rewrite everything from scratch without taking into account that some things you’re throwing out actually work really well (regardless of whether they were built for X11), get rid of all the abstractions that made it possible to plug in better components/replacements at various points in the stack, and don’t concern yourself with actual feature parity with what you’re purportedly replacing.
>rewrite everything from scratch without taking into account that some things you’re throwing out actually work really well
This is not what happened. The synaptics driver and libinput were/are maintained by the same person, who has written about it extensively and explained why the approach in the synaptics driver does not work well. Specifically, some of the problems with the synaptics driver have directly led to decisions in libinput. You may want to read some of their blog posts if you haven't:
It may seem like a good idea on its surface to have tons of abstractions and features and extension mechanisms, but that fades away when you get to deal with the bug reports over a period of years and get to see all the problems it causes.
You mean click with three fingers once and let go to initiate a drag, move the cursor with one finger on the touchpad to its destination, then click and let go with three fingers to end it?
That’s a very good question as I didn’t consider stateful gestures (where state doesn’t reset when all fingers are removed). It can be shimmed (without touching the code) in the configuration file by replacing the invocation of xdotool (or whatever) with a wrapper script that provides state, enabling a “toggle mousedown” rather than just explicit, separate mousedown and mouseup events, but I wonder if there’s a clean way to model that in the event loop as serialized to/from the configuration file directly. Feel free to open a GitHub issue if you like.
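For illustration, a minimal version of such a wrapper might look something like this (an untested sketch: it assumes xdotool is installed and uses a throwaway state file under /tmp, both of which are just placeholder choices; the gesture config would invoke this script instead of calling xdotool directly):

    #!/usr/bin/env python3
    # Hypothetical shim: toggles between mousedown and mouseup on successive
    # invocations, so a repeated three-finger tap acts as start-drag / end-drag.
    import os
    import subprocess
    import sys

    STATE_FILE = "/tmp/syngesture-drag-state"  # placeholder path, anything writable works
    BUTTON = "1"  # left mouse button

    def main() -> int:
        if os.path.exists(STATE_FILE):
            # A drag is in progress: release the button and clear the state.
            subprocess.run(["xdotool", "mouseup", BUTTON], check=True)
            os.remove(STATE_FILE)
        else:
            # No drag in progress: press and hold the button, and remember that we did.
            subprocess.run(["xdotool", "mousedown", BUTTON], check=True)
            open(STATE_FILE, "w").close()
        return 0

    if __name__ == "__main__":
        sys.exit(main())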
> You mean click with three fingers once and let go to initiate a drag, move the cursor with one finger on the touchpad to its destination, then click and let go with three fingers to end it?
Not quite.. on macOS, you can enable three-finger drag which means that when you touch down with 3 fingers and drag them on the trackpad, it's the equivalent of clicking-and-dragging with a mouse.
There's no laborious use-three-fingers-then-one-then-three-again dance. Just like swiping with 2 fingers does scrolling, using 3 fingers just grabs whatever's under the cursor and moves it (whether that's moving a window, selecting text, dragging a scrollbar, anything).
That seems.. harder to use than what I described because being precise with three fingers (eg precise enough to drag to the correct row in detail view) isn’t easy. Also gesture accuracy decreases as the number of tools (read: fingers) goes up.
In all cases, that is easier to implement in code than a stateful transition like I described since there is no state persisting beyond the gesture itself; it just needs gesture_start and gesture_end support (with the difficulty being in determining if a three-finger gesture has started or if it’s actually a three-finger gesture on its way to becoming a four-finger gesture).
It may hurt accuracy, but there's a lot of times using a mouse where you don't need to be accurate. I've been using 3-finger-drag since its introduction and can't go back.
I picked up the habit when a “real button” (before the full glass surface Taptic Engine) broke.
I now find it less of an effort not having to press down to register a click. 3-finger-drag now makes for a more relaxed, less RSI prone experience for me.
> That seems.. harder to use than what I described because being precise with three fingers (eg precise enough to drag to the correct row in detail view) isn’t easy.
On macOS the accuracy is pretty much the same as with a single finger. As in with a single finger you reliably point to a single pixel and with three fingers you can reliably point to a 2x2 area, even with them spread far apart.
(Not trying to dismiss anyone's effort, just pointing out that Mac trackpads set the bar really high.)
And there are techniques, like, when you need to be more accurate, instead of moving the three fingers, you move two of three or one of three, so the average position calculated by the trackpad still moves in the direction you intend, but slower and more accurately than moving the entire hand.
Whoa, TIL. This feature seems to be hidden away in MacOS under Accessibility settings -- it's not available in the regular trackpad gestures settings menu. So for some reason Apple seems to not want to show it very prominently. But I just enabled it and at least initially it seems really useful, so I wonder why.
I believe three-finger drag used to be a first-class citizen before the introduction of Force Touch trackpads. Those made click-and-drag much easier than the older springboard design.
I'm not sure if it was default but it was definitely prominently present in pre-Force Touch trackpads. I never liked the new click and drag gesture because I don't like to exert force when using the touchpad.
The gesture is very well implemented actually. It's surprisingly precise and feels really natural in the sense that you could release and resume dragging even after all your fingers leave the touchpad. Nothing else comes close.
Help me understand what I'm missing. It sounds like what I do with one finger and my thumb.
I move the cursor over what I want to move, then click with my thumb and keep it pressed down. That initiates the drag. Meanwhile I can use my pointer finger to move the cursor wherever I want, dragging the whole time. Once I've moved the cursor to where I want, I remove my pointer finger so it stops moving and then release my thumb to drop.
Is it another way to do the same thing? Why would I use three fingers instead of just using a normal mouse click with my thumb?
Because you don't have to click, and you can drag over a longer distance. With your approach, you are limited by how much you can pivot your wrist around your fixed thumb.
I personally don't like this and very much prefer the "tap-hold with a small delay" for my dragging needs. I'm very happy to see this has been implemented on Linux, too.
You might be interested to learn that there is no requirement to leave your navigating finger on the track pad. No need to pivot your wrist at all. Pick it up and move it around as much as you want, covering only comfortable distances.
You are obviously used to the three finger way, and to each his own. But for me, having it described as “it's the equivalent of clicking-and-dragging with a mouse” makes me wonder why not just click and drag? But who knows? I’ll have to try it and see if I like it.
However, my post may have been unclear: I don't like the three-finger drag, so I'm not used to it. I'm just trying to find reasons why some people may like it, so I may not have the best ones.
The thing is that you don't "click" anything, you just put your three fingers on the touchpad and slide them around. Exactly the way you would scroll with two fingers: no clicking involved.
After a quick glance it seems this only supports simple actions upon registering a gesture; however, it should be possible to extend it for that use-case.
To abstract it a bit:
You would need to
1. Trigger an event on the gesture start (detect 3 fingers -> hold alt, mousedown)
2. Have mouse movement enabled
3. Trigger another event on gesture end (lift fingers -> mouseup, lift alt).
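In code, the outline above might look roughly like this (a hypothetical sketch: the hook names are made up, and xdotool is just one way to synthesize the button/key events):

    # Hypothetical callbacks a gesture daemon could invoke; names are illustrative only.
    import subprocess

    def on_three_finger_start():
        # 1. Gesture begins: hold a modifier (if desired) and press the left button.
        subprocess.run(["xdotool", "keydown", "alt", "mousedown", "1"], check=True)

    def on_three_finger_end():
        # 3. Gesture ends: release in reverse order.
        subprocess.run(["xdotool", "mouseup", "1", "keyup", "alt"], check=True)

    # 2. Between the two calls, ordinary pointer motion keeps flowing, so whatever
    #    is under the cursor follows the fingers exactly like a click-and-drag.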
You’re correct, but in all honesty the devil is in the details. The difficulty here is determining if a three-finger gesture has started or if it’s actually a three-finger gesture on its way to becoming a four-finger gesture - the sort of issue that plagues first input delay on mobile devices because it’s not clear if a tap is a “tap to scroll” or a “tap going to become a pinch to zoom.”
The solution is simple but somewhat ugly, just like on Android and iOS: set a maximum time limit (up to 250ms) before going with the number of fingers detected.
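A rough sketch of that heuristic (hypothetical; read_finger_count stands in for however the daemon queries the current touch count):

    import time

    SETTLE_WINDOW_S = 0.25  # the "up to 250ms" limit mentioned above

    def settle_finger_count(read_finger_count) -> int:
        # Poll until the settle window expires, then commit to the highest
        # finger count observed (a 3-finger touch may still grow into 4).
        deadline = time.monotonic() + SETTLE_WINDOW_S
        count = read_finger_count()
        while time.monotonic() < deadline:
            count = max(count, read_finger_count())
            time.sleep(0.005)
        return count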
Also, I’m not sure if macOS is distinguishing between tap and click? The online documentation says it’s with force click enabled.
Being more serious, that document has a bunch of graphs of libinput's acceleration curves for different devices. I'd love to see a write-up of the curves you've implemented and why they're superior in your opinion.
I'll make the same point I make every time this topic comes up: there are Chromebooks with great touchpad experiences, and that's been the case for a long, long time. This is not a "Linux" or "open source" problem; this is a problem of ignorance and/or insufficient interest on the part of traditional (i.e. non-ChromeOS) distros and their users. It's nuts that this was and continues to be a high-profile, multi-year effort spawning discussions that end up framing the whole thing as elusive—rather than, you know, a solved problem that is pretty much not even worth mentioning but for the long tradition of poor execution.
> This is not a "Linux" or "open source" problem; this is a problem of ignorance and/or insufficient interest on the part of traditional (i.e. non-ChromeOS) distros and their users.
Heyas! I worked on a touch screen driver, related but obviously not the same thing.
So, consumer hardware, end to end control over every part. This gave us the chance to calibrate our driver to our specific hardware.
Originally myself and another engineer worked on making the touch screen driver work, and it was "ok". After a while, we did a complete rewrite, and another couple of amazing engineers joined in, and it became "pretty good." We had some fancy state machine for gesture detection, and the code was about as good as we were going to be able to make it.
Then a team of experts came in and threw a bunch of math and ML at the problem to make the experience amazing. ML was used to fix up gestures so they actually worked properly, reduce latency (motion prediction!), and fix jitter, because touch screens are inherently analog devices and therefore noisy as heck.
Could the two of us on our own have made an amazing touch screen experience? Heck no. No matter how many hours we put into it, we were never going to be able to match what that team of dedicated engineers brought to the table.
And besides that, building an ML model up for the exact hardware we used is something that you just can't do when writing a generic driver that has to work all over the place.
There is a reason Apple gets a better UX out of a hardware stack they own end to end.
> It's nuts that this was and continues to be a high-profile, multi-year effort spawning discussions that end up framing the whole thing as elusive—rather than, you know, a solved problem that is pretty much not even worth mentioning but for the long tradition of poor execution.
That’s what I thought too until I actually started trying to solve the problem. Could you offer a more specific prescription for how you would go about taking the touchpad experience of ChromeOS and getting it shipped in a Linux context, if you were running this project? You seem baffled that this has not been solved yet, which makes me wonder if you know something that we don’t about the specifics of how Google’s source for ChromeOS touchpad could be trivially co-opted?
As an addendum, I also think this isn't exclusively a software problem.
I use a Magic Trackpad 2 connected to my Linux Dell (work laptop) and it's great. Indistinguishable from using the same device connected to my macOS Macbook (personal laptop).
I do use Touchegg for multitouch on Linux, but the responsiveness & behaviour hasn't changed otherwise with/without Touchegg installed so it's not that piece of software that's improving things. Also, the built-in touchpad on the Dell is ok-ish, but definitely inferior.
Android is also technically a Linux distribution, and as you could imagine, its support for touchscreens is great. You can do amazing UI stuff with Linux if you throw away the "standard" desktop stack.
When I use my trackpad on GNOME 40, the jitter is downright embarrassing. On a bad day, you can leave your two fingers in place and it'll jump around 25% of the page in a web browser, or the cursor starts to get drunk shakes.
It's so much better on Windows. This is a software problem, and right now the Linux ecosystem is doing a pretty poor job.
It's one thing to make a great touchpad experience on hardware you control, if you're willing to optimize the driver and stack for your hardware. It's quite another thing to support arbitrary hardware, most of which you can't test on.
I agree with you and would include most or all Chromebooks I've experienced on that list. My experiences include the Acer C720, the Acer R11, the HP 11 (the white version that was recalled), the HP 14, and the original Chromebook Pixel laptop.
Maybe the HP14 was not quite as great as the rest although I think it was fine, while the Pixel experience was exceptional. I don't think it's a matter of a specific one, I think it's generally across the board a very solid to good experience.
You don't need gestures or a touchpad at all in a terminal. I'm surprised the touchpad doesn't go against some sort of ethos; then again, maybe that's why they suck so badly? Real coders don't let their hands leave the keyboard!
But coding is not about maximising keystrokes. If you were a copywriter I'd imagine this matters a lot. With coding the actual amount of code written is not very high, it's the thought that goes into it.
I don't code an awful lot but I don't find reaching for the mouse a productivity issue. Also the different motions are good at using the hand/arm muscles in more diverse ways.
However, whatever works best for each of us, of course. I love this about open source: everyone can pick what works best for them.
And using a terminal for everything blindly is just dumb. It’s a tool, it has its uses, but filling text buffers instead of manipulating pixels directly for general purpose GUIs is a hack and just dumb.
I use two-finger scrolling on my touchpad when using a terminal. Is it necessary? No. But it's certainly easy and convenient, particularly since it works the same as it does in every other application. I can scroll faster in man with my trackpad than I can using the j and k keys. I know I could also use pgup/pgdown, but with those I have to move my hands anyway and I'm more likely to visually lose my place in the document (same problem with d/u, and the others.)
> Bill believes that the biggest opportunity to improve Linux touchpads is to adapt their acceleration curve to better match the profile of a macOS touchpad. How do you feel about the acceleration and precision that your Linux touchpad offers?
Is this work only going to be for touchpads? I personally hate the X11 curves with mice and vastly prefer Apple’s. It seems to be hard coded last I checked and not easily modifiable (there are two parameters now but it’s still a very different curve). If trackpad curves also benefited mice (particularly those of us who use Apple’s Magic Mouse on Linux), that’d be amazing!
Thank you for your efforts to improve these ergonomics — it’s thankless and hard work but benefits many.
For now we only focus on touchpads. I think that if we're successful in delivering touchpad improvements then we will gain credibility and trust that could be useful when working on other input devices.
Yes incredible what they're getting done with a shoestring budget.
From the announcement "The number of people keeping this project going is tiny (currently just 121 supporters), but this small group of passionate Linux users are creating meaningful forward progress to improve the touchpad ecosystem for hundreds of thousands of Linux touchpad users. For those who don't want to rely on a future beholden to Apple, we hope that you'll consider supporting us? We could be getting more done if we had 250 supporters. "
Your report says Firefox gestures are working on Wayland, and two finger swiping left/right appears to be configured in the Firefox prefs to go back/forward in history:
However, Firefox doesn't respond to these gestures. Do you know what's up with that? (I'm on Fedora 35, if that's relevant.) Two finger scrolling up/down works just fine.
Interesting, this feature may only be for touchscreens because two-finger swipes are registered as scrolls on Wayland. This will indeed need further work.
What does work though is two-finger pinch gesture to zoom in/out of a web page.
IIRC these weren't for two finger swipes at least on macOS (which is the only place I saw this working). I think these were handling 3-finger swipes. Unfortunately now GNOME chomps 3-finger swipes so IDK if that is where the problem is (I guess someone without GNOME can try and see).
Fedora with Gnome is only one desktop environment and one widget toolkit. For example, Qt-based apps didn't have touchpad gestures at all anywhere until my work on Wayland gestures landed this year.
It is true though that if one limits oneself to Wayland and only to Gtk-based applications, touchpad gestures mostly worked before. We have now switched focus to adding touchpad gesture support to more applications, so there will be measurable progress for this case too.
In short - there are no binaries and it's relatively hard to compile these manually, so I recommend waiting until Linux distributions pick these projects up. This usually takes around 6 to 12 months.
Holy cow, that was a LOL from hell. Are you serious? 6-12 months is the biggest tease. "Here's this really cool thing, but maybe, if you're lucky, you'll be able to use it in a year or so." That's the quarter super glued to the floor kind of frustrating.
It's not super glued to the floor. It's on a train, on its way to your station. How far away your station is from the train entirely depends on the length of the release cycle of your Linux distribution, and is completely outwith the control of this developer. That's how Linux distros work. If you want it sooner you can always use a rolling release, such as Manjaro.
The new X server is already in Debian experimental [0], with a bit of luck it trickles down to unstable just in time for Ubuntu to pick it up for the 22.04 release.
It uses a custom widget toolkit. Adding touchpad gesture support is certainly doable, but it would benefit only a single application, so we haven't prioritized that so far.
I can't wait until all proprietary software becomes obsolete due to FLOSS becoming good enough for everyone. From what I see the proprietary operating systems are in decline now, whereas Linux is getting better and better all the time.
As part of the FLOSS rooting crowd, I'd really like that to happen. But we have to face reality: on the desktop, after decades we're still fighting for 2% of the marketshare.
Not that progress hasn't happened. Quite on the contrary. The Linux on the desktop experience nowadays is VASTLY superior to even 5 years ago. Of course, users from other camps will complain of features that have been lacking for many years (thumbnails in the file picker!) but alternatives aren't without their problems either: get a seasoned Linux user to try MacOS or Windows and you'll recognize weaknesses in the other approaches too.
For me, linux on the desktop has been ready for a long time. I don't care if the market share is still in the single digits if it is good enough for me. Since I like the freedom, privacy, security and (yes) ease of use of desktop linux and do not depend on non-multiplatform software and services, there's just no better OS for me. Not everybody has this choice though.
So, linux on the desktop not being popular is not a problem. As long as it stays popular enough not to be ignored by hardware vendors and service providers, everything else will continue to improve over time.
Gnome's file picker is notoriously, offensively bad - to the point that it makes me completely lose trust that any thought has been put into the human factors of the rest of the system. Plasma, on the other hand, continues to surprise with its thoughtful and accommodating design choices.
Life is too short, and brainpower too limited, to justify wasting either on frustrating software.
I love KDE too. It's so configurable and just plain powerful. And looks great as well.
I really felt so empowered when I left macOS for FreeBSD with KDE and I still do. I found I really hate opinionated software because usually I don't have the same opinions :)
It's great for me. It runs for months as a daily driver without any issues. No crashes. In HiDPI also.
However I run it on a desktop NUC with wired everything. I don't use WiFi, Bluetooth or Suspending. I'm not sure if the more laptop-related use cases will be ok. For example I had to turn off my bluetooth controller because it hangs the boot process. I think this is just a peculiarity of my particular controller though.
Also I have one weird key repeat thing I still have to investigate :) But I think my Apple keyboard is just a bit funny.
Not just manual installation but fully automated remote installation too. From a manageability perspective, the 2000 series was a leap forward, with Active Directory and Group Policy. I always found it odd that Apple never even attempted to provide a proper solution for remote management of their OS and just left it to 3rd parties like JAMF. (I know they've made inroads into "enterprise" since then, but this definitely hamstrung them there for a long time)
It's funny you have concerns with Windows using too much memory at idle, I actually have no idea how much memory my daily-driver Ubuntu install uses at baseline, but my #1 issue with desktop Linux is how badly it behaves when low on free memory. Windows and MacOS will at least warn/prompt you to kill programs. On Linux, by default... the mouse just stops moving and you can't do anything. You can fiddle with settings and install tools like earlyoom to mitigate it, but the default behavior is insanely bad for a desktop OS. Despite having 32GB of RAM I still run into it occasionally and it drives me nuts.
The more I am using Linux the more I wish I had gone with Fedora for my personal desktop when I was forced to switch from Windows to Linux a couple of months ago (forced by hardware issues with an unused onboard video controller which I can set Linux to fully ignore but sends Windows in an infinite reboot loop).
Don’t get me wrong. Out of the Windows, Mac and Ubuntu machines I have, the Ubuntu one is still my favorite (especially since switching from 20.04 LTS to 20.10) but Fedora is doing things so much more along the lines of what I want.
It’s more cutting edge but still very stable and well tested. It doesn’t force snaps onto me. I’d much rather install flatpak software. And it has a cleaner Gnome interface. Even though I kind of like the Ubuntu sidebar dock, it’s not worth all the other awkward behaviors with Ubuntu trying to override Gnome (for example, the existence of the Ubuntu Software store really annoys me since it doesn’t support Flatpak and simply seems to run a lot worse than the Gnome software store that I also have installed).
Indeed, I think I'm going to make the jump to Fedora from 20.04. Really hard decision too as I've been an Ubuntu guy for the majority of my computing life.
Agree on all points as well - Vanilla GNOME > Unity, Flatpaks > Snaps. The Snap Store is just GNOME Software with some plugins/tweaks FWIW, this is changing (has changed?) soon though. I learned this by investigating a memory leak https://gitlab.gnome.org/GNOME/gnome-software/-/issues/942
Get a good amount of swap on ssd, the default 2GB on most distros is not nearly enough to save you when you really need swapping. IIRC you can even make it sparse to not occupy disk space when you're not using it.
> IIRC you can even make it sparse to not occupy disk space when you're not using it.
While this sounds great, not allocating storage for swap means either you'll have that space unallocated or you won't have any swap because your system is full.
Administrative overhead is simpler though, and sparse files don't have any performance overhead in later kernels (Citation needed, read somewhere on some wiki)
I have a couple of sparse swap files handy for when I know I'll need them.
And it happens with some very specific and predictable use cases.
Usually it's when I need to load some big timeseries csv files from pandas and parse a datetime index. Or when I need to process big images. It's useful to be able to attach some additional swap when you need it and drop it afterwards.
For everyday use I never even exceed 16GBs... Guess everyone has different needs, but there is no way it swaps over 32GB for normal applications, even taking into account overcommitting.
I'm gonna go wild and crazy and assume you work with IT in some way since you're on HN.
How on earth does idle RAM consumption matter to you when a GB of RAM is cheaper than a beer?
I'm not a Windows fan, I don't let Windows control any hardware other than a GPU on any of my systems but i do run it in a VM for gaming. I just don't see the problem, more RAM usage could even be better for performance. It's a useless metric.
I have 40GB RAM (Weird number indeed, 8GB soldered, 1x 32GB stick) in my machine. The problem I have is that even with 2 WDS500G2B0C-00PXH0 NVMe drives in RAID1 it takes a while for the machine to hibernate and resume from hibernation. It's faster to boot the system cold (but then I don't have state).
I've been reading but never really figured out. Can ZSWAP or ZRAM write compressed to disk somehow too?
>to the point that it makes me completely lose trust that any thought has been put into the human factors of the rest of the system.
This doesn't really make any sense. The issues with the file chooser are known, what's missing are design and development resources to do a redesign. The main problem is actually that the human design effort is being put into the rest of the system and not here; losing trust because of this is somewhat of a self-sabotage that you probably want to avoid.
I can't stand KDE/Plasma for what many would consider a silly reason: I want Confirm/OK/Next actions to be in the lower right side of dialogs, and Cancel/Back in the lower left.
I have a vague memory of this being configurable a long time ago (maybe KDE 3), but this setting disappearing in later versions.
Is it possible to change this in later versions of KDE?
KWin is great, as are Dolphin, the 'System Settings' GUI, and many other KDE applications. But for the life of me I cannot understand the praise of Plasma. It crashes or otherwise breaks regularly for me (for instance, my autohiding panel will sometimes disappear completely, forcing me to kquitapp5/kstart5 plasmashell to get it back). Edit mode is a usability nightmare; I feel like I need to be super careful about where I move my mouse while using it because otherwise it does things I didn't intend with errant mouse hovers. And while it may not be the fault of plasmashell itself, many of the plasmoids are broken, janky, or generally just not very good. While I use KDE, I try to avoid all the plasma stuff as much as possible.
> after decades we're still fighting for 2% of the marketshare
I was using desktop linux for several years and recently switched back to a Mac because of the massive hardware improvements Apple made with its M1 chip.
As much as I loved Linux, it's no longer just a software UX thing. The battery life and speed of the new Macs beats any Linux machine I could have bought for a comparable price.
This is only a temporary thing though. Linux will be made to work on M1 hardware (getting pretty close already) and Intel is already heading in the same direction.
Not trying to take away anything from the amazing Asahi Linux project, but as far as I know it doesn’t include the embedded GPU as of yet — so pretty close may unfortunately be a little bit further away still.
>get a seasoned Linux user to try MacOS or Windows and you'll recognize weaknesses in the other approaches too.
The tricky part of identifying “weaknesses” coming from and going to any two OSes or desktop environments is figuring out exactly what qualifies as a weakness and what is simply unfamiliar.
Like for example, a lifelong Windows user is probably going to think that anything that’s not the spitting image of a Win9X style desktop is chock full of “weaknesses”, and a diehard tiling WM veteran Linux user is going to see anything not built around tiling as “weak”, when neither is fully true in an objective sense.
This is a bit of a frustration for me as someone who primarily uses macOS but dabbles in Linux: most people haven’t used macOS extensively, and so Mac-style desktops tend to be seen by the individuals who develop DEs as full of “weaknesses”, and as a result there are no Linux DEs that are mac-like beyond the surface. There’s no shortage of Winlike DEs though.
FWIW I've (mostly) moved from Mac to Linux for around a year now & decided on elementary OS which has Mac as a primary influence: https://elementary.io
I don't like everything about it but it has been the most usable/value-aligned Linux distro I've encountered so far.
Sometimes I wish everyone in Linux just came together and worked together in one big kumbaya. Arch its rolling release base, with Nix for configuring it, Elementary its UI and UX, and pop_OS’s willingness to put users ahead of principle. Top it off with Red Hat backing and Ubuntu popularity and oh baby.
I know I know, strength in diversity. But a man can dream..
I’ve been keeping my eye on that and it will probably come closer than anything else to date, but I’m skeptical that it’ll nail everything so long as it’s using Qt. Much of what makes macOS interesting is rooted in Cocoa, so a good macOS clone is going to have to be built in something that’s a close analogue (if perhaps modernized in some ways) to Cocoa.
graphics drivers alone are in the tens of millions of lines of code (counting both kernel and user mode). there's no way open source community can do that amount of work for free every couple years as long as new hardware is being developed.
this may become the case once hardware is good for 20 years. we aren't there yet, though we're closer than a decade ago. (typing this on an aging 8 year old desktop, which is really due for an upgrade).
The FOSS community depends on contributions from GPU vendors to have their products supported on Linux in a reasonable timeframe. As long as the contributions are released under a FOSS license, the vendors are part of the FOSS community.
Intel provides excellent Linux support for its GPUs through open source drivers,[1] and is about to launch a line of dedicated GPUs in early 2022 with the same level of support.
AMD has provided good support for essential GPU features through its open source Linux driver since 2015,[2] but compute features such as full OpenCL support (for most AMD models) are still locked in its proprietary drivers.[3]
Nvidia's open source Linux driver contributions are minimal, and they've earned a bad reputation for that. The reverse-engineered open source Nouveau driver is an incredible effort, but falls behind Nvidia's proprietary driver in performance and feature support.[4] This is what happens when the hardware vendor doesn't cooperate with the FOSS community.
I've got to agree, nvidia have lost any love I might have had for them, and I'm definitely not going to be buying another product from them any time soon.
I'm still very impressed with the great work the nouveau people are doing to work around nvidia's stubbornness though, so congratulations to them.
So does Intel also include their compute features in the mainline drivers, unlike AMD?
That'd be something to push me towards them when GPUs become available and somewhat reasonably priced again
Intel's compute features are in a separate open source package called Intel Graphics Compute Runtime.[1] Integrated graphics from 2014 (Broadwell) and later are supported. Support for discrete graphics (2020 onward) is in development.[2]
AMD provides an open source OpenCL solution for Linux called ROCm, but the project is too limited to be helpful in many use cases. ROCm supports a total of 6 GPU models, all from previous generations,[3] and does not support GUI-based software applications such as Blender.[4]
It would rise to 5% if they just fixed their god damn installers.
I've installed Linux tens of times and the gui installers have NEVER left the system in a bootable state, not even when I choose the 'wipe everything' option.
Just a note that linux is ridiculously insecure in the default daily user mode. You have no sandboxing, everything runs as the same user with the same privileges, etc. Just a rogue app writing a single line of code (with no permission problem at all) to bashrc can basically do whatever it wants on your computer, including screensharing, key logging, encrypting your whole home folder, sending your ssh folder to somewhere. The only thing it can't do is install a goddamn video driver (as per the relevant xkcd comic), but with keylogging it will see the sudo password sooner or later.
Fortunately, Fedora does have SELinux, which makes it a bit better, but a proper sandboxing solution is way overdue (firejail and flatpak are not necessarily the best solution, it should be automatic). Like, at least copy what Android does on the exact same OS.
I agree completely. I don’t care if it’s not mainstream. Desktops are usually gaming computers from what I see in my groups, so no reason to have Linux on there. Linux is on all the servers, most mobile devices, and as I mentioned even Sony cameras run Android.
I don’t see any benefit to Linux on desktop being mainstream. I like that it keeps away a lot of users: not making things too easy for people who are unwilling to compile from GitHub is a feature, not a bug. It’s mostly a programmer OS for programmers.
I find this attitude completely elitist and counter productive. Why shouldn’t Linux be for everybody, usable on every type of device? Why should we need to rely on proprietary software for our day-to-day tasks? Don’t you think there is benefit to this?
I have to agree with the OP. If Linux is going to be for mainstream users, it will eventually succumb to the same issues that plague proprietary OSes. Like low configurability (gnome is already doing this). Mainstream users want very different things from a computer than we do. And in many cases they don't even want a computer anymore, they'll just use their phone or iPad for everything.
I also don't think the mainstream really care about privacy and control. Most of them are really happy in their walled gardens. As a power user I don't want to 'dumb down' my experience and work towards that, so if they really want it they'll have to buy it from a vendor that does it for them. Most of the big names are trying to get there already (like canonical)
And yes eventually a company will make another walled garden based on Linux but it will suck because of the vendor lock-in, limited access to its internals ("because otherwise we can't support it"), and the move to commercial subscription services. Basically this is exactly what ChromeOS is already.
Personally I just think that a Linux that's for everyone is simply not Linux as we know and love it anymore.
I think the problem is mostly that mass popularity is sort of at odds with the cowboy attitude of a lot of Linux desktop users. Creating a "standardized experience" like Windows usually means that configurability goes right out the window. It's how you get abominations like dconf or the GNOME music player that won't let you change the directory to read your mp3s from. And a lot of people see things like Wayland this way. Sure, maybe its easier for the average user to have all these formerly separate components like hotkey daemons or screenshot software integrated into one compositor. But why shouldn't I be able to run xbindkeys or sxhkd or whatever hotkey daemon I want? (I know there are reasons for/against, I'm just summarizing the argument.)
Wayland doesn’t prevent you from doing that. It just prevents you from doing that without privilege, because let’s be honest, who thought it was a good idea to let any random program snoop every single keypress?!
>Creating a "standardized experience" like Windows usually means that configurability goes right out the window. It's how you get abominations like dconf or the GNOME music player
I don't understand how you connected these dots and I'd suggest against calling things abominations. You don't have to use dconf or the GNOME music player, those aren't standardized. If someone does like them I think they're perfectly fine, they do exactly what they're advertised to do and nothing more than that. It's also fine if you don't like them, they're just two options from the many configuration databases and media players that you can choose from.
>But why shouldn't I be able to run xbindkeys or sxhkd or whatever hotkey daemon I want?
In some ways you actually can but it depends on the hotkey daemon and how it's implemented. The reason for that is technical, those are implemented with X grabs which are an X11-only API and they have a number of usability and security issues. There are a few key rebinding daemons that use evdev directly so they work with both X11 and Wayland, and also on the console:
But these also do have similar security issues to X key grabs, in that they effectively operate as keyloggers. If you're looking for an API that works purely within Wayland and lets unprivileged clients request key rebinding, that doesn't exist yet. Somebody would need to specify what that API looks like and figure out a good way to make it secure. What would the end goal of the API be, and how could the system (and by extension, the user) tell the difference between a legitimate hotkey daemon and a malicious keylogger? And would it actually be any better than the approach of snooping evdev? I don't know the answer to these questions but you may have more experience with this than I do.
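For concreteness, the evdev-snooping approach those daemons take looks roughly like this (a minimal sketch using the python-evdev bindings; the device path is hypothetical, and you need read access to /dev/input, which is exactly the keylogger-equivalent privilege in question):

    from evdev import InputDevice, categorize, ecodes

    DEVICE = "/dev/input/event3"  # hypothetical; enumerate real paths with evdev.list_devices()

    dev = InputDevice(DEVICE)
    for event in dev.read_loop():
        if event.type != ecodes.EV_KEY:
            continue
        key = categorize(event)
        # keystate: 0 = up, 1 = down, 2 = hold; keycode may be a string or a list of aliases
        codes = key.keycode if isinstance(key.keycode, list) else [key.keycode]
        if key.keystate == key.key_down and "KEY_F9" in codes:
            print("hotkey pressed - run the bound action here")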
>Why shouldn't Linux be for everybody, usable on every type of device?
It is, but you often have to put the effort into it. Windows isn't usable for everyone, but that doesn't make me elitist; OSX is hard for others, but I don't see that being a problem. Some people can't do everything you do, and there are differences in our abilities. If you expect everything to be easy, and people aren't willing to put in work to have a functional OS, you should not expect them to use linux; it offers no benefits to most users, it's made by hackers to hack on. Android is linux for everyone, and even then they find it hard. Don't expect people to run when they can't crawl.
>Why should we need to rely on proprietary software for our day-to-day tasks?
Because the free stuff sucks, GIMP still sucks. They stuck to having 3 windows open forever, GNOME doesn't like being configured, GTK always breaks stuff, and I just want stuff that "just works". When FOSS does it well, like KDE being better than Windows's UI or Firefox being better than IE6, I will choose it. I don't pick based on their principles, I just want good software, and like most people I am willing to pay for quality. I refuse to use a pinephone out of principle; I refuse to make my life hell.
>FYI, Gimp have a single window mode since about a decade.
I know, I used it recently. They are so bad with the UI I find myself using Krita as a better photoshop replacement.
>I think saying it sucks is just being rude against one of the most amazing foss project of all times and it’s maintainers.
Comparing it to Krita, it is hard to compliment any of it, the UI, the features, the difficulty. It's been 25 years, and lots of FOSS stuff has caught up with paid options, and I see Krita being a photoshop replacement way before GIMP becomes usable.
If you can't pitch it yourself, I don't see the point, the first link is just about paranoia.
Were you ever curious but afraid:
– to click on that link in the email,
– to open that email attachment,
– to go to that shady-looking website,
– to install and run that suspicious program or even a virus,
– to insert that USB stick from someone untrusted?
With Qubes you do it all securely in a disposable VM and your personal files are safe. The worst thing which might happen is that the disposable VM breaks.
If the pitch is to make people paranoid to run a new OS they need to learn isolation on, there are way easier ways like running a normal VM, which most people are still not going to understand very well or do.
> If you can't pitch it yourself, I don't see the point
I linked my own text.
Security is one benefit, and I agree that it's not so important for everyone. There are other benefits, as described. A normal VM is much less secure, because the host OS will have the Internet access and you will not benefit from hardware virtualization. It's also less convenient in my opinion. The UX of Qubes is really good.
>Were you ever concerned about opening your personal email (controlling numerous online accounts) in the same browser where you go to random websites? Actually, even when the browsers are different it can be a problem on a monolithic OS!
Mozilla has containers that people also barely use.
>On Qubes OS, you open those things in separate VMs, isolated with hardware, not software. It’s often better than physical (air-gap) isolation. Recommended by Snowden.
>Are you tired of remembering tens of complicated passwords, or using a password manager? On Qubes OS, you can save all your passwords as plain text (in a dedicated offline VM) and copy them into the necessary fields (in other VMs) whenever needed.
Password managers are fine, why would someone tire of them?
I don't think you are pitching it correctly. If you want to succeed at selling Qubes, remember that most people use laptops; the amount of RAM required to run Qubes and the battery drain are not worth the performance hit. It would be much better to sell it as an OS on a remote computer that uses Xen. There are big problems like audio quality and slow loading, and I would never recommend it as a mobile device OS. Most of the benefits can be had with less resource-intensive methods, and learning a new OS for these features, with mainly drawbacks, is not a good pitch. I think it would be great on a remote device with your computer as a thin client, but it has very little day-to-day practical use.
> Mozilla has containers that people also barely use.
Does this mean that nobody needs security or privacy? It just means that it's too hard to use (not transparent to users) and probably that most users do not recognize the dangers.
> Password managers are fine, why would someone tire of them?
Well, at least for me a plain text file looks quite a bit easier to manage. Thank you for the feedback, I will try to improve that.
The performance hit mostly comes from the lack of GPU acceleration. RAM is getting cheaper and more available with time. My laptop has 32 GB. Battery drain could be worth the added security and organization of the workflows. It is for me. I did not notice any problems with the audio quality.
> It would be much better to sell it as an OS on a remote computer that uses Xen
I am not sure what you mean here. Could you elaborate the use case? By giving the remote access to dom0 you are practically breaking the whole security model of Qubes. Although it is possible.
> Most of the benefits are done with less resource intense methods
It's a huge difference. When was the last time someone escaped VT-d virtualization?
> and learning a new OS for most of these features and mainly drawbacks is not a good pitch
Actually you do not need to learn anything serious unless you need some advanced things. Qubes relies on Linux VMs, and you just use all their apps and stuff. This could probably be another pitch point.
>According to your own link, these XSAs do not affect Qubes 4.0
Yes, it's been updated since, but the point is it's still linked to PCI, which you probably use.
>Does this mean that nobody needs security or privacy? It just means that it's too hard to use (not transparent to users) and probably that most users do not recognize the dangers.
It means that this is overkill, and even when it's easier to use it's still not being utilized.
>The performance hit mostly comes from the lack of GPU acceleration. RAM is getting cheaper and more available with time. My laptop has 32 GB. Battery drain could be worth the added security and organization of the workflows. It is for me. I did not notice any problems with the audio quality.
Needing to buy 32 GB of RAM to run an OS on a laptop that works fine now isn't a selling point, and I had tons of crackling.
>I am not sure what you mean here. Could you elaborate the use case? By giving the remote access to dom0 you are practically breaking the whole security model of Qubes. Although it is possible.
It would be better as a server OS you VNC into.
>It's a huge difference. When was the last time someone escaped VT-d virtualization?
In the real world, it doesn't matter; it's a nuclear bomb shelter, and I would rather build a house with a fancy kitchen for cheaper.
>Actually you do not need to learn anything serious unless you need some advanced things. Qubes relies on Linux VMs, and you just use all their apps and stuff. This could probably be another pitch point.
It isn't easy to get used to the isolation of all instances, especially if you use more than one computer, or other people's computers.
The pitch is that your computer is more secure (from what real-world threats?) at the cost of huge battery drain, going from 8 GB being comfortable to needing 32 GB, and being slower because PVH brings problems like no GPU acceleration, while it's still running insecure backdoors like Intel ME or AMD PSP, which are far more dangerous.
> while its still running insecure backdoors like Intel ME, or AMD PSP, which are far more dangerous
My computer has disabled and neutralized Intel ME.
Qubes PVH virtualization has no practical effect on performance. Qubes works great for me for everything that a non-sophisticated user would want, except games. RAM is cheap.
> (from what real world threats?)
Any serious privilege escalation, which happens every month on all other systems.
>My computer has disabled and neutralized Intel ME.
So why don't you make that the top priority before Qubes? Isn't it essential to make sure every user of Qubes does the same? Locking the screen door without locking the front door isn't at all secure. Do you expect people worried about an email to flash the BIOS with a clip connected to a Pi or Arduino they programmed first? It's a sales pitch that ignores bigger issues. Most people, if convinced by that pitch, will have a false sense of security when the real threat is an ever-present backdoor that can be hacked.
>Qubes PVH virtualization has no practical effect on performance. Qubes works great for me for everything that a non-sophisticated user would want, except games. RAM is cheap.
People like to play games, not everyone has removable RAM or multiple slots, and if it has no practical performance effect, what computer do you have? I bet your computer isn't the generic dual core that most people have. You said it has no GPU acceleration, which a lot of browsers use, so it will be much slower for most people.
>Any serious privilege escalation which happen every month on all other systems.
Again, what real-world threats? You say serious, but these threats are not serious; if they were, you wouldn't need to convince anyone to use Qubes. These are not real issues; BSD servers that don't update or reboot for years wouldn't exist if there were actually any serious threats. Windows has automatic updates, Linux has quick patching, OSX had a bunch of RCEs corrected and no hacks. You fail to name a single concrete threat. It's cool if you want to run a bunch of VMs, but on a laptop where you need 32 GB of RAM and it depletes battery life more, all for some vague "serious privilege escalation"? It's a hard sell; better to suggest it as a remote desktop that you can control with a thin client.
Yes, it would be ideal to have everything open and controllable. However you need to take into account the bitter reality and go step by step. Are you aware of any possibility of remote access with Intel ME? I'm not. See also: https://forum.qubes-os.org/t/intel-me-real-threat-for-ordina....
> Do you expect the people worried about an email to flash the BIOS with an clip connected to their Pi or Arduino they programmed first?
I did not do it myself and I don't expect that people will do it either. I bought my Librem 15 as it is, and recommend it to everyone. (It's not sold anymore; the Librem 14 replaced it.) See also recommended computers: https://forum.qubes-os.org/t/community-recommended-computers.
> I bet your computer isn't a generic dual core that most people have.
It's actually dual-core i7-6500U.
> People like to play games
Sure. These people unfortunately are not the target audience of Qubes, unless they are ready to do GPU passthrough (which has been shown to work).
> not everyone has removable ram
So what? Do you suggest giving up? People who are aware of the dangers of the Internet could choose their next machine to be compatible with Qubes, allowing more security and control.
> You said it has no GPU acceleration, which a lot of browsers use, so it will be much slower for most people.
Bloated websites are slow almost independently of what machine you have. User-friendly websites work flawlessly for me. YouTube works fine.
> but these threats are not serious, if they were, you wouldn't need to convince anyone to use qubes.
Are you implying that every person knows everything about their threats and makes perfectly logical decisions? This is not a game with complete information: https://en.wikipedia.org/wiki/Complete_information. People need security even if they do not realize it yet (until their data is leaked, which happens very often nowadays).
> BSD servers that don't update or reboot for years wouldn't exist if there were actually any serious threats
I don't see the logic here. There are millions of hacked servers in the world used for spam and DDoS attacks. Where do you think they come from? (hint: not just from IoT devices)
> Windows has automatic updates, Linux has quick patching
Before it is patched, you are vulnerable. It's called a "zero-day vulnerability". And you are typically not aware of it when it happens. Vulnerabilities in browsers are also numerous and frequent.
> better to suggest it as a remote desktop that you can control with a thin client
I don't get it. You are going to connect to a "secure" server from an insecure machine with full access. Do you expect that your server stays secure after that?
You also did not mention that Qubes defends you from simply broken software which you sometimes have to install, which could make your system unstable.
> but on a laptop that you need 32GB of ram, that depletes battery life more
Are you aware that a lot of people today are using a laptop as their desktop home computer? I do.
Also, note that I'm not trying to literally sell anything. I'm just a happy Qubes user and I think that more people deserve better security for their computing.
I see pitching as sales. Yes, I think Qubes OS is perfect for people who worry about privacy and opening emails, still want a PC over a tablet or phone, do not have Intel ME/AMD PSP (and will spend much more for lesser hardware to get a machine without them, or are willing to neutralize them themselves), do not play games, don't mind worse battery life, have an i7 with RAM expandable to 32 GB, do not install patches often, and are willing to isolate their programs in VMs.
>Are you aware that a lot of people today are using a laptop as their desktop home computer? I do.
False; most people are using mobile devices like phones and tablets as their main computer, and desktops and laptops have been declining for over a decade.
>I don't get it. You are going to connect to a "secure" server from an insecure machine with full access. Do you expect that your server stays secure after that?
Lots of ifs, like if I installed a hardware keylogger onto your computer.
>I don't get it. You are going to connect to a "secure" server from an insecure machine with full access. Do you expect that your server stays secure after that?
Yes, Xen and hardware virtualization keep it all safe. VMs like those on Qubes work the same way. Most laptops don't have good virtualization hardware, expandable RAM, or decent processors. Connecting to a server doesn't compromise it in any real-world scenario, the same way it wouldn't if I remote-accessed your laptop through mine; root SSH is not common, and you'll have it isolated in a VM anyway, won't you?
People have been saying this for the last 20 years and I've yet to build a linux workstation that isn't at least 20-30% more time consuming to configure, maintain and tweak than a Mac and sometimes a lot more time than that.
This seems off to me. I remember when I was using a Mac, if I got a new machine, I had at least a day of downtime. Everything is minor, but it totally adds up. You've got to run around to dozens of websites, download, and click through menus to install Docker/Homebrew/IntelliJ/etc. Then there are all the nanny systems you have to turn off the first time you hit them. Oh, gotta give the console access to the filesystem. Half the binaries you download aren't signed to Apple's liking, so you gotta work around each one of those, and how you work around it changes with every major version of MacOS. Hell, just dragging, holding and releasing the two dozen useless default apps the hell out of the Dock takes a solid minute. Did you sign in to iCloud yet, btw? Siri would love to be enabled too, if you have a moment. Your home folder isn't in the Finder, just a bunch of iCloud stuff and the Desktop. Do you remember how to option-click in the magic spot to find it and make a shortcut? It's death by a million cuts.
That said, I'm not gonna pretend that Linux is some utopia and everything works immediately. But there's no way it's worse. I have a script that takes any machine from a formatted drive to a working OS exactly how I need it, unattended, in about 20 minutes. That's extreme, but everyone at least has a list of the packages they need that they can copy paste install into the terminal as soon as the installer is done.
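A minimal sketch of that copy-paste bootstrap, assuming a Debian/Ubuntu-family distro and a hypothetical packages.txt and dotfiles repo (other distros would swap in dnf or pacman):

    # packages.txt: one package name per line, kept alongside the dotfiles
    sudo apt update
    xargs -a packages.txt sudo apt install -y
    # then pull in personal config
    git clone https://example.org/me/dotfiles.git ~/.dotfiles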
MacOS is a lot easier if you restore from a Time Machine backup. In four hours I can buy a new machine and be logged in and working like nothing happened. Done that plenty of times.
I think the biggest thing I can point to is how well Apple products work for the average person. I buy new ones every 2~3 years and keep N-1 as a spare. N-2 goes to someone who has asked me too many PC support questions.
Everyone I've given an apple laptop has eventually bought a new one. They bitched about the price, but they were ultimately willing to pay for it. My dad worked for IBM for many years and was not happy I "wasted money" on my first mac in college. Ten years later after him still saying that, I gave him a laptop. That was a few years ago and last month he bought a new M1 Air. Meanwhile, a number of the people he works with have converted to Apple laptops because they've seen how well his works.
I am on my 4th MacBook in 7 years; I have not had to install any program twice and all my settings so far have carried over between devices, with the sole exception of security settings. Maybe I got absurdly lucky, but I have never had so few issues when switching devices, thanks to Migration Assistant.
I do think that's where you're hurting yourself. I've been using Macs since before OS X came out, and in the OS X era I have never had long downtime when switching to a new machine. In the old days before Migration Assistant, you'd just clone the drive. These days you use Migration Assistant. I do understand the "dirty" sentiment, but in reality I've been upgrading via cloning or Migration Assistant for close to 20 years now with never a clean install, and it's worked well.
Absolutely not hurting myself. I love Linux. :D I don't think I'll ever switch back. I actually understand how every aspect of my machine works, and this feeling of control is absolutely addictive.
I kinda lost the thread at this point, but I was just trying to make the point the Macs aren't magically usable out of the box, at least for power users.
> Absolutely not hurting myself. I love Linux. :D I don't think I'll ever switch back. I actually understand how every aspect of my machine works, and this feeling of control is absolutely addictive.
This is exactly the reason for the feeling of empowerment I got when I moved to FreeBSD/KDE after Mac. I didn't realise just how closed down macOS had gotten over time. This was the number one reason for my move but the extent of it still surprised me. Everything I want differently now is just easy, documented and usually even has a GUI button ready for the clicking. Whereas in macOS it's just "this is how it works, get used to it" now.
Apple is working so hard to lock down the underpinnings of the OS now, and even if you can still change things to work your way, you have to jump through so many hoops it becomes really annoying. On top of that, whatever option you used to get there is bound to get changed or removed without notice.
I just realized I'm not happy just going along with other people's choices. I hate 'going with the flow'. I recognise many of the specific annoyances you mentioned in your post higher up in the thread.
You can copy your home directory on macOS, too, I do it every time I get a new machine and it works fine. I just `cp -a` my home directory onto the external drive and do the same back off. I do have to chown -R the directory sometimes, if the UID is different for some reason. Obviously you need to reinstall the applications, but that is mostly just dragging them into /Applications. Not too much different from `apt-get install`ing them, really.
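Roughly that workflow as a sketch, with placeholder paths and assuming the external drive is mounted at /Volumes/Backup:

    # on the old machine: copy the home directory, preserving permissions
    cp -a ~/ /Volumes/Backup/olduser/
    # on the new machine: copy it back
    cp -a /Volumes/Backup/olduser/ ~/
    # fix ownership if the new account ended up with a different UID
    sudo chown -R "$(whoami)" ~/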
I have dozens of GUI programs—commercial and open-source—and even more command line programs, and I'm like 99% sure the only one I installed in any way other than `brew install [name]` was Homebrew itself. Oh, wait, not quite true: Apple's software (Numbers, Pages, Keynote, etc.) I manage through the App Store. That's it.
The fact that Brew's package selection is one of the most complete that exists (I think Arch's and Gentoo's get fairly close, maybe with a few not-enabled-by-default ones added in?), is one of the main reasons I like it so much.
Depends on what you’re used to and what your needs are. I can have a Mac in reasonably usable shape in 30-45m, whereas getting Linux configured to my liking (especially under KDE) quickly turns into a multi-hour affair and ends in an unsatisfying state because of all the little things that can’t be the way I like unless I start digging into source code or writing my own desktop bits.
> Depends on what you’re used to and what your needs are. I can have a Mac in reasonably usable shape in 30-45m, whereas getting Linux configured to my liking (especially under KDE) quickly turns into a multi-hour affair and ends in an unsatisfying state because of all the little things that can’t be the way I like unless I start digging into source code or writing my own desktop bits.
That only works well if macOS is in fact exactly how you like things. Because it's a lot less configurable than KDE and the whole source code thing is not an option at all :)
I agree, if you want a Mac then don't try to make Linux like one because you'll never get close enough.
I mean, I've had two Macs so far and I've reset the first one a handful of times; it never took me more than about 1-2 hours to get up and running with Homebrew (a package manager for Mac). Although I will agree the Mac installer is highly unintuitive.
> People have been saying this for the last 20 years and I've yet to build a linux workstation that isn't at least 20-30% more time consuming to configure, maintain and tweak than a Mac and sometimes a lot more time than that.
That's a high bar to clear. One thing Apple does that makes a lot of sense is that they own the hardware and the software. Hackintoshes are a good indication of what things would look like if they had to support more hardware than they do.
This is one of the reasons why I have recommended Macs for the less tech-savvy members of my family. Just works, great manufacturer support (no one really comes close), and has things like Time Machine to keep backups running. My "support calls" dropped drastically.
My mother had a Linux netbook (remember those?) for banking purposes but a Mac for general usage. She would be able to browse on Linux, but things would get messy the moment she wanted to, say, use her Cricut machine.
Let's compare with Windows? That truly takes a whole lot of time, especially because the OS seems to "rot" over time. It's become less of an issue (possibly also because our hardware is so powerful). My Windows desktop has taken way more work to maintain than my Linux machines (plural) plus Macs. I have to keep it for VR, Fusion 360 (going to ditch that as soon as I can) and the odd game that doesn't work properly (Sea of Thieves in-game chat?). Can't wait to get rid of it and use only Linux, which takes hardly any "maintenance" at all (other than installing updates, and even that is just a click).
>That's a high bar to clear. One thing that Apple does that makes a lot of sense is that they own the hardware and the software.
It's an ideological bar though. Pick a line of laptop models and officially support that specific line. Keep the list smallish and from popular manufacturers. Everything else deals with it the way we do now.
Linux on supported hardware is pretty low maintenance. But it’s still not quite as nice as MacOS. And god forbid you get the latest hardware.
For some that's a feature, not a bug. I really wish there was at least a CLI you could use to tweak certain behaviors instead of a mix of nvram and System Preferences.
You can't really configure your car after you bought it either. Or your blender or your hair dryer.
Not everything in the world needs to be configurable to be good. Yes, when moving from Windows to Mac, people miss stuff that could be configured. When moving from Mac to Windows, from Mac to Linux etc, it is always the same story.
But people often miss the point that being configurable does not necessarily mean making it better. Part of configuration is to allow people to work differently, and that is good. But configuring something to work like the competition (making a Mac work like Windows, or a Mac work like Linux) is beyond what configuration should be able to do anyway. Otherwise you end up with no consistency and a bad compromise.
Even with Macs trying to offer a consistent experience, Mac apps can often behave differently. Back in the day when I used Linux, I felt like with many apps I needed to learn how they dealt with the most basic things: menu locations, window management, preferences/settings/configuration, etc. Too much configuration is not healthy all the time. And the Linux culture of "configure all the things" might be hurting Linux adoption itself.
The problem with no configuration is that someone else (the vendor) configures it for you, and they don't necessarily have the same interests.
Consider Microsoft completely destroying usability in Windows 8 just because they wanted to sell more tablets, or Apple trying to lock you ever deeper into their walled garden.
It depends on which OSX version you have for both. I don't know if your hardware supports the latest one, but you might need the correct cable; HDMI doesn't do it for me, and if your Intel GPU can't, it won't. Not an OSX issue.
Does it make a big difference? I heard 60hz freesync would make a bigger difference than high refresh, it looked nice when I had 144hz but aside from me sliding my mouse it didn’t really change anything.
From someone who’s used both macOS and GNOME a good deal, GNOME to me feels a lot more like iPadOS if it had been mildly adjusted to run on desktops than it does macOS. There’s several aspects of GNOME and its app suite that are more inflexible than macOS and its apps.
I completely agree. If someone uses GNOME and says OSX isn't configuration-friendly, the critique is hypocritical enough to be dismissed. If you say that and use almost any other DE (I love KDE), sure. Someone saying the overcooked steak is disgusting while eating a burnt piece of toast is hard to take seriously. They are constantly breaking addons and plugins; it's like they don't want any third-party support, not unlike the often-critiqued Apple desktop.
pop! is indeed really nice, with the possible exception that for some reason I have to manually recompile the nvidia drivers (and sometimes wifi drivers) with some arcane `dkms` commands every time the OS updates include a kernel update. After the second time this happened I just pasted all the commands into a text file for later reference, but it sure is annoying. Apparently not everyone has this problem though, so maybe my computer is just weird.
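For what it's worth, the kind of incantation that tends to end up in that text file looks roughly like this; the module name and version are placeholders for whatever dkms status reports on the machine in question:

    # see which modules dkms knows about and whether they built for the new kernel
    dkms status
    # rebuild a module (e.g. the nvidia one) against the freshly installed kernel
    sudo dkms install nvidia/470.86 -k "$(uname -r)"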
I guess it depends on what you're configuring. I'm using microk8s at work and the part of my team that has to deal with the VM-encapsulated version that runs on Mac or Windows is always mired in configuration woes. Same goes for docker.
Granted, that's a far cry from touchpad configs, but if you keep your dotfiles in git then touchpad config is something you only have to do once (per touchpad that you own).
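As a sketch of what that looks like in practice (assuming an X11 setup with the synaptics driver and a hypothetical ~/dotfiles layout), the touchpad options live in a tracked config file and a new machine just links it into place:

    # the touchpad snippet is versioned in the repo; installing it on a
    # fresh machine is one symlink
    sudo ln -s ~/dotfiles/xorg/70-synaptics.conf /etc/X11/xorg.conf.d/70-synaptics.conf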
Maybe you're just a passionate tweaker. The basic stuff I need for my workstation has to be configured on any OS, be it Windows, Linux or Mac. The more advanced stuff is mostly optional and I usually do it additionally on Linux just because I can. So I'd say this is a bit of an apples-to-pears comparison.
To get an OS? Sure. To get one set up to run my dev environment, games and media? I have to pull in a GitHub repo and run a script that installs and configures everything for me (before the usual tasks of logging in to a hundred websites). I swore Linux was super easy to set up for years, and as someone whose daily driver is MacOS now I still agree it's the easiest to set up to my needs, but all 3 main OSs can get you a system in a couple clicks/keystrokes. Windows is probably the most difficult to set up for me (MacOS is at least Unix, so my scripts for Linux mostly work on the Mac as well) and I only use it for my gaming PC (which was Ubuntu for years; I just decided to install Windows on it since I only use it to game anymore).
I love Linux; it's been my OS since Vista came out and I switched to SUSE Community Edition, but we get so used to our OSes that we forget part of the ease of use is being familiar. When a problem pops up on Ubuntu I can solve it within a minute. When one pops up on Mac it takes me some googling (and I give up on Windows). None of these systems are all that easy (or difficult) to use; it's just a few basic concepts and then a ton of time figuring stuff out.
> From what I see the proprietary operating systems are in decline now
It is a very slow decline, but it has largely gone unnoticed. Rewind the clock 20 years: there were all sorts of paid software offerings everywhere on the desktop. You couldn't uncompress a zip without paying (or stealing from) someone. Unless you were a Linux rebel running Slackware or whatever.
We can do a lot on Linux now. We have thousands of Steam games running on the Proton+Wine+Vulkan combination (often with similar performance, in some cases even better). Most of the missing apps are available in the browser - which is a blessing and a curse.
Everything is moving to open source software. The issue now is different: we have commercial offerings on top of open source, closing their gardens. Take Android. Ridiculous amount of open source software, arguably the most deployed software worldwide. Controlled by a corporation.
Lots of apps have moved to the web, where we have a similar problem. Their foundations are most often based on open-source. But they are behind some corporation or another.
Heck, the big cloud providers all have in their core a whole plethora of open source software.
The operating system concept, as we know it, is in decline. This may be a global maximum for FOSS OSes. I hope it isn't.
> "Rewind the clock 20 years, there were all sorts of paid software offerings everywhere on the desktop. You couldn't uncompress a zip without paying (or stealing from) someone."
Windows XP had built in zip support and was released 20 years and 4 months ago. Before that, from memory pkunzip was free and it was zip which needed payment, but wikipedia links to a review of an early version from a BBS here http://cd.textfiles.com/rbbsv3n1/pool/pkpolicy.zip which states that it was free for noncommercial use, shareware license for commercial use, and that Phil Katz had helped people implement unzip (on deflate algorithm I think) in their own code without charging for his help.
You move onto talking about Steam (not open source) which is used as a DRM and payment engine for many closed source games - and that is one of the more popular uses of closed source software (as well as things like Microsoft Office), XBox, Playstation, Nintendo and PC gaming. Having proprietary Steam running a proprietary game after you login to Valve's proprietary online authentication, on a free reimplementation of proprietary Win32, but hooray because there's Linux somewhere inside, doesn't seem very close to the future Stallman was hoping for or the ideals of free software or open source. What would it mean to change the source of an online game? Probably that you can't connect to any servers anymore. What would it mean to move your FIFA team to another game engine?
The GDPR at least gives Europeans some rights to download their data from cloud services in a machine readable format, but strangely you don't seem to have that right to your data in a proprietary game stored on your local machine with access gated through a proprietary online service.
>> "Rewind the clock 20 years, there were all sorts of paid software offerings everywhere on the desktop. You couldn't uncompress a zip without paying (or stealing from) someone."
> Windows XP had built in zip support and was released 20 years and 4 months ago. Before that, from memory pkunzip was free and it was zip which needed payment, but wikipedia links to a review of an early version from a BBS here http://cd.textfiles.com/rbbsv3n1/pool/pkpolicy.zip which states that it was free for noncommercial use, shareware license for commercial use, and that Phil Katz had helped people implement unzip (on deflate algorithm I think) in their own code without charging for his help.
Free as in beer. Not FLOSS.
> You move onto talking about Steam (not open source) which is used as a DRM and payment engine for many closed source games - and that is one of the more popular uses of closed source software (as well as things like Microsoft Office), XBox, Playstation, Nintendo and PC gaming. Having proprietary Steam running a proprietary game after you login to Valve's proprietary online authentication, on a free reimplementation of proprietary Win32, but hooray because there's Linux somewhere inside, doesn't seem very close to the future Stallman was hoping for or the ideals of free software or open source. What would it mean to change the source of an online game? Probably that you can't connect to any servers anymore. What would it mean to move your FIFA team to another game engine?
For the vast majority of users the OS is just a way to get to the web.
The web (browser) is the OS, but the apps are centralized, incompatible, and primarily SaaS subscriptions (or worse, ad based). Desktop operating systems are primarily thin clients to these services. Native apps on the existing stack are dead outside of a few niche applications. It's why I think Urbit is cool, if you built an OS from first principles to take into account the web what would it look like? It's basically moving the API layer up the stack to include auth and application distribution.
Is Android or iOS proprietary? The desktop is declining; its market share is so bad that Windows literally built an Android emulator and a Linux subsystem. I'd say it's less about the desktop, and all the computing is mobile now.
It would be better to move away from all OS and just have ISA software.
Android is the only "open source" one out of the bunch, but most devices running it have plenty of closed-source internals the devices are useless without.
There's still no viable FOSS OS for mobile/tablets. Not that they don't exist, just that they don't have the featureset and development velocity of Android and iOS.
I don't think there will ever be a FOSS OS for consumers that is the preferred option.
There are too many moving targets to ever be "done", and chasing those customers is profitable and requires money. That's the perfect domain of a business.
I'm OK with this too. As another comment mentioned, once upon a time you couldn't unzip a file without paying. Now it's standardized and free (as in freedom and beer). Let Apple and Google and Microsoft fight and spend money tweaking the GUI just right and bundling in whatever crap they want. The FOSS community will lag behind and pick up the tail, freeing everyone not on the bleeding edge.
Imagine if things like unzipping weren't free in 2021. How much shittier phones and services would be. Hell, even niche projects like the Oculus Quest would be a mess without basic tools being ubiquitous.
Unlike some FOSS supporters, I’m ok with commercial interests in software. 2021 might not be free, but it’s still in the making - if 2020 is free, everyone can help build the best 2021 from a more level playing field when we can stand on the free shoulders of giants.
Ps: companies should donate more to open source software development and protection and maintenance.
At least the OSX kernel is open source and easy to port to, Saurik and the early jailbreakers got apt and all the CLI tools on the first gen iPhone.
>There's still no viable FOSS OS for mobile/tablets. Not that they don't exist, just that they don't have the featureset and development velocity of Android and iOS.
Tell me what I can't do with Android that Linux can do; I can do the reverse and list what I can do on Android that Linux on mobile can't. It's there if you want it. All the new Android phones are getting mainline. Check out my thread: https://news.ycombinator.com/item?id=29373106
I know it's changed recently, but for quite awhile the answer to "what can I do on Android that Linux on mobile can't" was "make a phone call."
That class of product issue is why FOSS is different than the marketplace. It's often behind in creature comforts/UI/UX and lightyears beyond in technical capability. Because FOSS developers spend time on things they care about, not about what the market does.
I prefer when FOSS piggybacks onto larger projects. F-Droid is amazing; I have all the comforts of real Android plus the nerdy stuff from FOSS and root addons. I get the best of both worlds: great hardware for the price, great support from a good UI, and great support from the CLI ports and the nerds who also like to have their Android phones be real Linux computers.
Does that matter? Android could switch to Fuchsia and there would barely be a way to tell.
Any server you contact can run whatever it pleases and you'll be none the wiser (unless you get a web server error or go spelunking).
The kernel is a very unimportant part of it. So while Linux with a capital-L has "won", the free/open ethos surrounding it certainly hasn't - to my chagrin.
These are hypotheticals that are fantasies, will it matter if a meteor hits the earth?
>So while Linux with a capital-L has "won", the free/open ethos surrounding it certainly hasn't - to my chagrin.
Who do you think develops and sponsors Linux? I am excited for Android using a mainline kernel; I have been using it as a mobile Linux computer for years. It's easy to open. I would prefer boneless chicken, but chicken is fine too.
I would say that yes, in the long term. GNU/Linux on a phone does not have planned obsolescence and allows running desktop apps and connecting it to a screen and keyboard for a desktop mode. I'm daily-driving such a phone (PinePhone). It's rough at the moment but getting good enough quickly.
It definitely closed many gaps. Most servers, supercomputers, embedded devices, routers, smartphones, smart tv's... depend on free software one way or another.
A few software markets have been completely dominated by FLOSS: system software, compilers, lexers, shells, kernels, codecs, programming languages, arduino-like tools...
On the professional market, just look at what Blender became. Other examples for end users: OBS is probably the most used broadcast software available. People ignore that most people don't need the features Sound Forge has; Audacity probably has many more users. Other examples in this same category: VLC, Handbrake, Inkscape, 7-zip, Calibre, Krita...
Even in the places where it is not the leader, it is sometimes good enough. I can easily edit a video using Kdenlive; record, edit and master music using Ardour; compose a scene using Natron; compose music using MuseScore; edit 3D models using Wings3D; design an environment using SweetHome3D; and render using many of the top-notch, state-of-the-art FLOSS renderers like LuxRender or whatever comes with Blender these days.
Desktop software is much more beautiful, intuitive and stable than before. And it doesn't try to milk you for money, attention or personal data. Flatpaks, AppImages and snaps finally make it possible for users of the most popular distros to use the same software, the same version, working the same way regardless of the distro.
Now, go back a few years. The situation was entirely different. You could do nothing of what was described using only FLOSS if you go back enough. Or, you could, but it would be complicated and unstable. This is no longer the case.
Of course there are still gaps, but many have been closed over the years and most remaining ones are slowly closing.
> A few software markets have been completely dominated by FLOSS: system software, compilers, lexers, shells, kernels, codecs, programming languages, arduino-like tools...
While not discounting how good coreutils are, everything you've listed is basically a commodity at this point.
Companies who live higher in the stack swoop in and claim all of the value.
That's not a bad thing. Proprietary software tends to lead by implementing new features first, and then FOSS alternatives catch up over time. As the software category matures and innovation slows down, the FOSS solutions become strong enough to overtake the proprietary solutions in popularity. At that point, the software category is commoditized.
It's a gradual release of intellectual property into the commons, similar to the expiration of patents and copyrights, but at a pace determined by market forces instead of government regulation.
I would absolutely disagree! 20 years ago, Linux required some arcane incantations, but was sold in stores, and was somewhat useful. 10 years ago, Linux required patiently going through lots of forums to find out why sound didn't work, and tweaking some config files - though fewer than before. This year, Linux was featured on LTT as "Can we viably switch / use Linux for home use as regular geeks?"
You're right that "Good Enough" takes many forms, and it is a moving target. But whether it's closing the gap might just be a difference in which timescale one uses.
Sure, you may be able to download an ISO or USB image and relatively easily install Linux today without worrying about what sound or graphics driver you need like you could with other OSes 20 years ago. But that doesn't mean the competition didn't move forward as well.
Today I can wipe the entire drive on my MacBook and re-install everything from scratch over the internet, without install media, directly from the firmware. When can I buy an off-the-shelf computer that can do this with Linux ?
After installing macOS I set up my account, enter my iCloud details and all my photos just appear. I can send and receive text messages, phone and video calls. My clipboard syncs to my phone, and I use my phone to take a photo or scan a document and insert it directly into whatever document I was typing. I can start typing an e-mail on my phone, decide it's too long to type on the little on-screen keyboard and seamlessly move the task to my desktop. I can take a photo of some text on my phone, copy it as plain text, and then paste the text into anything on my desktop.
All of this with no setup required.
How much time will it take Linux to be able to do all of this ? And what else will it have to catch up to by then ?
Hardly a dent has been made on the desktop. I still see users running either Windows or macOS.
ChromeOS is possibly the 'closest' to this (and that is proprietary), but Google is going to replace it with Fuchsia anyway; meaning that we will soon be back to square -1 on the Linux desktop.
I think it's more likely Google's new Fuchsia OS will displace Linux for most desktop use (and maybe more than just that) than that Linux will get its shit together and ship a GUI layer good and cohesive enough to compete. And I write that as someone who used Linux desktops heavily (and, for long stretches, exclusively) for about a decade.
It's, what, BSD licensed? Open source, but not GPL-alike Free Software. Still, that's pretty decent.
I reckon the only way that doesn't happen is if they pull the plug (quite possible) or they shift to keeping the GUI layer proprietary (also quite possible).
FreeBSD, NetBSD, and OpenBSD all include instructions for obtaining nonfree programs in their ports system. In addition, their kernels include nonfree firmware blobs.
There is just one: “GPL”. I’m not aware of any other viral licenses. But I suspect you’ve meant copyleft instead, which is something different from virality - there are plenty of copyleft licenses, eg MPL, which are not viral.
Maybe that's what I was looking for. FFS, way back in the mists of time I'm pretty sure I had a hat that read "COPYLEFT". You'd think I'd be able to get it correct.
I think that for it to really take off, someone needs to release a laptop where Linux is truly supported over Windows, and they maintain, or help maintain, a distro that runs perfectly on that hardware. That's the only way you're going to avoid the driver issues that will ruin the experience for typical users.
I know there are some, like System76 but looking at their site quickly, they start at around $1,000. For the average user that's waaay too high, especially if they can get what they need out of a chromebook.
> like System76 but looking at their site quickly, they start at around $1,000
Maybe it really does cost that much to produce and support well matched HW and SW. Apple machines are kind of expensive, Windows licenses aren't that expensive but MS sells a ton of them.
They have them for Dell, and Lenovo Thinkpads. The main issue is there is no benefit for the end user. Why would they want to switch to linux as an end user?
Last time around, people just installed Windows on their netbooks, even when keeping Linux was cheaper. The people who think the Steam Deck will make Linux mainstream are going to just see the next Deck come with Windows, and everyone who gets a Deck installing Windows on it to play games.
They might, but they'll be getting Linux for the wrong reasons. And will end up dissatisfied because they'll be expecting Windows only free. Which will hurt adoption more.
Wifi is the worst. I had a wifi driver fail on me the other day after the updates included a newer kernel version that didn't work with the current wifi driver. Newer ones were available on github, but downloading requires wifi. Eventually I managed to make a hotspot out of my phone and connect to github via bluetooth.
The reasons for this are not technical though. Modern WiFi cards don't have firmware in ROM or flash but in RAM (loaded every time you turn it on) and due to the closed source nature they can't be included with the OS.
"Due to the closed nature of XYZ" is one of the stupidest reasons ever for breaking the computer of your users. They could have an automatic check for incompatible drivers and cancel the kernel upgrade, or even provide a checkbox somewhere in the settings where users can click "I don't care if my drivers are closed source or not" that would automatically also update any closed source drivers. I personally derive some perverse pleasure from learning obscure Linux kernel stuff, but I also 100% understand if some cook or a high school French teacher would throw out Linux forever and go back to Windows or MacOS the first time this happens.
(Also, I just double checked and the drivers in question are licensed under GPLv2 so the licensing is unlikely to be the issue)
The drivers are usually freely licensed, but the firmware blob often is not.
But yeah this depends on the distro. Ubuntu is very liberal with this, it simply has a tickbox to enable non-open parts. Others like Debian are stricter. It's up to them really.
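As a concrete example of that difference, on Debian the manual step looks roughly like this (assuming an Intel Wi-Fi card; the firmware package name depends on the hardware, and on Ubuntu most of this arrives via the linux-firmware package or the installer checkbox instead):

    # after enabling the non-free (or non-free-firmware) component in sources.list
    sudo apt update
    sudo apt install firmware-iwlwifi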
I have had terrible luck with suspend/hibernation on all of the laptops I have used Linux on, so I just resort to shutting down and starting up... I have SSDs so it is not that big of a deal, but still. I cannot wait for the day I can simply close my laptop lid and have the battery not drain ~12% an hour.
Which laptop do you have? I'd like to steer clear of it and its manufacturer when I have to buy a new one. My nearly 8-year-old ZBook from HP loses maybe 3% per hour with 32 GB of RAM to keep powered. It was much less with 16 GB. You don't have a 128 GB laptop, right?
More seriously, did you check if people have the same problem with that laptop and Windows? Or some firmware update from the manufacturer to fix the battery.
My Ubuntu laptop is just fine apart from every now and then randomly pausing and not accepting user input for a minute or so at max CPU. Seems to be maybe Chrome related, so that's maybe the next thing for me to try to migrate off after Windows.
Quite difficult to debug as system is unresponsive when it happens, so would have to delve into logs. Maybe a Chrome memory management or graphics acceleration issue, I don't use many extensions but it's possible I guess. 12GB of RAM, but that can get eaten up by Chrome pretty easily.
Linux, macOS, and Windows have all been good enough for a long, long time now. If you need to run some application that only runs or (or runs best on) some particular operating system, then that's probably what you buy.
Until haptic touchpads become available outside of MacBooks, touchpads are always going to suck under Windows and Linux. Gestures aren’t the main thing preventing me from enjoying the touchpad on my Dell — it’s plenty big and it supports four finger gestures under Windows. The problem is that the entire thing shifts down a couple millimeters when I click and my finger ends up dragging slightly, which moves my click off what I’m clicking on.
I don’t understand how Apple has had a monopoly on this for half a decade. Lenovo had a thinkpad come out last year with a haptic touchpad but I haven’t seen anything further. Is it patents?
This was a really surprising thing to read. For me the Apple's haptic touchpads were a slight downgrade from their excellent non-haptic touchpads in <=2012 MacBook Pros. I liked the stiff clicky feel of the "real" touchpads over the haptic simulated click. Though the haptic touchpads do seem to be more reliable and are better at registering clicks near the top of the pad.
I didn't realize how much I click at the top without thinking about it until I helped my parents with their older Air. The haptic feel is maybe a touch less realistic but not having to move my finger down or apply half my body weight to register a click more than makes up for it in my book.
I had a coworker who worked on the haptic trackpad for 4 years before he left Apple and they shipped it a year after that (so at least 6 years or so of R&D), I'm sure there are dozens of patents around the technology and probably a high barrier for entry for other companies.
He griped that they didn't even send him the 12" MacBook it initially shipped in and that the R&D for it was ring fenced from the Mac hardware teams so both were working in isolation until the product was developed with the tech.
Your experience is exactly the reason I buy laptops with 3 physical buttons and a touchpad that doesn't move. Then I disable tap-to-click. Nothing moves around and it's very clear which button I click: left, middle or right.
I also don't do any gestures except, of course, vertical and horizontal scrolling. Maybe pinch-to-zoom would be useful like on my phone. I've got hotkeys for everything else I care about. Anyway, I welcome this project; it improves Linux.
> Until haptic touchpads become available outside of MacBooks, touchpads are always going to suck under Windows and Linux.
Incorrect - though I'm sympathetic. The ideal touchpad is a click-less touchpad with physical left / right buttons on the bottom left and right of the touchpad. No need for a physical hinge that requires the full weight of your arm to click, like an old Macbook. No need to push down on the touchpad at all and go to the expense of getting haptic feedback either. Just provide buttons!
(I do agree though, that so long as manufacturers are not going to provide physical buttons, haptic feedback would be a significant improvement.)
Nice. I recently replaced my MacBook Pro with a cheap Samsung laptop (emergency replacement; the Mac died and the new ones are currently unobtainable).
I installed Manjaro on it and it was a relatively OK experience getting that going. I had some issues with the sound that took some "maybe this will work" style copy paste of all sorts of magical cli incantations to get working. But I kind of knew what I was getting into so I'm not disappointed by that. The important thing is that I have a functioning laptop with all my developer toys running.
However, the touchpad support is so awful that I ordered a mouse. I never needed one of those with any MacBook I've ever owned. It seems it's impossible to configure the touchpad in a sane way without ending up with piles of custom scripts. E.g. the scrolling speed is way off, and I constantly have my single clicks interpreted as middle clicks, which does such fun things as closing the browser tab instead of opening it. Simple tasks such as selecting text are made hard because the mechanical pressure needed for the clicks tends to move the cursor enough that you basically mis-click. I briefly used the touchpad under Windows before I wiped the disk, so I know the same touchpad can behave a lot more reasonably given better software.
So, any improvements in this area are very welcome!
Cheap laptops have generally crappy touchpads. I have one (on a Lenovo ThinkBook) that has 5 finger multi-touch and it works completely fine with Gnome 40+ gestures. The only thing bothering me is a lack of consistent inertial two-finger scrolling. Never had left clicks misinterpreted as middle clicks.
Two-finger scrolling kind of works, though it's set to some ridiculous speed. If I click slightly sideways with my finger, it's interpreted as a two-finger click.
It may be that single click in the bottom center of your touch pad is interpreted as middle click. I have a touch pad that does this, but it's a feature to me, so I never looked into how to disable it.
If the mechanical pressure necessary to click is too much, that's a hardware issue, so not much you can do there. I worked around it by enabling tap-to-click, which works fine for me, but I tend to keep my palms away from the track pad while typing.
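If the desktop in question is GNOME (the common case with libinput), that tap-to-click workaround is a single setting; a sketch assuming the org.gnome.desktop.peripherals.touchpad schema is present:

    gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click true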
Very interesting. I'm running PopOS on my Framework, running Wayland; the touchpad gestures are very good. But still not up to par with a MacBook. I'm extremely curious as to how this compares.
I think it’s both PopOS’s gestures as well as the hardware (the framework touchpad is quite good!) that has resulted in such a fantastic experience, at least compared to other laptop touchpads I’ve used. Definitely the closest to MacBook gestures I’ve experienced.
I’m working on getting an eGPU setup going so I can properly run X instead of Wayland, and am very excited to try this out.
(Please correct me if I am wrong, but my understanding is Wayland if you’re running integrated graphics, X if you have a GPU)
Wow, we are very similar! Running PopOS on a new framework laptop and loving it. Agree it is good but not up to par with MacBook. I set up linuxtouchpad.org if you're interested in dropping by the forum or chat channel.
It seems incredible to me that Linux, and even Windows, is still trying to figure out how to make a trackpad experience on par with what Apple offered 15+ years ago.
Well, they've had to support about 100x more devices than Apple does. Conversely, it's incredible that Apple still hasn't figured out how to get Vulkan running on their handful of supported hardware.
The difference here is that Apple doesn't want Vulkan rendering, because they have their own special graphics stack that they want everyone else to use instead. The Linux and Windows folks want good touchpad support and work is being done to improve it.
I must say that the touchpad support on Windows is pretty great if the vendor bothered to implement the right type of driver (the recent driver model, I think it's called "high precision" or something?) but many touchpad vendors don't do that and instead ship simple drivers based on their old code base and call it a day.
On the Linux side it's just a capacity problem that won't go away any time soon. This project seems to be the only group of people in the Linux community that cares enough to take the huge amount of time to re-write and re-design the software stack to accept better gestures, and even with their dedication it's been taking them a while (understandably so!).
I always found touchpads on Linux to be quite okay, mice on the other hand… For example, I find scrolling with mice in most browsers on Linux unbearably slow compared to other systems, and there’s no option to speed it up, and overall a lack of customization options when it comes to mice. I’m starting to wonder whether Linux developers don’t use mice, or if there’s something wrong with me or my setup, because I rarely hear about it. And as stupid as it sounds, being able to scroll properly is probably the #1 thing keeping me off Linux desktop right now.
I've never thought about this, but you're actually correct - normal scrolling really is quite slow. There are two reasons this has never bothered me though:
1. I always install Vimium in order to avoid using the mouse when browsing the web. (So in that regard your statement above is correct :-) )
2. For fast scrolling with the mouse, you could try enabling autoscroll in Firefox; once enabled middle click will activate it, then the page will scroll based on the direction and distance you move the mouse from the scroll anchor.
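For reference, that autoscroll toggle lives behind the general.autoScroll preference (also exposed as the "Use autoscrolling" checkbox in Firefox's settings); a sketch assuming a Linux profile directory, with <profile> as a placeholder:

    echo 'user_pref("general.autoScroll", true);' >> ~/.mozilla/firefox/<profile>/user.js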
A lot of apps don't interact well with Hyperscroll at all, and when it sends the "1 pixel" scroll event continuously every 10ms, they'll either keep scrolling for days after the wheel stops spinning or scroll like 1000 lines when you only moved it half a rotation.
$60/yr for a touchpad driver for every Linux user in the world is quite reasonable. Plus, you get to stop paying after it's done, and no one can take it away from you :)
I sponsor this project for $5/month. Started a few months ago. I'm happy to support! For Linux desktop to succeed with consumers like me, it has to move beyond being for mechanics only, which means we have to do our part and pay money for it.
I've been a sponsor of this project on Github for $5/month. That said, I'm more of a consumer rather than a developer. I know my trackpad on Linux seems to sux, and I can't see how anything gets better for consumers on Linux without us paying for work like this. I'm curious for feedback -- is this team making good progress, did you install the code, etc?
I'm left wondering -- isn't this a toolkit/library issue?
I am a bit afraid of seeing gestures handled differently in multiple programs, like inertial scrolling, or pinch-to-zoom speed. At least gestures are always detected by libinput AFAIK, so there's that.
But instead of implementing this in every application, wouldn't it be nicer to implement the common part in a library and link that library into every app that requires/wants touch handling? This would provide homogeneous behavior and would allow sharing configuration files.
Most applications will use either Gtk or Qt widget libraries, so a lot of similarity of how applications behave already exists.
I don't think it's possible to make Gtk and Qt themselves behave identically at this point. For example Qt is a commercial project and an effort that would break backwards compatibility is not worth it. Gtk on the other hand has strong opinions about how things should work, so it would be hard to change that too.
> Qt is a commercial project and an effort that would break backwards compatibility
Not sure how that would break backwards compatibility. Moreover, Qt has always tried to plug into native libraries and integrate as well as possible with the host platform.
You have a fair point about GTK. Any implementation should at least cater exactly to their need or they will make their own.
When running X on a previous Debian version, it was possible to edit the synaptics settings to prevent accidental touchpad touches while typing (along with a lot of other settings to make the user experience excellent).
Now running Wayland on Debian 11/Bullseye (with Gnome), there are only a few possible simple settings (that I can find), and so I have continual issues with window de-focussing due to accidentally touching the touchpad.
Is there anything that I can do now to fix this issue?
edit> The libinput webpage says this:
"How do I configure my device on Wayland?
See Where is the configuration stored? Use the configuration tool provided by your desktop environment (e.g. gnome-control-center) or direct access to your desktop environment’s configuration storage (e.g. gsettings)."
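For what it's worth, on GNOME a lot of touchpad options that gnome-control-center doesn't expose can still be set through gsettings. Assuming the schema keys haven't moved since I last looked, something like:

    gsettings set org.gnome.desktop.peripherals.touchpad disable-while-typing true
    gsettings set org.gnome.desktop.peripherals.touchpad tap-to-click false

disable-while-typing is the one that should stop the accidental-touch defocusing; turning off tap-to-click is a blunter fallback if that isn't enough.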
What I'm after: a solid Linux laptop with a MacBook-style centered keyboard and touchpad (System76 has an offset touchpad/keyboard and it drives me insane).
Near-perfect palm detection on the trackpad, like a MacBook.
Does anyone have a recommendation? I've not tried any newer lenovo lappies nor have I tried linux on a Razer laptop which looks very similar to a macbook.
I have a Thinkpad X1 Yoga Gen 6, and haven't had any issues with palm detection either, unless I rest it too hard and it registers a physical click (which maybe only happened once or twice).
It's 2-3 generations old at this point. Really wouldn't recommend getting something this old. Comet Lake -> Ice Lake -> Tiger Lake -> Alder Lake (now). The starlabs starbook is quite interesting if you care about coreboot.
Barrel plugs weaken over time because of strain, and if you pull on them in the wrong direction you're liable to destroy the connector inside the laptop. Apple's MagSafe or Lenovo's ThinkPad charger design can easily break away. Barrel adapters are also easy to confuse with each other.
Personally, I want a magnetic USB-C power delivery connector that doesn't suck.
Both the old and new MagSafe connectors, as well as magnetic USB Type-C cables, are the best in these kinds of situations; personally, I would only accept another kind of USB charger if I were totally sure that part of the laptop is easy to replace and doesn't require a soldering job.
It reminds me of a coworker, a frontend developer and Linux user. He asked his employer for a MacBook Pro, and after using it for a year he switched back to Linux. He said it was because he missed using Linux.
I use Linux primarily because of ideology, but luckily it's also the best OS out there, for me at least and many others.
For others Mac OS or Windows will be better for them, and that's fine.
I use both OSX and Linux extensively. I had the choice to go either way for my work computer and I chose OSX (intel chip).
A couple of days ago I wanted to use a Ruby gem ( https://github.com/rubyjs/therubyracer ) for some random project. To install the library (compile native bindings), OSX wanted me to download and install 12 GB of crap (full Xcode; it didn't work with just the command line tools)... On Linux it was just a matter of downloading and installing the gem (100 MB at most). That's crazy.
What I dislike more and more about OSX is how aggressive it has become against developers and technical people over the last few years (like, why do I have to jump through hoops to modify my /usr/lib folder with sudo/root? I AM ROOT, ASSHOLE OS, LET ME DO WHATEVER I WANT TO MY COMPUTER).
Their kernel at least is open source, and it's Unix. From a user standpoint I don't have any issues with it; it's as if Linux could run Photoshop natively and had good paid software, but you can't change the UI. It's not ideal, but it works for me.
But the rest of the system is not. I want a computer that I can use the way I want, with the software that I want, not a computer that must be used the way Apple wants and that limits how you use the hardware you bought.
The reason I use Linux is that I can do whatever I want on my computer: I don't have to have signed applications, or annoying prompts telling me that some software is not Apple-approved (for now it's just prompts, until Apple decides to forbid all unsigned software as on iOS), and so on.
Also, from a hardware standpoint, Macs are overpriced machines, with insufficient I/O that forces you to carry a bag of adapters, and with components that are all soldered to the motherboard and impossible to upgrade.
> But the rest of the system is not. I want a computer that I can use the way I want, with the software that I want, not a computer that must be used the way Apple wants and that limits how you use the hardware you bought.
You can't run a lot of software on Linux; you can run less of it than on Windows and OSX. All hardware is limited, all the x86 laptops have soldered CPUs, there is no such thing as unlimited hardware.
>Also, from a hardware standpoint, Macs are overpriced machines, with insufficient I/O that forces you to carry a bag of adapters, and with components that are all soldered to the motherboard and impossible to upgrade.
The M1 is not overpriced for its benefits; the x86 stuff I agree with, though, and fuck their keyboards. No more Jony Ive, so ports are back!
>The reason I use Linux is that I can do whatever I want on my computer: I don't have to have signed applications, or annoying prompts telling me that some software is not Apple-approved (for now it's just prompts, until Apple decides to forbid all unsigned software as on iOS), and so on.
The problem I have with this is that it's not just that you CAN do whatever you want: you MUST do it, and spend a lot of time on configuration. I find the root prompt just as annoying on Linux. I just want a working environment, not a bunch of software that asks me to set every preference.
I don't see the problem. I want to run better software that I have to pay for, not crappy free stuff, and having the choice to do that is not a negative. If Linux did not allow any paid software, that would not make it a better OS.
I have tried to replace Photoshop with free software forever, and nothing still comes close to CS2. I'm not going to make my life harder or use crappier tools just because the alternative is free. There is nothing free about crippling a workflow.
Well, except Docker. And all of your apps that are still x86 (and ARM support is WONTFIX), and then there's the 32-bit applications and the 32-bit libraries, and the plate-spinning clusterfsck that we call Homebrew, and your coreutils have to be manually upgraded or replaced altogether, and then you're probably going to want to find a replacement for Terminal.app, and while you're at it you may as well install a better window manager like Amethyst or Yabai. What's that? I can't disable the normal desktop interface and need to make do with 4gb of memory bloat at all times? That's alright I guess, I'll just disable system integrity protection and try to remo- hold on, you're telling me that there's a new update coming through? And none of my software is ready yet? That's unfortunate, it's already rebooting... and now we're in a boot loop. Siri, when was my last Time Machine backup? Siri? You there?
It may be hard to believe, but sometimes, for some people, just installing Linux is what they need.
No snark: I love OSX but as I get older I would like to support free as in beer and free as in speech computing devices.
I think there could be some real hope for Linux on a laptop, especially the M1, and being able to use all the Linux tools directly (looking at you, containers).
I've been feeling this, too. I made the switch to Linux from Mac this year, and I'm committed to improving the Linux experience where I can. "A rising tide lifts all boats". And we can live our values, too.
I mean, from a user standpoint I don’t have any issues. I would use the M1 until Linux on there gets stuff like GPU acceleration working; at least OS X is free as in beer and the kernel is FOSS, even though it’s BSD.
No, of course I can't quantify that. Not everything in life is quantifiable. But I can tell you from over 15 years of experience writing shell scripts across Linux and MacOS that the GNU coreutils are superior. Little things like 'find -name' without '.' working, to bigger ones like 'sed -i' and 'readlink -f'.
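To make that concrete, these are the kinds of differences I mean (from memory, and macOS keeps changing its bundled tools, so take the exact behaviour with a grain of salt):

    # GNU find assumes the current directory; BSD find wants it spelled out
    find -name '*.log'                # fine with GNU findutils
    find . -name '*.log'              # needed for BSD/macOS find

    # In-place sed: GNU takes an optional backup suffix, BSD requires an (empty) one
    sed -i 's/foo/bar/' file.txt      # GNU
    sed -i '' 's/foo/bar/' file.txt   # BSD/macOS

    # Canonicalising a path: readlink -f is a GNU-ism older macOS doesn't have
    readlink -f ./some/symlink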
I love Rust but uutils are really not mature enough, which is understandable given how new they are -- the linked thread has examples of where they fall short. I would be delighted to start using them once they become drop-in replacements for GNU coreutils (so existing shell scripts don't break). Meanwhile, I use fd and bat in interactive shells (but can't use them in shell scripts distributed to others obviously).
I wonder why nobody is mentioning two-finger scrolling (or inertial/kinetic scrolling).
Interesting fact: Ubuntu and Fedora have a fully working implementation of this (although scrolling is way too fast), while other distros don't. KDE distros don't have it either, even when KDE on Wayland is used.
Can anyone tell me why this works for EVERY app on Ubuntu and Fedora, whereas on other distros using GNOME 40 or 41 (e.g. Arch) it only works in SOME apps, or NONE...? Is there a patch for libinput or Mutter that is not in the official GNOME repos?
Nice! Thanks for the hard work, everyone involved! I have nothing to complain about -- things just work for me so far -- but it's great to see improvements in the pipe! :D
Looks like a great effort, but there are two things I don't understand:
1. Is this for X only? That's all I see mentioned on the page. If so, that seems like a pretty big waste; Wayland is definitely the way forward, and efforts like this will only delay that even further.
2. That said, I use Wayland on my XPS13 in Gnome, and touchpad gestures feel pretty great already, comparable to a MacBook in my opinion.
You need to wait up to a year for the distributions to pick up the code into their default installs. Installing bleeding edge window managers, widget toolkits and similar software yourself is more difficult than it's worth for a normal user.
Linux will NEVER be a mainstream desktop platform as long as end users have to answer questions like "am I 'under Wayland'?" or "what does 'under Wayland' mean?" or "why does the alternative to 'under Wayland' seem to be 'for Xorg' (whatever that is)?" or "how do I 'install the xf86-input-libinput package' and why does it have such a weird name?"
On a normal distro it's just part of the default install, not something you have to even know exists. Arch just doesn't have a "default install", it's "build your own" distro focused on power users that like to control things at that level.
Literally just read this exchange in this very thread. I wish you "linux is easy" folks could agree on something for once.
> > People have been saying this for the last 20 years and I've yet to build a linux workstation that isn't at least 20-30% more time consuming to configure, maintain and tweak than a Mac and sometimes a lot more time than that.
> Bootup archlinux image, type in archinstall, make sure to select gnome... It's that easy.
Eh, I don't even consider myself part of the "Linux is easy" group, but I wouldn't hold up the word of a fresh, negative-karma, single-comment account created after the conversation started as gospel validation that an entire group of people on HN is inconsistent.
It just happens that you don't need to do anything special to install this on any traditional distro that ships with a default GUI install option. Regardless of your views on the viability of Linux overall, these changes to libinput are really as zero-effort as it gets on any OS: just wait for the update to land.
This looks old, is it the same thing? It looks like it's only for Wayland too, uhhh fuck... I didn't know if it was merged already, or if I need to edit a config, or uhh, maybe it's a kernel flag? I love Linux; this sounds like normie stuff to the usual user, but it's probably very alien to anyone who has never used it.
Gestures are nice but what I love most about Apple trackpads is everything else. Accidental input avoidance, precision along with speed, registering a click anywhere on the surface…
The gesture I care about the most is smooth scrolling, which is a combination of OS graphics and input handling. Again, nothing smooth-scrolls like the Mac or iOS.
Yes! It’s the sum of all these little details that makes the Mac implementation amazing. When I started using macOS I took for granted how much work had gone into making its touchpad experience possible, because it Just Worked. Now that I’ve been nudging this Linux project forward for a couple years, I can appreciate how much long-term coordinated effort is truly required to make the touchpad experience become so smooth it’s forgettable. :)
What I really want to see is support for the `browser.gesture.swipe.*` options in Firefox. Being able to do a multi-finger swipe up to close the tab, and left or right to go forward and back, made browsing with just a trackpad very pleasant. It's the only real thing I miss from macOS on a work laptop.
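For reference, these are the about:config prefs I mean. If I remember right, the back/forward ones are prefilled on macOS, and on Linux they only do anything if the compositor actually delivers swipe gestures to Firefox. The close-tab mapping is just an example of what I'd set, and the exact command names may differ between Firefox versions:

    browser.gesture.swipe.left     Browser:BackOrBackDuplicate
    browser.gesture.swipe.right    Browser:ForwardOrForwardDuplicate
    browser.gesture.swipe.up       cmd_close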
Does Ubuntu sleep/wake reliably when the lid closes (maintaining runtime state), and avoid weird battery drain issues?
To me, that's the killer feature of macOS. I am usually connected w/ a mouse and keyboard, the touchpad is great on the go, but nothing really substitutes the ability to turn it off and on again without thinking twice.
My girlfriend uses a MacBook and likes it. She bought a cheap windows all-in-one desktop and simply couldn't use windows. I made the effort during one weekend to make it look and behave like MacOS as much as I could.
Appearance-wise it is very very close. I got some gnome extensions to mimic the behavior quite closely too. In terms of software, it is still missing MSO and some magic apple integration to their other products. Otherwise, she is now a happy linux user on the desktop.
There's more to macOS than the look of the desktop, notably the universal menu bar and the fact that full-screen mode opens in a new workspace. I find the latter a big productivity boost when working on an undocked laptop, as the gestures to switch between workspaces or between an application's windows are so well integrated.
As another data point, one of the worst things about OSX for me is its window management, especially that full-screen mode creates a new workspace. I switch workspaces with Command + a number, and that breaks it.
Yellow has always been minimize; did you mean green? The green button’s function used to be roughly “resize window to fit content,” which makes sense because Mac desktops have never been built around maximizing windows. But it depended on developers to tell the system what a program window’s ideal size was, and cross-platform devs never did, so they changed its primary function to full-screening the window and putting it in its own virtual desktop.
For me one killer feature is how MacOS treats every text field equally. I can use the Emacs style Ctrl-A/E/K/etc shortcut keys anywhere I enter text and it works[1]! To me it's deeply ironic that this is a feature in MacOS but not Linux (unless you count the terminal).
[1] With the exception of websites and SPAs that rebind these keys to something else. I really don't like sites that do this.
Universal menu is built into KDE and can be added to GNOME with minimal hassle (though the devs might try to shank you for it). Dynamic workspaces are also a pretty bog-standard feature for desktop environments, so if that's all that MacOS means to you then it might be worth taking another look.
I'm primarily an Ubuntu user, but there are certain design elements of macOS that I appreciate. It's a common theme on Reddit for users to post desktop environments with a macOS skin and declare there's no reason to use a Mac now -- but the workflow differences extend far beyond the look of an empty desktop with no windows open.
When working portably, macOS is my preferred OS.
I'm not sure about KDE, but GNOME's experience is severely degraded when outputting to an external monitor with different display scaling. Without Wayland there is horrendous screen tearing, but on Wayland PipeWire isn't trivial to install, which makes screen sharing while working from home problematic. Intel WiFi drivers are a hot mess. There are numerous other QoL differences.
While GNOME supports dynamic workspaces, the workflow is very keyboard-based, and without Wayland gestures aren't native (e.g. smooth scrolling between workspaces) -- AFAIK the only solution is to emulate keyboard shortcuts, which is quite jarring after a decade-plus of super-responsive smartphones -- but Wayland without PipeWire is problematic for screen sharing, etc.
KDE global menu is nice in theory but too many apps disregard it entirely which makes things awkward. Might be ok if you can restrict your app selection to Qt only but that’s very difficult to do for most of us.
I mean, the majority of apps I use on a regular basis support it. Chromium, Discord, Konsole, Dolphin, Mailspring, VS Code... the list goes on, and it's not just limited to Qt applications. If your only holdout as a Mac user is the global menu, I think you'll be perfectly content with what KDE offers.
Multiple finger gestures can sometimes mean more than just "trigger a key". For example, if you're drawing on a canvas and you expect pinch-to-zoom to zoom in and out as you're bringing your fingers together or pulling them apart, that could mean the app needs to integrate with new events at the UI library level. Or two-finger rotation of the canvas, as another example.
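To illustrate (a toy sketch, not tied to any particular toolkit): the difference is that the app receives a continuous stream of scale updates rather than a one-shot "zoom in" command, so the canvas can track the fingers. Real toolkits deliver roughly this shape of data (GTK's GestureZoom, Qt's QPinchGesture); the event type here is made up for the example.

    // Hypothetical toolkit events carrying a continuous pinch gesture.
    enum GestureEvent {
        PinchBegin,
        PinchUpdate { scale: f64 }, // scale relative to where the gesture began
        PinchEnd,
    }

    struct Canvas {
        zoom: f64,
        zoom_at_start: f64,
    }

    impl Canvas {
        fn handle(&mut self, ev: GestureEvent) {
            match ev {
                GestureEvent::PinchBegin => self.zoom_at_start = self.zoom,
                // Every intermediate update moves the zoom, so the content
                // follows the fingers instead of jumping in fixed steps.
                GestureEvent::PinchUpdate { scale } => {
                    self.zoom = (self.zoom_at_start * scale).clamp(0.1, 10.0);
                }
                GestureEvent::PinchEnd => {}
            }
        }
    }

    fn main() {
        let mut canvas = Canvas { zoom: 1.0, zoom_at_start: 1.0 };
        for ev in [
            GestureEvent::PinchBegin,
            GestureEvent::PinchUpdate { scale: 1.2 },
            GestureEvent::PinchUpdate { scale: 1.7 },
            GestureEvent::PinchEnd,
        ] {
            canvas.handle(ev);
        }
        println!("final zoom: {:.1}", canvas.zoom); // prints "final zoom: 1.7"
    }

A global daemon that only injects keystrokes can't express that kind of continuous feedback, which is why canvas-style apps need the events at the UI library level.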
Touchpads get in the way of using the keyboard reliably, which is your primary source of input.
They also provide inputs that are themselves unreliable and inaccurate. You're better off controlling the cursor with a trackpoint which also means you don't need to move your hands off the keyboard.
Ehm, I assume you’ve never used a MacBook for work? (Giving it a quick go or trying a friend’s MacBook doesn’t count.)
No, they don’t get in the way of using the keyboard, and no, they’re not unreliable or inaccurate.
I’ve used a MacBook keyboard + touchpad for almost a decade without an external mouse or keyboard, on purpose, and I do 120 wpm on a 2017 MacBook Pro with a 1st-generation butterfly keyboard without the touchpad getting in my way.
I do have a MacBook Pro from circa 2010. I understand that by Apple fanboy standards anything older than a year is ancient, but to me that's pretty recent hardware.
The only way I found to make this usable is to treat it as an SSH server I connect to remotely.
Its touchpad certainly wasn't anything special; it's probably worse than the non-Mac average. It only ever clicked correctly in some spots before quickly getting stuck for good.
This is a reasonable conclusion to draw after what most people experience on Windows or Linux; however, if you've ever used a MacBook touchpad for a few weeks, I think you might find it appealing:
1. it never clicks when it isn't supposed to
2. its acceleration curves are beautiful
3. it's always accurate when you intend to click on a particular pixel
4. there is no interaction between keyboard and touchpad (e.g. on Linux there is a "disable while typing" feature, which exists only because the touchpad experience is not perfect, so completely disabling it while typing beats leaving it active)
A good touchpad does not get in the way of using the keyboard reliably at all; this is a key part of why we should not accept bad touchpads or bad touchpad software.
I don't like pointing devices either, but if I have to choose between abandoning them and improving them until they become likable, I choose the latter.
They're not that bad if they have decent wrist detection. I'd rather use a proper keyboard and mouse of course, but my laptop is quite comfortable to use with a touch pad (on Linux).
Although I don't buy into the whole gesture stuff. I just want the basics.
https://neosmart.net/blog/2020/multi-touch-gestures-on-linux...
https://github.com/mqudsi/syngesture