I really miss a consistent user experience. A core idea that I as a user can rely on to predict how a new app will work. I was in awe when I discovered as a kid that user interfaces were a research area. Things like Fitts's Law and so on. It was not just opinion.
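For reference, Fitts's Law (in its common Shannon formulation) predicts the time to acquire a target as roughly MT = a + b * log2(1 + D/W), where D is the distance to the target, W is its width along the direction of motion, and a and b are empirically fitted constants. That is exactly the kind of measurable, testable claim I mean by "not just opinion".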
Today I get the feeling it's mostly just opinion. Either the designer's opinion or the wish to copy the look of something.
Whenever I see a hamburger menu I silently think "Here someone has given up".
And there are a lot of behaviors that are not functioning well.
Is something a button? Should I click it or double-click it? How about long-press on it? How can I know when there's no visual clues?
Things like "Hide cursor while typing" in Windows. It has not worked properly for decades and today only work in some super old apps like Notepad.
Another thing is type-ahead. I remember in classic MacOS, people pressed shortcuts and started to type the filename or whatever. It was all perfectly recorded and replayed. In modern Windows, press Win-key and start to type, oops, it missed the first keypresses, presented the completely wrong results and made a mess of your workflow.
I feel confused and disrespected as a user every day and I've been using WIMP graphical user interfaces since 1986. Sure, computers do more today, but there's less consideration of almost everything.
> In modern Windows, press Win-key and start to type, oops, it missed the first keypresses, presented the completely wrong results and made a mess of your workflow.
The worst offender is the login prompt. In Gnome, from the lock screen, I simply start typing my password. As soon as I type the first letter of the password, the password prompt appears, and when I'm done I hit enter and it unlocks.
On Windows, the first letter merely makes the prompt appear, but does not yet type into the prompt. In fact, it will only start feeding letters into the prompt once the little prompt-appearing-animation has finished. If I just start typing my password, it will eat the first three or four letters. So I hit enter, wait for the animation to finish, and only then start typing.
That's not the end of the world, honestly. But it is a learned gesture that the system taught me by failing to do what I'd asked it to do. It just goes to show the attention to detail that makes a UI feel fluid and frictionless.
> In Gnome, from the lock screen, I simply start typing my password. As soon as I type the first letter of the password, the password prompt appears, and when I'm done I hit enter and it unlocks.
Congratulations! You were able to recognize that it's a _login screen_ without seeing that there's a hidden password field.
The first time I booted to Gnome I waited damn near ten minutes for it to tell me it's ready for my login before I decided to start typing things to see if it was stuck.
Mimicking mobile OS lock screens is one of the weirdest damn things about modern desktop trends. It's so obviously a bad idea. I can't even figure out what useful functionality anyone thought that would provide. It's a fucking desktop, there's plenty of room to stick notifications or the weather or whatever on the same screen as the login prompt, if that's what you want.
I've said this on Reddit before, but the current Gnome is excellent for someone who loves it and only needs a few changes to make it work for them. Personally, I like how everything works (you could also say that this is some form of Stockholm Syndrome, ha ha). I like not having anything distracting from my active window.
The answer for most of these is "there's an extension for that"
> - Where can I go to see my current running applications?
I rarely have to do this (for me it's almost always IDE, Terminal, Browser), but "alt-tab then escape" is an option. For the window list at the bottom, there is the Window List Extension (https://extensions.gnome.org/extension/602/window-list/), part of the core Gnome desktop as well.
In short, most of your complaints would be solved by logging into Classic Mode, which tries to emulate the Gnome 2 experience using some built-in extensions.
> - Why does the top bar do nothing and function like my Android top bar?
I am not sure what this means, but I like the Shell's current behavior, where there are as few active elements as possible detracting from the active window.
I should preface this by saying that I love Gnome and am excited about its future. My criticisms come from a place of wanting to see it be the best experience possible on the desktop, and the obvious choice of DE.
> there's an extension for that
Yeah this is fair, sadly it's not a satisfying solution. Extensions are often not supported on the latest version of Gnome for some time and even then, there's an issue of discoverability for both experienced and new users.
I understand the desire for customizability, but I'd prefer my desktop environment come out of the box tailored to facilitate a desktop experience.
I am happy there is an alternative; however, this has the side effect of removing the "hit the super key, then type a search query" workflow. Ironically, an actual application menu is nearly entirely useless for me outside of niche cases. I have been using start menu search and Spotlight for years at this point - Gnome's activities overlay is just that with extra animations.
> I rarely have to do this (for me it's almost always IDE, Terminal, Browser), but "alt-tab then escape"
This is exactly how I use my desktop too.
In MacOS I used virtual desktops so heavily to organise this and it was an _incredible_ workflow.
Monitor 1, desktop 1 is the terminal, desktop 2 is the IDE. Monitor 2, desktop 1 is the debugging browser, desktop 2 is Slack, music and misc. Flicking between the desktops, everything is positioned exactly where I want it to be and there is no guessing about what is going to be shown.
I only started using Alt+Tab after coming back to Windows and now again on Gnome. Alt-Tab is a bit too guess-y for my taste. If you have multiple windows of the same type open (e.g. a browser and a separate browser debugger), what does Alt-Tab open?
In Gnome, rather than using virtual desktops, I hit the super key and pick the application out of the application mosaic because it's deterministic and faster than Alt-Tab
> I am not sure what this means
Swipe down from the top left and you have applications, swipe down from the top right and you have quick settings. Clock is in the center. The elements are responsive. It looks and functions like the top bar for a mobile device.
The top bar doesn't have application shortcuts, it doesn't show you active background applications.
If you look at the effort that all desktop experiences have made to make their top bar/desktop bar useful, you will see distinct differences when compared to mobile operating systems that have to be cognizant of their use of space.
MacOS has a global menu, taskbar tray, taskbar add-ons (for cpu usage, international clocks, etc) and applications are opened via spotlight (Albert) search.
Windows has shortcuts, a start menu with search, and a taskbar tray, along with lots of options to administer the computer when you right click the start bar.
Gnome offers a horizontally space-efficient, mobile-optimised top bar and a full-screen, mobile-style application picker. Application search and the open-apps mosaic are its saving graces.
>Why are there no icons in my tray for my background apps (steam, slack, etc)?
I understand the other ones but I don't understand this one. The Android tray does exactly this, background icons on the bar are a mobile feature. If you consider the Windows taskbar to be the ideal desktop experience, it doesn't do that, it hides the icons in a menu.
Not sure what you mean here - both MacOS and Windows have icons in the system tray for background applications and it's been that way since before Android was a thing.
Windows does hide excessive icons, which MacOS doesn't do - it's a great way to hide that the pre-installed Skype and Teams are both running at the same time :laugh:
Normally I'm a Mac/Linux user, but last time I tried Windows it hid all the icons by default. They used to show them in older versions but they do not anymore, probably because everyone including the OS vendor itself uses it as a dumping ground for product tie-ins and clutter like that.
These things made me move away from Gnome. In Gnome 3, occasionally the login screen would require me to click the screen and drag upwards to get the login prompt. Customizing any of these things was always a hacky workaround. I'm very happy with Mint Cinnamon right now.
One unfortunate reality is, for a lot of new users - especially people who haven't used desktop computers before - their first computer was a mobile device. If we don't want Apple and Google to take over the entire space, we need to meet them halfway.
The lock screen is weird, though, I agree. The newer (blurry background) one is better than the old one with the tedious swipe animation, but really could use a persistent "do this to unlock" prompt / button, because many people just aren't comfortable poking their computer unless they know what it is going to do. A lot of modern UI design unfortunately forgets that.
For what it's worth, the original design for GNOME's current lock screen involved bridging the gap between it and the login / switch user screen, which _would_ involve some more UI elements, but the actual implementation must have stalled somewhere.
Oh ok, so people can't tell the difference between a mobile phone and a computer. That is also why they don't know how to use a TV, because it's not like a smartphone, right?
That was a very bad idea from the very start. But it's the motto Gnome 3 was built on... I agree with the initial comment: Human-Computer Interaction SCIENCE went down the toilet drain to be replaced with so-called UI/UX "experts".
That is one of the main reasons why I switched from Xcode to CLion. Xcode lost keystrokes, which drove me crazy. The other thing was that I could reliably type faster than Xcode could display the letters, which made me feel like I was drunk.
CLion seems to be quite excellent at recording and replaying everything, including stuff I type while the auto-completion is loading. Plus it has a really fast key to screen loop.
As an aside, I find it amusing that more latency is introduced by a USB keyboard than existed in an entire 80's computer (from keypress to rendering on screen). See also: Carmack's rant about how it takes longer to put a pixel on the screen than ping across the Atlantic.
About the USB keyboard, that may be true only for the cheapest keyboard you could find, but basically not applicable anymore nowadays. Rather than giving you the details, let me link to this video, which is awesome at explaining it and much more -- the USB vs PS/2 part starts here: https://www.youtube.com/watch?v=wdgULBpRoXk&t=1766s
>at the time I did these measurements, my 4.2 GHz kaby lake had the fastest single-threaded performance of any machine you could buy but had worse latency than a quick machine from the 70s (roughly 6x worse than an Apple 2), which seems a bit curious.
>We can see that, even with the limited set of keyboards tested, there can be as much as a 45ms difference in latency between keyboards. Moreover, a modern computer with one of the slower keyboards attached can’t possibly be as responsive as a quick machine from the 70s or 80s because the keyboard alone is slower than the entire response pipeline of some older computers.
Most normal keyboards, mice, and USB HIDs report events at a rate of 125 Hz == 8 ms between reports.
Gaming mice usually go up to 1000 Hz / 1ms, the wireless ones usually let you configure down to 500/250/125 Hz if you want a bit more battery life. I'm sure that gaming keyboards also have a high refresh rate.
If you have a 144 Hz monitor that actually means your (normal) keyboard and mice are reporting events less often than your display is updating.
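To put numbers on that (just back-of-the-envelope arithmetic, nothing measured here):

    # 125 Hz polling means a new HID report only every 8 ms, which is already
    # longer than one frame on a 144 Hz display (~6.94 ms).
    report_interval_ms = 1000 / 125   # 8.0 ms between reports
    frame_time_ms = 1000 / 144        # ~6.94 ms per display frame
    print(report_interval_ms, frame_time_ms, report_interval_ms > frame_time_ms)

So before the OS, the application, or the compositor add anything, the freshest input sample at the start of a frame is already up to 8 ms (about 4 ms on average) old.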
A lot of gaming hardware is trash: 1000 Hz on the package, endless latency inside, buggy firmware, even buggier resource-hogging companion apps that are always Windows-only, and there are still "gaming keyboards" that are 2KRO, built from bad materials, and so on.
That being said pretty much any 1000 Hz mouse is better than any 125 Hz mouse when using high refresh rates because the discrepancy between the 125 Hz mouse poll rate and the 120 or 144 Hz display causes very noticeable jitter. If you have a mouse where the poll rate can be adjusted, this can be easily A/B tested; it's quite visible in high-framerate recordings as well (this can be done in OBS by using the fractional frame rate selector and just inputting 144:1).
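A quick way to see why without any hardware is to simply count reports per frame; a rough sketch (my own illustration, assuming ideally spaced reports and frames):

    # Count how many input reports land inside each display frame over one
    # second, for a given poll rate and refresh rate.
    def samples_per_frame(poll_hz, refresh_hz=144):
        counts = []
        for n in range(refresh_hz):  # one second worth of frames
            start, end = n / refresh_hz, (n + 1) / refresh_hz
            counts.append(sum(1 for k in range(2 * poll_hz)
                              if start <= k / poll_hz < end))
        return counts

    print(sorted(set(samples_per_frame(125))))   # [0, 1]  -> some frames get no new position
    print(sorted(set(samples_per_frame(1000))))  # [6, 7]  -> every frame gets fresh data

At 125 Hz roughly every seventh or eighth frame has no new mouse position at all, so the cursor stands still for that frame and then jumps twice as far on the next one - exactly the jitter described above.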
All of the Jetbrains editors also have these bugs in some places too unfortunately. The main one I run into is when you open the fuzzy goto modal (CMD + O) there’s a delay before the window opens where if you paste/type anything it goes into the file you’re editing instead of the search box. It’s been like this for at least 5 years.
Win 10 taught me to type at least Ctrl+Alt again if I want my password to be accepted on the first try. Clicking with the mouse before entering the password sometimes works too. It seems that at MS, and also in other areas (Android, Gnome), people just want to code new things, not debug or fix bugs. The new things also must be as simple as possible because: 1. code reuse sucks, and 2. thinking is hard.
When I log into a Win 10 virtual desktop with 1024*768, the login form with 2 text fields and a button gets shoved into a box and I have to scroll down to see it fully.
It's certainly not the end of the world, but it's death by a thousand papercuts. I find macOS has fewer sharp edges (but still has plenty of them), and Linux too, once the system is configured and only used for writing code.
> That's not the end of the world, honestly. But it is a learned gesture that the system taught me by failing to do what I'd asked it to do.
Although I agree this is annoying, to be completely fair, it doesn't suggest that you should start typing your password immediately.
You could also start typing 'news.ycombinator.com' and it wouldn't browse to HN, but you wouldn't say that it failed to do what you asked it to.
Arguably, you have been 'taught' or you have inferred yourself that you should expect a sleeping system to immediately accept your keypresses as a password, which although reasonable is not necessarily a good practice.
For example - I have two computers connected to the same keyboard, one is indeed Gnome, the other Windows. I have twice now typed my password into what I thought was my sleeping Gnome system, hit enter, only to have to apologise to someone on Teams because I accidentally sent them my password because my keyboard was sending my keypresses to the Windows PC.
I... am sorry, but I think that is extremely bad practice. You should not put yourself in that position, and your experience in that position does not excuse the design flaws in Windows.
> You could also start typing 'news.ycombinator.com' and it wouldn't browse to HN, but you wouldn't say that it failed to do what you asked it to.
Well firstly, that isn't what the prompt is for, so I'm struggling to see the relevance. However, it is a good example for me: when I type C-l right now, I can _immediately_ start typing 'news.ycombinator.com'. There is no delay, no animation, nothing. I didn't go through a learning experience of finding out how long I should wait before typing - it's just ready.
Fundamentally: _why_ should I have been "taught" anything about how long I should wait? I don't have to do that on my ubuntu laptop. I don't have to do that on firefox in the example you handed to me. It's bad when I have to learn that. The computer should never make me wait unnecessarily, and I find it worrying that people are so readily accepting of it.
GP's (I think reasonable) point was that typing A on the keyboard of a locked (or worse, sleeping) computer has no reason to be interpreted as "input the letter A into the password field that will be displayed on screen", which is what the original post is assuming it should mean. There's no real reason it should even mean "input the letter A into the first field that comes into focus", which may anyway not be the password (it could be the Username field in certain setups).
Well, you usually see the logged in user’s name and icon on the screen, and presumably you left your own computer there.
And frankly, UX is about finding the little practices that make the experience better: arbitrarily waiting for Windows to bring up a prompt is not doing me any good, since I know what I want to do. So in my opinion, in this very specific case, Windows is simply doing the wrong thing: making the common case slow.
> Well, you usually see the logged in user’s name and icon on the screen
I don't have this experience. On Windows (10, at least), normally I see either a black screen (if it's asleep) or a wallpaper, clock, and a message saying 'Press Ctrl + Alt + Delete to unlock.'
> arbitrarily waiting for windows to bring up a prompt
I don't think it's arbitrary. It's doing something - it's calling Winlogon, spinning up disks and reading hibernation data into memory, restarting power to powered down components, probing authentication methods available (eg: fingerprint/card readers), in case of domain joined computers, validating whether the domain is available (which means firing up network interfaces), and if so, whether the user's password has changed, etc.
There are definitely downsides to Microsoft's approach here - as you mention, in a large number of cases, it takes more time to unlock the computer.
However, there are also upsides if you accept that UX is only one consideration when designing a secure log on prompt, and there may be other priorities.
> I... am sorry, but I think that is extremely bad practice.
Completely correct - I'm not excusing my lack of attention there.
> Well firstly, that isn't what the prompt is for
That's exactly my point. There is no prompt displayed on the screen, you have not been asked for input. You are pre-empting the next prompt before the system is ready for it.
> When I type C-l right now, I can _immediately_ start typing 'news.ycombinator.com'. There is no delay, no animation, nothing. I didn't go through a learning experience of finding out how long I should wait before typing- it's just ready.
Agreed - but that's with the application already open. From the lock screen, it won't do what you intend. With Firefox running and in focus, Ctrl + L tells the application to focus the address bar, but it only works in that context, you first have to prepare the correct context (computer unlocked, Firefox running, in the foreground and focused).
On Windows, Ctrl + Alt + Del tells the system to wake, not 'the next characters entered will necessarily be the password of the most recently logged in user'.
In fact, I tested a little bit, and often Ctrl + Alt + Del on a sleeping system actually asks me to enter a username first, not a password, although in some cases the username is pre-filled for me, and the password prompt will be focused. I cannot necessarily know a priori which I will be asked for until the system wakes and decides.
As soon as Windows displays the prompt, you can type into it (same as Gnome). The only difference is that Windows requires you to first wake and then authenticate once the system is woken, while Gnome allows you to do both simultaneously.
There is one particular 'pro' to the Windows approach of requiring Ctrl+Alt+Del which is that it's an interrupt sequence. If a malicious actor created a full screen application that mimics the Windows password prompt, it would not be effective because the Ctrl+Alt+Del sequence cannot be handled by an application. The same is not true for Gnome - if a malicious actor created a full screen application that mimics the login prompt on Gnome, you could be tricked into entering your password.
> Arguably, you have been 'taught' or you have inferred yourself that you should expect a sleeping system to immediately accept your keypresses as a password, which although reasonable is not necessarily a good practice.
I fail to see why that wouldn't be a "good practice". Last weekend I read some articles on Canon Cat[1], and Jef Raskin referred[2] to the feature as something desirable:
> In many ways it was, for 1987, far ahead. [...] instant on with any keystroke (and you didn't even lose the keystroke)
He goes on to explain why it's a good idea to do so in the article, it's a good read.
Thanks for sharing. I can definitely see why it might be appealing. However, using Ctrl + Alt + Del guarantees that only Winlogon can handle the logon, no other process can be simulating the login prompt.
My dad told me about a program he wrote for a PDP-11 which would just simulate a login prompt. When a username and password were entered, it would append it to a file somewhere, then print the 'Incorrect username or password' message, and exit silently.
The user would assume they'd fat-fingered the password, and try to log in again (successfully) and be none the wiser that their password had just been stolen.
The Windows dependency on Ctrl+Alt+Del makes this type of attack impossible, while systems that don't require a Secure Attention Sequence (as apparently it is called) to log in are susceptible to a slightly more sophisticated version of this attack.
I see. It makes sense, and it's a valid concern that I didn't think of. Personally, I prefer the convenience of continuing where I left off with as little friction as possible, so I don't even lock the session when the screensaver kicks in. But I never take my laptop outside, and the threat of malware is considerably lower under Linux. If that wasn't true, I would probably also want to err on the side of caution. Choosing a specific key (I have a very conveniently positioned X86WakeUp physical button) or combination for waking up and getting to login prompt seems like a good idea in that case.
> And there are a lot of behaviors that are not functioning well.
The worst of which are dropdowns, lists or menus that insert new items right over where your mouse is, so that you accidentally click on the wrong thing.
All browsers do this for the dropdown that appears when typing in the URL bar. I type a few letters, see the site I want is 3 down in the dropdown, hit the down arrow three times and press enter, only to find that the item changed as I was hitting the down arrow.
And don't get me started on user elements that have a new position in the menu system for every release, if they even have a decent menu system.
That is the worst thing. Updating a UI while the user is interacting with it.
Everything that reaches the screen should remain FIXED in place unless moved //by the user//.
This might happen, e.g., because something was loading and now triggers a resize event, so some list suddenly changes the position of everything, rather than requiring the user to click something to resize the misbehaving entry. (Scrolling could also do this, if it's based on a distinct UI element that is not 'activate the list'.)
I think of this frequently when I interact with bad offenders, like Waze on my phone. I have like 2 addresses starred in addition to the home and work slots. It should PIN those, in a fixed (probably alphabetical) order under the home and work slots. If it's not going to do that for starred items (maybe I have 300 addresses?) it should give me a different widget to pin them.
It's less egregious, but remember that clicking away from the list (e.g. to close it) is also often an input option. Even though that's a lower input priority it could still disrespect the user's desires if they were trying to leave the list.
You are right. Then perhaps leave a placeholder area which is already big enough to accommodate some additional content? E.g. showing 3 actual lines plus a placeholder for an additional 10, which might remain empty (but shows a loading indicator beforehand)?
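Something like that could look like the following rough sketch (my own illustration of the idea, in toolkit-agnostic Python rather than any real UI framework): results only ever fill pre-reserved slots, and nothing is inserted while the user is actively navigating.

    class ResultList:
        """Fixed-geometry result list: slots are reserved up front, items only
        fill still-empty slots, and updates are held back while the user is
        navigating, so nothing ever moves under the cursor."""

        def __init__(self, slots=13):          # e.g. 3 real results + 10 placeholders
            self.slots = [None] * slots        # placeholders; their positions never change
            self.navigating = False            # True while the user is arrowing/hovering
            self.pending = []

        def add_results(self, items):
            if self.navigating:                # never mutate the list mid-interaction
                self.pending.extend(items)
                return
            for item in items:
                if None not in self.slots:     # out of slots: drop instead of reflowing
                    break
                self.slots[self.slots.index(None)] = item

        def set_navigating(self, active):
            self.navigating = active
            if not active and self.pending:    # flush queued results once the user stops
                queued, self.pending = self.pending, []
                self.add_results(queued)

The loading indicator would simply live in the still-empty slots, so late results can appear without moving anything the user is already aiming at.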
I estimate I have about 100 daily counts of frustration because of elements moving while I'm interacting with the UI. It drives me crazy, and I assume it drives younger people even crazier because they have quicker reaction times.
It's a disgrace that modern UIs haven't dealt with this.
I had high hopes for Windows Phone because it was the only mobile OS that had a realtime UI. I don't care how long an operation takes. Just give me immediate feedback! And respond to my input as if it was mission critical. Why is it so hard?
Related: If I type 'ha' for "Hacker News" in Safari's address bar on my iPhone, by the second letter it'll show "Hacker News". But if I'm too quick to hit "Go" on the keyboard, Safari instead searches DuckDuckGo for "ha".
The UI must update sooner than some underlying state; this bites me multiple times a day.
I was just thinking about how windows xp and old linux DEs (Gnome 2, KDE3) somehow felt snappier and now I'm wondering if the rise of multicore/multiprocessing is to blame. When the windowing stack is running in the same physical core as the kernel maybe it's harder for these things to become disconnected.
Mobile Safari didn't use to do that. I wanna say they introduced that bug a year or two ago. It annoys me constantly and it's worrisome that they haven't fixed it yet.
Oh, don't get me started... MS Teams has been doing this recently at work. Everyone is pinning messages, because when you hover over the Edit menu item for a message and go to click it, two more menu items load asynchronously below it, and switch the Edit button out for a Pin Message button right when you go to click it. And it's not even consistently like that, just sometimes. It's infuriating...
Personally, what I hate the most is the heart icon suddenly appearing exactly where I wanted to click, resulting in me adding heart reactions to completely nonsensical messages.
The Teams screen-share “menu” is stubbornly stuck on strategic real estate with no way to hide or minimize it. Consequently, it usually hides the browser tab I’m on.
But the best part is throwing in some RDP. The Teams screen-share rectangle is carefully designed to cover the entire Remote Desktop title bar, which is what you need to resize it.
I recently had to use the Outlook web client to send an email and the experience was nothing short of horrendous. The autocomplete feature kept eating keypresses unless I typed exactly what it was expecting, or accepted what it wanted me to say and pressed tab. Of course, I can type faster than it can predict, so I'd be done with a word before the prediction would show up, and the damn thing would eat the space and the first two or three letters of the next word.
The app search in both iOS and Android does this to this day. Every time I launch an app I didn’t intend to launch because it popped into the results suddenly, I nearly throw my phone. It’s literally rage-inducing.
How hard is it to just grep the entire list of local apps on the device before showing me the results? Why on earth does it even take a notable amount of time?
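For scale, a rough back-of-the-envelope sketch (made-up names and my own numbers, but representative):

    import time

    # A few thousand fake app names; real devices have far fewer installed apps.
    names = [f"Application Number {i}" for i in range(5000)]

    start = time.perf_counter()
    hits = [n for n in names if "42" in n.lower()]   # naive substring match
    elapsed_ms = (time.perf_counter() - start) * 1000

    print(len(hits), f"{elapsed_ms:.3f} ms")   # comfortably under a millisecond on a laptop

Even allowing a phone-class CPU an order of magnitude, that's still far below anything a human could notice. Whatever the delay is, it isn't the matching itself; it's more likely async indexing, web suggestions, or animation.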
I don’t say this lightly when I say someone should be fired. Everyone at every level who let this exist should be blackballed by the entire industry. Managers, QA. Someone should have spoken up. It’s an absolute moral failure.
> I feel confused and disrespected as a user every day
I definitely agree with this, and I like your use of the word disrespect. I think the younger generation has encroached onto the sanctity of "the platform", if you will, and now us old-timers are suffering. I've been timing my actions every day since I started noticing these despicable UI trends, and on average I waste about 2-3 hours a week trying to discover which behaviors work in flat UI applications.
But, take a natively designed application from Windows 2000, and it's extremely easy to understand and use! And, dare I say it, more pleasing to the eye than flashy, animated graphics of today.
> But, take a natively designed application from Windows 2000, and it's extremely easy to understand and use! And, dare I say it, more pleasing to the eye than flashy, animated graphics of today.
Coming from that era, I strongly disagree. While I agree with the article that some elements are harder to spot as interactable, I would still say that today's UIs are way more coherent than programs from the W2K era, and that one just got used to the broken design of the UIs of that era/platform.
While I usually understood the UI even in W2K, I don't think that the dumbing down of UIs (and the reduced palette/layering adds to that) for dumb users is necessarily a bad thing. At the end of the day I am also a dumb user.
Having used GTK3 for a while now and moving to GTK4, however, I might also be tricked into not noticing a downgrade in discoverability. In that particular instance I might be fooled by being used to a certain design language, even if they simplify it to the point of actually decreasing usability for users who are unfamiliar with GTK, compared to GTK3.
At least in the Windows 2000 era every app was using the same UI toolkit. That helped a lot.
What made some stuff hard was that there were not many pixels available for icons, due to the lower resolutions in those days, which made icons hard to recognise.
> I don't think that the dumbing down of UI (and reducing palette/layers adds to that) for the dumb users is necessarily a bad thing. At the end of the day I am also a dumb user.
I think UIs should be dumbed down for dumb people. This is a good thing.
What I have a problem with is that UIs are forcing power users into only being able to experience software as if we were dumb users.
I'm deeply concerned about the future of software.
> While I agree with the article, that some elements are harder to spot as interactable, I would still say that they are way more coherent than programs from W2K era ...
Maybe that's the thing. More coherence might be nicer, yes. But it might also not make UIs easier to use.
Maybe squeezing apps into a tight framework of UI coherence makes the overall appearance of what's on the screen more appealing, but at the same time loses usability.
Think of special applications like technical ones (Blender) or office (Thunderbird, LibreOffice) and also simple ones like a notepad application. Now try to find a common set of UI elements to use for all of them.
What you'll probably get is an OK notepad but a disturbingly bloated Blender.
TLDR
Niceness does IMO conflict with usability and the former sure shouldn't be prioritized over the latter.
> Maybe that's the thing. More coherence might be nicer, yes. But it might also not make UIs easier to use. Maybe squeezing apps into a tight framework of UI coherence makes the overall appearance of what's on the screen more appealing, but at the same time loses usability.
It's a balancing act and you can screw up in either direction. What's clear is that limiting options and limiting depth is, at a certain point, really better from a UX standpoint - and I'd say old Windows toolkits are an example. It doesn't even mean anything is missing, just that it is structured differently.
> Think of special applications like technical ones (Blender) or office (Thunderbird, LibreOffice) and also simple ones like a notepad application. Now try to find a common set of UI elements to use for all of them. What you'll probably get is an OK notepad but a disturbingly bloated Blender.
Neither of these apps would have a problem with a GUI framework/toolkit per se. It's more an implementation detail of the specific frameworks/toolkits and the apps in question (also consider the time when they were founded).
I think that’s the point. When every app uses a different set of UI primitives, there’s no consistent UI to become accustomed to. Instead you have to become accustomed to each individual app.
I feel the same way. It's like we're back to the MS-DOS era where every application had its own interface and you had to learn each application's way of performing tasks. macOS and to a lesser extent Windows prided themselves on consistency across applications. But this required developers to voluntarily conform to those platforms' guidelines, and in the case of Windows, the goal of consistency was challenged by (1) Windows' backwards compatibility and (2) Microsoft's own disregard for consistency at times, such as Microsoft Office using its own UI toolkit instead of relying on the UI elements of the version of Windows Office is running on (for example, Office 97 introduced flat toolbars, a different style of menu bar, and the Tahoma font, which deviated from Windows 95/NT 4 and its button-style toolbars and its use of MS Sans Serif; this theme even carried over to Windows NT 3.51, where Office's UI was out of place; see http://toastytech.com/guis/nt351word.png for a screenshot). Contrast that with the Web, where there are no common UI/UX guidelines. Sadly this philosophy has spread to the desktop, where increasingly each application seems to have its own UI/UX without regard for the platform's guidelines.
There is one good thing I could think of about the loss of consistency across applications: the underlying operating system matters less when the application works the same across platforms. Ironically this may help with the adoption of desktop Linux; Chrome, Slack, Zoom, and VSCode generally work the same. To paraphrase, this fulfills Netscape's vision in the mid-1990's of reducing the operating system to a bunch of device drivers.
One would think that Microsoft and Apple don't want Windows and macOS to be reduced to a bunch of device drivers. Then again, perhaps Microsoft's and Apple's business models don't require the long-time maintenance of these desktop-oriented operating systems. Microsoft makes a lot of money from Office and Azure, and Apple makes a lot more money from the iOS platform than from the Mac.
Still, I personally lament the rise and triumph of the siloed app, and the decline of platforms that promoted UI/UX consistency through a set of standard human interface guidelines, and I feel personal computing is generally getting worse instead of better.
I've never seen any consistency in MS-DOS, Win9x and WinNT. The worst offender is Microsoft itself, with the file browser, settings, and office suites. They don't maintain and evolve their toolkit; they just add new ones and recommend using those instead:
Windows: Win32, Windows Forms, WPF, MAUI, UWP, WTL, WinUI, MFC and probably more
Gtk: Gtk+, Gtk2+, Gtk3+, Gtk4
Qt: Qt1, Qt2, Qt3, Qt4, Qt5, Qt6
Gtk and Qt didn't just maintain their toolkits but provided major upgrades and changes. People complained that Gtk changed stuff, which is unfair. The various feature removals in GNOME 3 after the first release caused bigger problems. I'm rather sure the changes between Qt major releases also require work from developers. Custom theming is an issue with Gtk, but also something I cannot recommend as a developer and user: it is complicated on the toolkit side and faulty on the user side. Qt seems to do better in this regard, but the user-side problems remain. Apple just says no to theming at all. I've also never felt the desire to theme Gtk, because it has looked good by default since Gtk3+, and some of that was even backported to Gtk2+. There is, of course, Java and Swing, yes, but I'm afraid the first error there was including a toolkit in a language's standard library.
With Gtk and its HIG, most stuff looks decent and usable. Windows? Nobody cares about the HIG. Microsoft provides more toolkits than I can keep track of, and a lot of developers just do whatever they want on Windows. Or worse, they use Electron. Microsoft Teams is the worst "application" in this regard.
As for the font rendering issue: I recommend ignoring the blaming (from users) and the ignoring (from developers) on both sides and instead reading the details involved. It looks like the new rendering was mostly intended for, and developed on, HiDPI displays, and they need people experienced in font drawing matters. Some fixes are landing already. Taking into account how much effort was put into HarfBuzz, FreeType, Pango and Cairo, I think they can only learn from this: be more careful and keep backward-compatible solutions alongside until the new stuff works fine for everyone.
The big difference is that with any of the Windows toolkits, an application written in their heyday will keep working today; good luck doing the same with any Linux toolkit.
Also, only Qt is comparable to the full-stack experience offered by Windows frameworks (and MacOS/Android/iOS/...), with the caveat that it is cross-platform anyway and not Linux-specific.
Internal political fights between DevDiv and WinDev.
If you are aware that DevDiv controls managed languages and WinDev controls Windows/C++, it becomes quite clear why there is so much back and forth between those UI frameworks, given who happens to be on top at a specific time.
And now there is Azure + WebUIs added on top of those resource fights, oh well.
The whole point of the settings app shown in the OP's screenshots is to quickly find the relevant section in the sidebar. But even before the great design madness of 2012, designers had started to remove icon colors everywhere. (Except for app icons, where Google and Apple have now removed shapes instead, sigh.)
Part of the discussion is that user interface is now a conduit for branding, probably the major conduit for software focused companies. You can't have consistent UI branding across devices unless you replace the UI to suit your branding. So rather than user interface being refined over the years into a consistent usable tool for each platform, by people specifically interested in UI and the paradigms of that platform, it's re-implemented year-over-year, with UX playing second fiddle to branding and design.
> In modern Windows, press Win-key and start to type, oops, it missed the first keypresses, presented the completely wrong results and made a mess of your workflow.
On Win10 I'm getting regular delays in keypresses and just enough lag when typing to be noticeable.
I've been trying to plug in two USB keyboards as a poor man's split keyboard, and the amount of software (looking at you, IntelliJ, in particular) that does something that makes the characters arrive out of order when typing is hugely frustrating.
IntelliJ is really bad about this sometimes. I get that their software does an incredible amount of really tough stuff behind the scenes (which is why I won't be switching any time soon), but God I wish they prioritized input responsiveness over everything else.
Haha, I love the “My IDE is so sluggish, it must be doing important stuff”.
It really isn’t. IntelliJ is just horribly laggy. VSCode, an Electron (!) app with 100 plugins, isn’t even that slow.
Yes. Even if it were doing important stuff, this "important stuff" shouldn't be affecting typing. We have multicore machines now, there's no excuse anymore.
Imagine if it were the opposite, and moving the mouse or speaking into a microphone caused the compilation to fail in subtle ways. Well, background compilation/typechecking/whatever is messing with user interaction.
I'm not naive, I've actually built IntelliJ plugins. I know engineers that work on the product, and I have an extremely good idea what it's doing under the hood. A lot of it is important to me and doesn't exist in VSCode, though that's also a great product for some languages.
There were big improvements over time. IntelliJ performance seems to vary a lot between people. For some (like me) it's great. Others complain of huge delays. Probably, the differences are OS related and/or project size or plugins related. There are so many different configs possible for an app like that.
Jetbrains really should just update the JDK they use.
With a slightly larger max memory setting than the default and at least the G1 garbage collector (ZGC for even better results) they would get a buttery smooth editor, but it is configured quite badly out of the box.
Though sometimes they just have threading issues, which won’t be solved by the above.
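For anyone who wants to try this, the kind of override being suggested goes into the custom VM options file (Help > Edit Custom VM Options in the IDE). The exact heap size below is a guess to illustrate the idea, and on the JDK 11-based runtime ZGC still has to be unlocked as an experimental option:

    -Xmx4096m
    -XX:+UnlockExperimentalVMOptions
    -XX:+UseZGC

On a newer runtime (JDK 15+) the unlock flag is unnecessary, and -XX:+UseG1GC is the more conservative choice if ZGC misbehaves.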
1000% agree. I am disappointed [1] with JetBrains' attitude to this problem. They seem to judge anyone who isn't using the JDK11-based JBR in 2020 as a difficult customer, and do not acknowledge latency/lagging problems on Linux. Mac users then reply to threads like this saying "works on my computer flawlessly, I don't know what you're talking about", until they confess to being Mac users.
I have tried JAOTC up until JDK16 with some IntelliJ products, but I think you are right in saying that the single most effective knob (or two) to twiddle in the default installation is to enable ZGC and increase the heap size.
Yes, I know JBR provide DCEVM builds (which need to be enabled by downloading the JDK and changing the "boot java" setting, ironically)
Yeah, vm options tweaks still don't cut it for me on my projects, and I've optimized pretty hard. Threading is the killer, esp. when connecting to other processes it can get all tangled up.
When I was in school we had 286 computers with windows on them. Some of them had input lag of about 7 seconds in the text editor we were using. Think it was Word but not sure any more. But at least they didn't miss keyboard input, every key stroke came out, just with more or less delay. I did most of my text editing at home on my Atari where there was no delay at all.
Even Apple’s bootloader on Macs is dropping keypresses now!
Starting some time in 2020, the FileVault password dialog will drop characters whenever I’m typing quickly. And it never dropped characters before 2020 – even though I’ve had FileVault 2 on for many years.
And that’s at bootloader runtime, with not a single bit of the macOS kernel loaded yet!
No, forward. Because the delay is probably due to all of the telemetry and tracking that has to be activated and synced with the servers before it can respond to your request. All that amazing technology, unthinkable 20 years ago.
> In modern Windows, press Win-key and start to type, oops, it missed the first keypresses, presented the completely wrong results and made a mess of your workflow.
Every time I bump into these interaction issues, my stomach churns a little bit.
Right now, Windows 11 is especially bad because they have new and old code working side by side. It'll take a few years for Microsoft to rewrite enough of the Shell so everything will be consistent again.
Goodness, the start bar on Windows. For something that is meant to be a key part of both the mouse and the keyboard workflow, how can it be so slow? My work computers can hang on it for so long that it is quicker to find the application shortcut on the desktop.
Because it is trying to be all things to all people: including supporting sponsored content and integrating with any fancy new/copied idea that the company is pushing (which may be quietly deprecated in the next major revision, but with the hooks left in, adding abstraction load to everything else, so that the few who actually found it useful don't complain).
Computers are ludicrously fast nowadays. Games are capable of doing a whole lot of shit in <16ms consistently while responding to user input, so I don't buy this "it's doing a lot!" argument, it is coded badly.
They've sort of already tried that, with the animated smart tiles that were used by apps on Windows Phone and Win8. Not something I've seen recently; not sure if that is due to it being deprecated or if people just aren't using it on the desktop, as the menu is never visible for any considerable length of time.
I believe it's not really an integrated part of the Explorer shell anymore, but rather a separate process that gets launched every time you open the start menu.
So, when you hit a key with the start menu focused, it closes the start menu and opens the search window (with the search text initialised with that key).
I still remember the good old days when keyboards had function keys and apps would prominently display an F2 next to the save button, an F10 next to quit. Even for computer-illiterate people, that made using keyboard shortcuts a bliss. Save was ALWAYS on F2 no matter what you did, so people would put physical stickers on their keyboard to make the F2 button say "Save" on the keycap.
What system was this? I've used every version of Windows (literally), DOS since the 80s, Macs since the 90s, Linux since the 90s, Atari ST, and several different 8-bit micros and I don't recall ever using F2 to save. I do believe your anecdote but it's completely escaped me what platform might have used F2.
Also worth noting that keyboards do still have function keys. In fact I use F2 as my `tmux` control prefix. :)
Borland IDEs, Norton Commander (and its alternatives) built-in text editor, and programs that mimicked those green-grayish or blue-grayish text mode windows. Or those simply made with Turbo Vision.
I do remember when ctrl+shift+insert was common for paste (rather than ctrl+v). I can't remember offhand what copy was, beyond it being a similar key combination. It still trips me up these days using cmd instead of ctrl on Macs.
ctrl+insert for copy and shift+insert for paste are still a thing in the majority of places in Windows, though I think only the ancients like me use them (people newer to the game being taught ctrl+C & ctrl+V from the start).
Not long ago I even spotted it being usable in an app that blocked ctrl+C/ctrl+V.
This reminds me of one of my personal bugaboos, which seems especially bad in Microsoft Windows: windows that spawn child windows that require input focus, but that don’t grab focus leaving you typing into nothing at all!
IMHO, a worse practice is when you are happily typing along and the system throws up a dialog randomly, grabbing the focus and thereby "pressing" one of the buttons when you thought you were typing. Who knows what you just agreed to.
Generally if you were interacting with the computer then whatever new thing should get focus iff its appearance was a direct result of your current interaction.
> Is something a button? Should I click it or double-click it? How about long-press on it? How can I know when there's no visual clues?
Since Windows 7 (or 8, I can't remember) and the "Metro" style apps, in the Control Panel in particular, one of the controls (IIRC it was something like 'show/hide icons in the task tray') that was previously a button became a "hyperlink", essentially clickable text. Clickable text that was the same colour as unclickable text. What. The. **
I would love it if we could all just decide on a single WIMP interface and stick to that.
I understand that this would annoy designers. But they're annoying me so I think this is fair.
My go-to example at the moment is Figma, which is a design tool that I find almost impossible to use because it has so many hidden, unexplained "features" in its UI. If I accidentally brush the keyboard I have to quit the application and restart because I have no idea how to get back to the state I was in.
When swiping between fullscreen windows, macOS sends the keystrokes to the last window until the animation has completed. The ”g” of all my git commands ends up in VS Code if I’m too quick. Very annoying.
It's a dumpster of features; you have no idea what you'll find in there.
Has someone hidden the zoom buttons in there? How about print? Perhaps that's also where save is, or tab colour? Most things in the hamburger menu are completely unrelated to each other but end up there because context has been removed. Removing button outlines, so you have no idea what you can and cannot click on (which is mentioned in the fine article), is another of these modern design patterns. Looks over functionality.
Yep. It's the miscellaneous everything bucket. It also seems to only be motivated by either scarce screen real estate (it first showed up in mobile interfaces), or graphic designers who think everything is "clutter" and by sweeping it away into the hamburger menu they can have their beautiful screenshot-worthy interface. I have a 27" monitor, but tons of websites and even Firefox itself hide half the functionality behind a hamburger.
Originally, the top menu was there to provide a way to access all functions. Many still do (say, LibreOffice).
The hamburger is kind of a weird breed of design signaling because of it. In some desktop apps it still is the original top menu, but not always. On websites and apps it can vary tremendously.
However, a top menu is not an expert level mode, it’s a beginners / proficient level mode. Expert level means using keyboard shortcuts.
Context-driven display and modes are far more valuable when meeting spatial constraints. But that can be hard when you are dealing with many form factors, and so the quick fix is the hamburger.
This is true when you use only an editor and a debugger. When you need to use 10 programs, each with its own keyboard shortcuts (which, BTW, you need to search the internet for, because the company making the software did not bother to make them accessible in a help menu), you're gonna have a hard time.
In the past (1985), you had a holder attached to your keyboard for such things, with a listing of those keyboard shortcuts. Now the world has evolved and we have... tada... the hamburger menu.
P.S. I would not feel good as a developer knowing that someone, somewhere, constantly curses me and my family for the work I've done.
> which, BTW, you need to search the internet for, because the company making the software did not bother to make them accessible in a help menu
At least on macOS and Windows the shortcuts are shown in the (traditional) menu bar. With GNOME's use of hamburgers, it's anyone's guess. As someone who likes shortcuts, I found it very hard to figure out shortcuts in GNOME apps that have a lot of functionality hidden away. (Aside from keyboard shortcuts not being very consistent between apps on Linux.)
In Excel 365, pressing Alt shows the shortcut keys in the ribbon (if enabled in options). The issue is that the ribbon now looks like Google Maps. In Edge, pressing Alt sets the focus to the "...". What to do next is left as an exercise for the reader. In Teams, pressing Alt does nothing, and the Help button has only "Themes", "News" and "Training".
In Firefox, on Linux at least, when you press Alt you get a bar with all the standard File/Edit/View/History/Bookmarks/Tools/Help menus. In each menu and then in each item there's one letter underlined and it works as it should, so a sequence of Alt, h, a, will give you an About dialog (for example). The fact that the menu bar is not displayed by default doesn't bother me, because I don't use mouse to navigate the menus, and from keyboard I'd need the Alt key press anyway.
Slightly related: somehow, FF on Linux allows you to rearrange tabs from keyboard. Ctrl+Shift+PgUp/PgDown move the current tab left or right. As far as I can tell, this feature is only available on Linux. I have no idea why. It's much more convenient than dragging the tabs with mouse, because you don't have to worry about accidentally detaching the tab into its own window.
> It's a dumpster of features; you have no idea what you'll find in there.
I feel the same way about notifications. Both native notifications and notifications via e-mail.
It is no longer easy to find out if something changed or happened in most websites/applications. You must be bombarded with notifications all the time, and god forbid you take an extended break or something goes into your spam folder, because if you snooze you lose.
One of the first things I do with a fresh work inbox is set rules to push things into non-default folders. And then, I proceed to never look at them unless I'm digging for something.
61 unread notifications from Az Devops? Nah, I don't need to read those. I was watching that build in the background, I know it failed.
Mobile notifications should offer more filtering options at the system level. Why do I need to trust an app developer to respect my notification preferences? Why should I rely on the developer to build granularity into their push notification frequency?
It all gets simpler when you can filter at the 'inbox'.
So on desktop applications, I assume you prefer all your functions to be on toolbars, and you have similar disdain whenever you encounter a File menu? I guess if there are a lot of functions, you probably need some tabs on your toolbar - so the Office Ribbon interface is ideal, I assume?
The difference between a file menu and a hamburger icon is that the file menu hides file operations while the hamburger icon hides… anything. something. dead bodies.
Almost every desktop application has a File menu whether or not it deals with files. The File menu has an "Exit" item. Does it Exit the File? OBS has "Always on Top" as an option within "File".
Or we can look at it from the other side. "Where do I find the settings for this app?" Is it in File, Edit, View, Tools or Help? The answer depends on the app. Menus can be well organized or poorly organized, but I don't think we should be banishing menus. The hamburger icon is just an iconic, space-efficient way to say "menu".
There are guidelines for what should be in menu bars, what order the items should be in, and how they should behave for different sorts of application.
> "Where do I find the settings for this app?". Is it in File, Edit, View, Tools or Help?
On Mac, it should be in the "App" menu and called "Preferences...", and it should be the first item in the menu, except if you have an "About YourAppName" item, which always goes at the top with a separator underneath.
On Windows, it should be in the "Tools" menu, and called "Options...". On Linux, it should be under the "Edit" menu, and called "Preferences".
If it's not, then the app (in my opinion) is broken. Of course, many apps are lazily ported across platforms or "hamburgerized". The Windows situation is obviously also more messy because of the newer "ribbon" standards complicating things a bit, but there are right answers.
Of course there are ancient, rarely updated guidelines that almost all applications ignore. Microsoft indeed has that great guide, and yet their own applications implement it inconsistently, and often incorrectly according to the guidelines. Apple is a tad bit better, but third party apps rarely get it right. The best thing Apple has going for it is specifically on the question of where Settings is: there is a dedicated app menu, so it just makes logical sense to put Settings there. Or was it Preferences? Options.
> On Linux, it should be under the "Edit" menu, and called "Preferences".
This one is a stretch, there are no such consistent and agreed-upon guidelines.
My point here is that just because things can be abused (or even, in the case of desktop applications, are almost _always_ abused), doesn't mean the general concept is useless or should be removed from applications. I would recommend instead of trying to cram a hundred possible actions into a tiny icon bar at the bottom of your iPhone, or removing 97 of those actions from the app itself and leaving just the most basic three down there, to maybe consider building a sensible and well-arranged menu layout _if and when_ you need more than those three that you can fit at the bottom.
> Of course there are ancient, rarely updated guidelines
You mean well researched guidelines with tons of evidence behind their decisions [1] that are as relevant today as they were 20 years ago.
> almost all applications ignore.
That most current applications ignore because people think they are ancient rarely updated guidelines.
This is especially evident in the MacOS world. The HIG was the guide on the platform, and developers tried to adhere to it. And then the new breed of "designers" took over, and even Apple breaks nearly every single one of its guidelines [2].
Microsoft has always been worse when it came to enforcing consistency of user interfaces, but even their choices were never random until quite recently [3].
Wait, there's a standard for ribbon menus??? I thought that was just a horrible failed experiment that drove a huge number of people away from the MS suite towards Google Docs and was abandoned...
This is a problem of poor app design, not the stunning indictment of traditional menus that you seem to think it is.
A poor menu layout can be fixed. A hamburger menu will always be a byzantine mix of everything because everything has to be in there. There is no organization, except flyout menus--which were a problem with traditional menu designs, but you could organize without them.
Hell, some applications have just moved the traditional menu behind the hamburger. Why? Because that was a good way to organize operations, even if it wasn't perfect, or wasn't always perfectly implemented.
On macOS, both the Quit and Preferences items are consistently in the application menu (the menu left of "File" that has the name of the application). And the shortcuts are consistently Cmd + q and Cmd + ,
Well, unless it's an Electron app. Which is another good reason to avoid them altogether.
> The File menu has an "Exit" item. Does it Exit the File?
Does it? This is the issue when people see no further than their own OS.
In the past, just like today, there were multiple ways to exit a program. One of them is to press Exit and the program will exit cleanly. The other is to press the X in the window corner in Windows (or send a close message from a window manager) which, depending on the implementation, might lose data. There is also the option to kill the process. Now in my opinion this Exit is a good thing and it is clear what it is doing.
> The File menu has an "Exit" item. Does it Exit the File?
Happy to know I am not the only one who considers this weird.
It would probably make more sense to have an "Application" menu before "File". It could contain things like "About", "Help", "Settings" and "Exit". (Especially in cases where the current "Help" menu only contains "Show help" and "About". If it has more than five items, then I'd say it deserves a separate menu.)
But instead of being an ordinary, honest menu where half the items have their defined places on each OS and the rest can be found easily, it is a jumbled mess.
And since there are no standards for where to put things in a hamburger menu, they can move around between each release of the application.
And the only reason to use it instead of an honest menu is because Chrome does it so therefore it must be a good idea. Period.
It is somewhat ironic that even as desktop screens get larger and larger, applications have to "maximize screen real estate" by removing menus, while simultaneously working to remove real screen-space improvements like the option of Tree Style Tabs on Firefox (yes, it kind of works and it is still awesome, but it gets harder and harder by the year).
> In modern Windows, press Win-key and start to type, oops, it missed the first keypresses
This no longer seems to be the case, does it? I recently and accidentally typed Win+term+Enter into a Windows VM (intending it for the Linux host machine) and was surprised when a terminal popped up.
Used to be they had time to get UIs right. Now, if you have the same UI for 6 months people think you've stalled and start looking for something new. Everything has to be constant churn, constant change. Far too many people today get bored way too easily.
> Another thing is type-ahead. I remember in classic MacOS, people pressed shortcuts and started to type the filename or whatever. It was all perfectly recorded and replayed. In modern Windows, press Win-key and start to type, oops, it missed the first keypresses, presented the completely wrong results and made a mess of your workflow.
Arguably the worst offender in macOS these days is the emoji picker. Press ctrl+cmd+space and, depending on how recently you last opened it, it can take some hundreds of milliseconds to appear (on an M1), all while eating your keypresses.
I agree 100% with the author, the old GTK button was gorgeous, it will be missed.
> I have had to explain to people tons of times that the random word in the UI somewhere in an application is actually a button they can press to invoke an action.
This is one of my biggest complaints with the super flat modern designs. Many widgets lost their skeuomorphic depth, which encoded a lot of visual information (the clickability, the current status), but in many cases nothing was added to make up for the loss of those visual cues, so now it is just a label (or a label in a white or grey box) and there is no way of knowing if it is clickable or what its current status is.
> I have had to explain to people tons of times that the random word in the UI somewhere in an application is actually a button they can press to invoke an action.
Exactly this. I often help others use computers and phones. In the old days it was easy to see what could be clicked. Now input and output look the same. It makes it harder to use.
In many ways it is a victory of style over substance - UI's are now designed by the same crowd who designs high fashion, that is clothes not designed to be worn but to be gawked at.
> In many ways it is a victory of style over substance - UI's are now designed by the same crowd who designs high fashion, that is clothes not designed to be worn but to be gawked at.
I'm an art-school educated designer, a full-time web developer for over a decade and a regular FOSS contributor for about as long, and a regular FOSS user since the late 90s. Like most other designer/developers I know— there are way more than you think— I contribute code regularly but never design work. Why? Because it's a sucky experience.
Most FOSS UIs are akin to someone's first website made from cargo-culted code from free tutorials. Fixing it is harder than starting from scratch and either approach takes significant intellectual work before even seriously proposing changes... and those proposals are received with something on a spectrum of suspicion to outright hostility.
Would you contribute code to a project run by people with no coding experience who were nonetheless extremely opinionated about code, bikeshedded and poopooed all code changes as a matter of course AND referred to developers and their work with the same glib contempt you and so many other developers here displayed in your comments? Gosh I hope not.
I often hear FOSS developers lament lack of designer involvement, but won't even entertain the prospect of having any culpability for that. I mean, come on.
UI design as a discipline fundamentally assumes the person designing the interface doesn't intuitively understand what's better or what's worse— they should investigate, check, and confirm their strategies. The problems you see in UIs are because the people running the projects solicited the wrong kind of designers or let people without subject matter expertise trample on core parts of the design.
To some extent you can tell from their title and previous work, just like with developers. For example, I had an entirely non-technical boss who understood I might not be the guy to rewrite some printer drivers because I was a web developer. He didn't know the specifics, but being in charge, knew he had to ask someone who did, or do enough research to figure it out.
Likewise, UI designers will specialize in designing UIs and be better at making buttons look like buttons than Graphic Designers, and Experience Designers will be better at integrating user feedback and research into projects. Just like you wouldn't trust any one developer to implement critical functionality you don't understand without outside input, you probably shouldn't rely on one designer to do that either. If you maintain a project, though, you can't expect designers to instinctually work around what you don't know. Being in charge means that you're in charge of figuring out how to evaluate it. I'm positive that an "I'm not sure how to interface with this sort of thing. Let's work through it so I can figure it out." will be received kindly by people you should consider working with. Good design proposals should already come with explanation and justification to help you down that path.
This is a random selection from a google image search of proposal ideas. It was from another post where I was talking about higher-level topics but the principle is the same. Changing a set of control widgets should require no less thought and explanation.
> Developers absolutely refer to developers with glib contempt.
ok— now copy and paste the rest of what I wrote. An overly opinionated, defensive person with veto powers that understands the purpose and value of your work is fundamentally different.
The complaint about flat design isn't levelled only at FOSS, but at the whole industry. Your points may explain the situation in FOSS, but it doesn't explain the poor work being put out by the thousands of designers and UI/UX people working as professionals in industry.
Part of me wants to throw up my hands in the air and cry out "Why, God, why?" when I encounter yet another UI with flat design.
It is like a mass delusion or something that some segment of the population seems to think that flat design is actually useful for the everyday user.
We've seen other stupid useless crap spread like a wave across our industry, across our society... so I guess I shouldn't be surprised. Though I've started to question my own sanity.
Modern UI design is optimized for screenshots in a PowerPoint, and from there, to screenshots in a portfolio and on marketing pages where the audience that needs to be satisfied is marketing themselves, and especially people very concerned that every piece of everything be "on brand" and who are entirely sure this matters a ton for making sales.
a) there probably aren't many lead designers, creative directors and art directors on this developer-centric site
b) interface choices for most commercial products are deliberately made by other people
c) interface choices for most FOSS projects are made by developers
Second, I didn't directly address the criticism for the same reason I don't go into game forums and argue with the inevitable teen tech wizard lobbing glib, unsubstantiated technical criticism at the "stupid devs." Their peers might believe them when they say that a "microservices architecture" caused low frame rates in the last release, but professional developers will roll their eyes hard enough to sprain an eyelid.
I won't waste my time with a point-by-point teardown, but if you're actually interested, here's the first half dozen unsubstantiated assumptions I've seen here about design/designers and the design process:
- 'Flat designs' are uniform enough to judge their value as a unit.
- A bad flat design was bad because it was flat— not the hundreds of other problems a design can have.
- Designers change things solely to suit their taste or follow trends
- Designers don't commonly test or measure usability with quantifiable, auditable data
- Bad usability is acceptable if it looks good.
- How well you parse something is representative of how everybody else does regardless of their culture, age, experience with other objects, experience with computers, vision, disabilities, etc.
- Skeuomorphism was the most effective form to convey those visual cues.
aaaand the list goes on. It's fine that many developers don't totally understand how design works because nobody expects you to be a subject matter expert in anything other than development. That said, confidently making sweeping judgements about design and designers when you don't understand some important fundamentals is just bad form.
> UI design as a discipline fundamentally assumes the person designing the interface doesn't intuitively understand what's better or what's worse— they should investigate, check, and confirm their strategies.
As a developer who has always had an interest in UI design but has zero formal education in the field, I'd be really interested to know more about this. Could you maybe explain some of those strategies?
My subjective impression is that design has shifted from building a consistent "language" to a more goal-driven and data-driven approach today. The product owner defines a list of user stories and UI is primarily concerned with making those user stories as frictionless as possible - even if this means a less consistent overall design and even if it means that less common features become harder to use. Performance is measured in a feedback loop through telemetry and A/B tests.
I might be wrong though, so I'd be interested in a qualified opinion.
(Also disclaimer: I grew up with Windows 98, ME/2000 and XP. So I guess this is my "good old times" spot then, where I'm wearing the rose-coloured glasses.)
Yay! UI design is cool. I really love helping people solve their problems with software and often find the intellectual work involved with crafting their interactions to be far more interesting than getting the best algorithm for something, implementing the most reliable architecture, etc.
What I believe you're noticing is the adoption of ideas under the (poorly named, IMO) UX umbrella. UI design is either considered part of it, or close enough to get the UX/UI slash treatment. It's not quite there as an idea— people can't decide if UX people are the same or UX Researchers are different than UX Designers or if UX Designers just do wireframes and user flows or also design UIs or if that's left to Interaction designers, blah blah blah. The base ideas seem to be an amalgam of human factors engineering, graphic/media design communication theories, and quantitative marketing type work.
While there are hundreds of trillions of articles on the topic by people thirsty for medium claps, I think the most interesting jumping-off point might be an image search for ux design process and using the charts you see to guide an exploration. The Nielsen Norman group has a TON of stuff online about the topic. Not just design itself, but measuring the maturity of usability organizations, research techniques, best practices and data strategies... I mean all kinds of stuff.
I think a lot of the ideas are tremendously valuable but much of what's written about it feels a little bit too much like marketing material. The ideas are presented a bit too confidently considering how often they change, and too much has that LinkedIn magic-bullet kind of vibe. Also, like tech, the industry is subject to spike trends (like tech saw with NoSQL databases) and pendulum swings (like centralization vs. decentralization of services, thin vs thick clients, etc.) For something a bit more structured, the former Lynda.com, now LinkedIn Learning, has some really fantastic educational resources on modern design of nearly any stripe.
I'll swing back through if I can think of any specific resources worth checking out.
The one that irritates me these days is working out at a glance what has input focus. Window borders are so thin, the difference between focused and not is minimal, and apps have custom chrome, so while titlebars are sometimes a good indicator, often they are not, or are just too subtle, etc. MS Office apps are an offender here.
Which of those has focus? There is actually a difference (and to take the screenshot they overlapped making it obvious that way) but it is subtle. Try spotting it reliably when they are on different monitors.
At some point it'll annoy me enough that I'll write a util to scan for the current window that is top of the stack and draw a bright green border (or otherwise unmissable clue) around it… It'll look ugly, but I'll darn well know where what I'm about to type will go!
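(If anyone wants to try the same thing: below is a rough, hypothetical sketch of the idea for Windows, assuming Python with ctypes and tkinter. The thickness, colour and polling interval are arbitrary choices, and a real tool would need to deal with clicks on the strips, negative multi-monitor coordinates and so on.)

```python
# Rough sketch: highlight whichever window currently has focus on Windows.
# Polls GetForegroundWindow() via ctypes and surrounds that window with four
# thin, always-on-top green strips drawn with tkinter.
import ctypes
import tkinter as tk

user32 = ctypes.windll.user32   # Windows-only
BORDER = 4                      # frame thickness in pixels (arbitrary)
COLOR = "#00ff00"               # unmissable green

class RECT(ctypes.Structure):
    _fields_ = [("left", ctypes.c_long), ("top", ctypes.c_long),
                ("right", ctypes.c_long), ("bottom", ctypes.c_long)]

def foreground_rect():
    """Return (left, top, right, bottom) of the focused window, or None."""
    hwnd = user32.GetForegroundWindow()
    r = RECT()
    if hwnd and user32.GetWindowRect(hwnd, ctypes.byref(r)):
        return r.left, r.top, r.right, r.bottom
    return None

root = tk.Tk()
root.withdraw()  # no main window, only the four border strips

strips = []
for _ in range(4):
    w = tk.Toplevel(root)
    w.overrideredirect(True)        # no title bar or decorations
    w.attributes("-topmost", True)  # stay above other windows
    w.configure(bg=COLOR)
    strips.append(w)

def place(w, x, y, width, height):
    w.geometry(f"{max(width, 1)}x{max(height, 1)}+{x}+{y}")

def tick():
    rect = foreground_rect()
    if rect:
        l, t, r, b = rect
        place(strips[0], l, t, r - l, BORDER)           # top edge
        place(strips[1], l, b - BORDER, r - l, BORDER)  # bottom edge
        place(strips[2], l, t, BORDER, b - t)           # left edge
        place(strips[3], r - BORDER, t, BORDER, b - t)  # right edge
    root.after(100, tick)  # re-check roughly ten times a second

tick()
root.mainloop()
```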
i don't think flat design is really a question of style over substance. flat design has been popularized because designers like it, but designers like it because it's easy.
a flat element can be re-coloured without having to worry you've gotten the shadows correct on the new color. you can put two buttons beside each other without worrying that one appears to have more depth than the other. we make things flat because we're lazy, not because we think it looks good. a culture of lazy designers has convinced people it looks good, so we can continue being lazy.
Yeah I keep saying that modern UIs are designed to be looked at and enjoyed like museum art pieces, not to be actually used. They look nice-ish when static, but they're unbearable in real use.
In the case of the article, that is somewhat unfair.
> GTK4 has been in development for a bit and has improved a lot of the internals. One of the great upsides is that it can take more advantage of the GPU when rendering the UI.
So there was a clear need to rewrite parts to fix internals.
> The Adwaita theme has also been nicely carried over and looks very similar to the GTK3 counterpart.
...
> When I want to make a gtk4 application for mobile I would need libhandy, but libhandy for gtk4 is not a thing. The "solution" is libadwaita. This provides the widgets I need but it comes with the downside of having some of the worst decisions in application theming.
So, there we have it: it's not "UI folks just changing stuff", it's a ground-up rebuild of a UI language.
Now, whether or not that ground-up should be flat, modern, or just copy the old, is another discussion.
> In many ways it is a victory of style over substance - UI's are now designed by the same crowd who designs high fashion, that is clothes not designed to be worn but to be gawked at.
Is it? Style is no style at all if the thing in question does not fulfill its function. The whole point of something stylish is that it accomplishes its end so well and respects the appropriate constraints that it pleases the intellect when it recognizes this perfection.
So if someone is designing a UI that is difficult to use, that is a failure of both style and substance since there is no style without substance.
A couple of times I found myself clicking on various text labels just to see whether they were titles or buttons. All these flat minimalist designs are a step in a horribly wrong direction.
I've seen all 4 combinations of "looks like" vs "is actually" {button,static text} on a web app I was once forced to use. Some of the plain text was actually a clickable button (with NO hover effect), and some of the "buttons" (or more precisely, short actionable text with a rectangle around it) were actually just labels. Imagine the word "Order" presented in both styles, and my astonishment when I figured out which one I actually had to click.
I'm also really not sold on those new "tabs" which are just text with a coloured underline. It's low effort and dreadfully unclear. I can only vaguely guess what they are based on their upper placement, but what's to really distinguish that from a menu? Or just a descriptive label?
I don't like saying this because I want Linux desktop apps to have every success, but these small and pointless frustrations kill my enthusiasm.
I spend far too long in GTK apps looking for the right menu to do basic stuff. It's like someone once saw an iPad from across a room and tried to implement what they remembered of it.
The flatness of some UIs, along with the removal of visual cues, has meant that I have tapped on things that aren't actually buttons, thinking that they might be.
When I make my own apps one day, I am going to ignore design fads and only make intuitive interfaces.
We started using Full Story at work. It tracks clicks on non-clickable items. It is staggering that this is even a problem. Having to explain to people that links and buttons and radio boxes and checkboxes should look uniform has gone the way of the dodo, and now we must instead have people clicking random items and hoping something happens.
This rattled my brain. That this is a feature now says volumes about the decline in UI quality. I am a developer, and mainly use tools on the command line, or with TUIs.
But whenever I use Android apps and websites, I constantly find myself long-pressing or right-clicking things, hoping for stuff to happen that just doesn't.
Visually nice but it's almost always going to overlap the thing you've right clicked. Under any circumstance I still want to see the thing I've right clicked.
Interesting concept, but I would add keyboard shortcuts (underlined items) to the menu to make it more of a reminder than be forced to use the mouse all the time.
This and many variants of the idea have been explored in games a lot - in particular in point and click adventures and isometric (or at least top-down view) RPGs. Probably also in strategy games and anything else that has both a cursor and context-specific actions.
Misuse (or no use) of affordances is so, so annoying, both in software and real life.
Things like having pull handles on both sides of a door that has to be pushed from one side. Or having no push plate on either side of a glass door. Do I push? Is it a sliding door that's not working? Don't know!
My car has the climate controls hidden behind a graphic of the current state on the stupid touch screen. It used to be buttons that had three zones for head, body and feet, a wheel for fan and a wheel for temp[1]. Volvo used to pride themselves on having controls you could use with gloves on, so all the buttons had 3D features and a positive press feel.
Now it's not even obvious the graphic is a button at all (it's also incredibly dangerous as you cannot do anything without a multi-click modal process with only visual feedback that you have to look at to use). Thankfully the window demister is still a physical button because that would be incredibly dangerous to hide behind a soft screen interface: when you need it you need it. However, it's lost its 3D profile and is now just a flat button.
My office has an amusing one: there's a door at each end of the building. The building is the same on both sides, neither side looks more "fronty" than the other, and the doors are the same. Only one of these is the main entrance. The door next to the car park (i.e. where anyone unfamiliar with the site will arrive) is not the main door. Therefore there are no call buttons. However, there's also no indication that there is another door on the other side. A whole building basically has no obvious start menu. Just...why do that?
100% agreed. I want buttons to, at minimum, indicate that my click or touch was successful. Even a little wiggle. Flat design has me wondering with no visual indication, and I hit this all the time.
As a side point, I predict serif and sans-serif fonts will also go through similar phases, though over a longer time scale. We're in a very long sans-serif phase now but I think serif will be "in" in 20 years.
FWIW, Macintosh System 6 was just black and white (and only 640x480px), so 3D wasn't really tenable.
System 7 and 8 had support for color, and were as 3D as they could muster, with highlight and shadow treatment on buttons and other widgets, and embossed, draggable thumbs.
Mac OS X brought Aqua, which was a pinnacle of 3D and skeuomorphic design. You know you're going hard for 3D when you add a drop shadow on all the text in your menubar:
Recently I’ve been enjoying serif fonts in more places than I used to, because I instructed Firefox not to allow sites to override my font choices (so I get exclusively Equity for serif, Concourse for sans-serif and Triplicate for monospace), which is quite pleasant and relaxing in general (Google’s foolish/poorly-implemented ligature-based icon font technique is the only notable breakage/uglification I’ve found in the couple of weeks I’ve been doing this), but apparently it’s more common than I realised for people to omit the fallback “sans-serif” or “serif” or “monospace” that they should always have on their font-family stacks, and my fallback default is serif. (e.g. I’m just now looking at a `font-family: 'Lato','Helvetica','Arial';`, and yesterday examined a `font-family: some-web-font, Open Sans;`.)
I'm using macOS 12 right now and there are a lot of 3D elements especially if you compare it to Windows 10 or Gnome in the screenshots of this post. It seems to be a mix of both approaches (similar to GTK 3).
It's not very different. I don't know if "flat" was ever the right word to describe the look it started moving toward post-Aqua; I think I'd be more inclined to call it "minimal", occasionally to a fault. With a few exceptions (most notably the bonkers choice to make all keyboard shortcuts gray in menus so at first glance they all look disabled), though, I mostly like the look that Big Sur's ushered in.
Borderless buttons do have the right to exist, but you should be very careful with their use. For example, take this UI I made recently: https://mastodon.social/@grishka/107998100334356147, 2nd screenshot. The button to decline the invitation looks like a link (same color) and it's next to a real button. No one would ever get confused by this, it's pretty clear it's clickable.
But a black word in a larger font among black text would definitely NOT be recognized as a button by most people. Context matters a lot.
I disagree. I had to sit and think for a few minutes as it was extremely unintuitive - my first impressions were that I couldn't decline the invitation at all, and the button was disabled or missing for some reason (I've seen some bad CSS in my career so it no longer surprises me when it goes missing).
It does not look like a button. It's just a label floating around.
When designing user interfaces, use the mantra: "Don't make me think!" Don't make the user think about whether something is a button or not. Don't make them have to infer that the label is in fact a button due to context.
I am but an ignorant user, but making you think to complete one action but not another seems to be a significant part of the point. I find it deeply coercive, and hope to see the other side of this trend sooner rather than later.
Moving the goalposts, but the decline button in your example looks significantly harder to engage than the join button, and engaging with it would cause at least a twinge of anxiety in anticipation of having to essentially coerce the interface to accept that I do not wish to travel the gilded path.
It frustrates me that this sort of design has become not just acceptable but celebrated as 'correct', and I long for the days when interfaces did their best to seek my enthusiastic consent to my chosen course of action.
It's not very clear to me that it's clickable. Is it an action, or a link to a different page/view? If it's an action (accept/decline invite), make it a button. If it's navigation, make it a text link. You are giving totally disparate cues to the user when the actions taken upon clicking are of the exact same type.
My initial reaction to a piece of text directly next to a button is "why is there random unrelated text there?" and then the sensation of anger as I come to the realization someone consciously decided to make me think about the UI more than I needed to. It's purely a distraction.
Just make it look like a button. There is literally zero reason not to, and you can eliminate the chance of confusion. Why increase the possibility of uncertainty at all? What value does that provide to anyone?
Every UI or design element that causes me to think about it wastes precious moments I could have been thinking about something I actually cared about, for example the actual task I was in the middle of. On top of that, when you use unconventional design, even after someone learns the seemingly-arbitrary, specific quirks of the design you've used, you will cause them to think about it all again later when you redesign the application.
Literally zero reason not to? I think it’s emphasising the “default” action quite nicely, and de-emphasising the other action quite appropriately.
“Decline invitation” needs to be something you can do, of course, but it will be used < 1% of the time, I imagine.
I find that having buttons side by side forces me to think in a mildly unpleasant way, like I have to read two things more carefully because there is more onus on me to decide which is the typical action.
No, that's still not a reason to conceal the behaviour/purpose of the UI element.
If you want to emphasize/de-emphasize a UI element, you can do something like what was established on most GUIs 30+ years ago: make the default/safest option have a thicker border, and the secondary/de-emphasized option a thin border. Early examples (1985 and 1991 respectively):
Note how immediately-obvious it is which action is the "typical". You don't even have to read the text, you could even squint your eyes or glance from a great distance and still understand which button is the one you probably want.
UI design involves trust, especially for software where people are trying to actually get something done. Making unconventional designs erodes your users' trust. Have respect for them and what you can reasonably expect them to "know" arriving at your software, and they will have an easier time and love you for it. As we can see, these conventions have existed for decades, and deviating from them warrants a very compelling reason, not just "cuz it looks neat".
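(As an aside, the toolkit the original article is about already has a blessed way to get this kind of emphasis without demoting the secondary action to bare text. Here is a minimal PyGObject sketch, assuming GTK 3 and python3-gi are installed; the labels are just placeholders for the invitation example above:)

```python
# Minimal GTK 3 sketch: both actions stay recognisable as buttons,
# while the primary one is emphasised via a stock style class.
import gi
gi.require_version("Gtk", "3.0")
from gi.repository import Gtk

win = Gtk.Window(title="Invitation")
box = Gtk.Box(orientation=Gtk.Orientation.HORIZONTAL, spacing=6)
for set_margin in (box.set_margin_top, box.set_margin_bottom,
                   box.set_margin_start, box.set_margin_end):
    set_margin(12)

decline = Gtk.Button(label="Decline invitation")  # ordinary button look, naturally de-emphasised
join = Gtk.Button(label="Join")
# GTK's built-in emphasis for the primary action: a heavier, accent-coloured
# button, instead of stripping the other action down to a bare label.
join.get_style_context().add_class("suggested-action")

box.pack_start(decline, True, True, 0)
box.pack_start(join, True, True, 0)
win.add(box)
win.connect("destroy", Gtk.main_quit)
win.show_all()
Gtk.main()
```

Most themes render "suggested-action" with a strong accent colour, so the hierarchy is obvious at a glance while the decline path still reads as a button.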
The button to decline doesn't add an entity. There are already two options to consider whether to accept or decline. Making the external state and internal state match avoids confusion.
> No one would ever get confused by this, it's pretty clear it's clickable.
I don't know how you can reach that conclusion, because we only recognise it because of how many times we've been fed this exact pattern over and over again in flat UIs (and failed to realise it was clickable the first N times)
>No one would ever get confused by this, it's pretty clear it's clickable.
It's pretty clear that it's clickable, but I definitely wouldn't think it's a button. My best guess would be that it's a navigation element, and that clicking on it brings you to some other page where a decline invitation button might be found, or where you have to fill out some form to explain why you're declining.
Part of it is because it looks exactly the same as "Test User", and presumably that isn't a button... I imagine that is a navigation element. But part of it is also because quite a few recent Microsoft corporate products (Sharepoint, Dynamics, etc.) seem to use the idiom you're trying to use for signaling that something isn't a button.
Like so many others pointed out, this is wrong. I got confused: it looks like a link. Would that take me to some kind of unsubscribe page? Is it phishing?
buttons should be actions,
links should be directions,
directions would take me somewhere, actions would have an effect and consequence. That you think this is okay and assume no one would ever be confused by this is clearly an issue in the industry at large.
That's an action link, not a button. In a UI context it would be rendered in a different color and underlined, to highlight it being an active element rather than a simple label.
I too hate the Material Design kind of flatness. Only Apple somehow gets UI right (at least for me) and it feels very intuitive while also looking very good.
The new "flat" design that pops up everywhere sometimes just feels like a lazy version of UI design, where you basically don't need proper styles anymore and just make everything b/w with maybe a border here and there.
My understanding of material design is that it tries not to be too flat. Material design reduces the 3d roundedness and gradients of many elements but still relies heavily on drop shadows to indicate hierarchy and overlapping.
> I too hate the Material Design kind of flatness. Only Apple somehow gets UI right (at least for me) and it feels very intuitive while also looking very good.
I dunno about that (both Apple using flat, and the "gets it right" part).
I'm looking at activity monitor now, under memory pressure, and the table of memory types has "Memory used" row expanded, and yet none of the rows are actually clickable. I spent a few seconds yesterday trying to click the other rows.
Then I looked at system preferences, and it's not actually flat widgets - they have relief for those things that can be interacted with. Things that are clickable are visually marked as such.
>I'm looking at activity monitor now, under memory pressure, and the table of memory types has "Memory used" row expanded, and yet none of the rows are actually clickable. I spent a few seconds yesterday trying to click the other rows.
I can see why you might get confused there but it feels like nitpicking. I never expected to click on any of those elements because they don't look like a traditional table or outline view.
It seems like KDE and Plasma is the place where look and feel is constantly but incrementally refined; we've been saved from this flat design trend so far and things keep looking pleasant and modern and being usable. We know they care, and have great attention to details because that's documented weekly [1].
I find the Breeze theme really well done and its GTK port, Brise, is also very nice, to the point Gnome looks good in it.
There was just the KDE 4 era where I didn't like Oxygen at all (and indeed I used to change the theme to Fusion there) but that's over. KDE 3 was fine and KDE 5 is great. At this point, most things that are not Breeze don't look great to me now.
As for customization and theme support, that's supported and it works well; they prove that it's nothing insurmountable either. KDE comes with themes that look like Windows 95, Motif, Adwaita, GTK 2 and other things, and you can download more if you want.
I'd be curious to have a review of Breeze / KDE by Martijn.
• switch to sidebar view and examine the buttons at the top of the sidebar
• go Workspace → Workspace Behaviour → Desktop Effects and examine the buttons at the right
• go Workspace → Workspace Behaviour → Virtual Desktops and examine the buttons at the right
• go Workspace → Workspace Behaviour → Activities → Activities and examine the buttons at the right
• go Workspace → Window Management → Window Rules and examine the buttons at the right
• go Workspace → Startup and Shutdown → Autostart and examine the buttons at the right
• the most egregious example: go Personalisation → Regional Settings → Formats and examine the whole dialogue, it is entirely made of frobable regions
• go Network → Connections and examine the buttons at the bottom of the connections list
• go Hardware → Printers and examine the buttons at the top of the dialogue
I do not understand what goes on in the responsible programmers' heads. Why does the implementer reinvent buttons badly, instead of using a standard button? Is there no one reviewing? Is there no one saying "no, we cannot burden a KDE user with this bad usability, I will not merge this code"?
How's KDE/Plasma with touch these days? I'm running GNOME on my Surface right now because it does pretty well with both Desktop and Tablet modes, but I'm pretty unhappy with the at times patronizing philosophy behind the project.
I have a pine-phone and the virtual keyboard just refuses to pop up for some apps (like firefox!!). I keep looking in the settings for "force virtual keyboard on" or something but can't find it.
I like how sxmo lets you open/close the keyboard whenever you want by swiping up from / down to the bottom edge of the screen, regardless of what apps are open or what’s happening within them.
It indeed works now. Firefox was unusable on Plasma Mobile in 2020, it now mostly works on an updated Pinephone Pro (can't click on those popups when you try to install an extension, but the virtual keyboard shows up and seems to work ok).
Turning scaling down to 1.8 (default 2.0 on Manjaro/Plasma) or lower lets Firefox work on the screen; I didn't have this problem on pmOS; either the default scaling is lower on pmOS, or they have some other workaround.
There are some minimal visual elements, e.g. if you choose the tabbed layout.
So the comparison for XMonad wouldn't be all of KDE or Gnome, but just how those two choose to decorate their windows. E.g. their 'minimize' and 'close' buttons and window borders.
To be slightly more serious than my original comment was:
If you can rethink your UI in such a way that some things can become invisible, that can be a very ergonomic choice.
To give a better example: look at the bad old days of C and memory management via malloc and free.
One direction you can go into is Rust. Compared to C, Rust has a greatly improved user interface [0] for handling memory allocation.
Another direction you can go into is Python. Compared to C, memory management is basically invisible in Python. It just works.
Now, of course, Python gets to simplify its UI by essentially removing control from the user. But for many programming tasks, that's a good trade-off to make.
Similarly, iOS gets to drop the UI elements associated with manipulating windows, because it puts every app in full screen. (And XMonad greatly discourages you from fiddling with Window placement and layout manually; but has some less-intuitive less-discoverable means to do that manual fiddling, if you need it.)
[0] The user of Rust being typically called a 'programmer'.
I abandoned KDE during the KDE-3 (edit: to 4) migration apocalypse, when nothing worked as it did before, or at all, and everything looked like it needed at least another year of refinement.
Is this settled and past now? Is KDE still adamant on making virtually every other pixel configurable?
KDE 3 was rock solid. They did break a lot of stuff when releasing KDE 4. That was very unstable / crashy. At that time I moved to Gnome 2, and moved back when Unity was shipped by default in Ubuntu. The late KDE 4.X releases were rock solid. I didn't like KDE 5 at first. A lot of things felt unfinished / not ready. But KDE 5 has been rock solid for years now, and from what I understand they have promised they would make more incremental changes from now on (so no more KDE 4.0-like disruptions). They are currently focusing on ironing out the small things instead of breaking stuff every month.
Now the world seems more mature: Cinnamon, KDE and the Ubuntu flavor of Gnome seem to work very well (each time I see Ubuntu Gnome I think "wow, that looks good!"). Xfce has been rock stable and reliable for years and years for people who like it. Still looks the same as when I discovered it 17 years ago. I've heard people like the current versions of Gnome and that the latest versions are better.
And yes of course everything is still highly customizable in KDE. There are people who make it look like old Windows, some who make it look like Mac, and both work quite well. But you are not forced to customize; the defaults are top notch and that's what I use. They've been working on the settings center, which was a mess and is still not perfect, but it's gotten to the point where it is one of the best I've seen. In comparison, Windows' several settings centers are a huge mess and the Gnome settings are lacking, and you need to install Gnome Tweaks to get some useful stuff, and now you have two settings panels.
I used KDE 4 from the first alpha versions (~2007) and never quit using it. However, I had friends who didn't like the mess that KDE 4 was in the beginning and who were very verbose about the good KDE 3 features they were missing and the new behavior they didn't like.
However, at some point they saw that the very few things they were still missing were negligible compared to the good things that KDE 4 brought with it.
Nevertheless, 2007 to 2012 were 5 years during which it wasn't easy to be a KDE user ;-)
> I'd be curious to have a review of Breeze / KDE by Martijn.
He mentions KDE in the blog post:
> The feedback I get is that I should move to QT/KDE, but I think that theming has had the same issues for way longer already and I do really like the Gnome HIG.
GNOME 3 was a fiasco not because of GTK+ 3. It was a fiasco because they threw 12 years of work and UI/UX design in the rubbish bin instead of thinking of ways to improve on it. MATE now runs on GTK+ 3 and it's leaps and bounds better in terms of UX compared with GNOME > 3. Sure, GNOME now has all the eyecandy and stuff, but it still sucks hard compared to GNOME 2. GNOME 2 was the pinnacle of UX IMHO, it was a novel design that really showed that you could innovate without copying others, that there was a third possible way that was not either a Dock or a start menu. All wasted, I suppose.
Breeze is being ported to Qt 6, and from the screenshots I see (https://www.volkerkrause.eu/2022/01/15/kf6-continuous-integr..., unsure what others), there are no visual changes so far. Time will tell if they change Breeze's appearance further before/after Plasma 6's release (I didn't ask about KDE's current plans). One possibility is O2 (https://pinheiro-kde.blogspot.com/), a reworked theme by the author of KDE 4's Oxygen, though it's not very far into implementation yet.
That's because there's no Breeze theme for Qt 6, so it's using the builtin Fusion theme which is literal cow dung. If you port your theme to Qt 6, the app should look identical.
I do not use a theme. I think you have it the wrong way around. What's missing here is the Qt6 equivalent of KDE System Settings → Appearance, or configuration tool <http://qt5ct.sf.net>, and it's not my task as the end user to provide these, but the toolkit and desktop environment developers'.
You always use a theme with Qt or Gtk. On Plasma the default Qt and Gtk theme is Breeze, which has not been ported to Qt6 yet. ATM the only Qt Widgets-compatible themes I have on my system are "Fusion" and "Windows" (as in "Windows 95"), both of which look absolutely horrible. Fusion is the default, and it's what you currently see when you open a Qt 6 Widgets app (for instance, QBittorrent right now).
By the way, qt6ct exists <https://github.com/trialuser02/qt6ct>, and it's even in Arch's repositories right now. The main issue is that it's useless, because there are simply no Qt6 themes out there yet and KDE still does not support Qt6 so you have to force qt6ct manually.
> It seems like KDE and Plasma is the place where look and feel is constantly but incrementally refined;
Every time I see a comment like that, I try KDE again, only to leave it 15 minutes later because it is a huge mess. Every time I try configuring the panel to my liking, nothing goes where I want it to and I end up with widgets everywhere.
> I feel like the designers of this new theme have never sit down with anyone who's not a "techie" to explain to them how to use a computer.
The designers of this new "everything is flat and devoid of visual hinting" trend are maybe, at most, enthusiasts, but not professional designers. I don't mean professional in the "hired for work" sense, but in the "really has the knowledge and has studied the psychology behind what makes a good UI design for human beings" sense. Otherwise, nobody with a solid background, experience, and/or scientific studies in their hands would in their right mind sketch the current shape of mindless "modern" UIs on paper and feel like that's very well thought out work right there.
And here I'm bashing not only OSS designers who think a bland and unclear design is the best that can be done, but also the designers at big companies who get paid for their work. Although ironically, now that I think of it, maybe the latter are somewhat less to blame, knowing how the corporate world works and knowing that they possibly don't agree with the results but their boss insists on how all buttons should lack any depth. So in the sense of not having a pushy boss with an agenda to fulfill... yeah, it seems to me that OSS designers are acting worse by voluntarily pushing these supposedly "better", but almost objectively worse, designs down users' throats.
To back this up, the "actually has a degree in HCI" UX professionals I worked with at a previous job railed against flat design and ensured our output was clear and intuitive. Though there were only two of them, so I can't say all of them would be like that!
> nobody with solid background, experience, and/or scientific studies in their hands would in their right mind sketch the current shape of mindless "modern" UIs
So... Since you are talking about scientific studies, could you please reference some studies that back you up and show that flat design is in some way inferior?
That flat design is worse is maybe not a universal truth, but no doubt it is a well known issue and concern has been voiced everywhere. Quoting from my first link below: flat design has been widely criticized by HCI and usability experts.
This is one of those things that I'm willing to believe, without requiring a whole hands-on research, based on my subjective views, and on the fact that every time the topic arises, most devs seem to agree.
So we know that flat design is worse, or at least we have a very firm intuition. The only remaining thing is to quantify by how much. This paper seems to go into that direction:
Thank you for providing these links! I read the two articles.
The first one is the closest to what I was looking for. It contains three mostly independent experiments, comparing text, icons and web layout. Text and icon do not seem relevant to this discussion. The third part of the study compares a "traditional" website design to a new, flat one. This is more relevant to the discussion, but this comparison has three huge flaws.
Firstly, according to the pictures in the article, they were comparing two specific arbitrary websites. It is not clear how this comparison can be generalized. Secondly, the study is from 2015 when flat design language was relatively new. It could measure just the difference between any "traditional" and "new" design language due to the familiarity. Thirdly, the confidence intervals are huge. I don't think there is any pair of values that are confidently separated, which would make this a null result even if the experiment itself were totally fair.
The second article is very interesting, but its conclusions are non-committal. I clicked to several of the links from this article. Most of them are opinion pieces, and none that I found makes any judgement call about the comparative advantage of flat design.
I'm sorry but I ignored flatisbad.com since I believe Xisbad.com can't be an unbiased source of knowledge about X and should be ignored as a matter of principle.
You note that visual and UI design in open source and in proprietary applications show some of the same idiosyncrasies.
You conclude that this makes open source designers more blameworthy.
I would suggest instead, that this is evidence that those evil management overlords in the corporate world are perhaps convenient scapegoats, but actually less to blame than we might think?
macOS has the same problem: everything is flat. There are no borders around many buttons. It's hard to tell what is a label and what is a button, or which controls are disabled, and which are merely de-emphasized.
Apple used to be held as the pinnacle of design, but what does it say when their latest UI is indistinguishable from an amateur design?
I'm going to take a risk here and say that I'm a designer who generally likes flat design.
I see a lot of hate here against flat design, and it makes sense. A lot of designers abuse it, making their UIs ambiguous. It's probably true that if flat design was never conceived, UIs would be less confusing on average.
But it's clearly possible to do flat design well. iOS is mostly flat, but it's quite beautiful and user friendly. You usually know when something is a button because the context makes it obvious. If it's not obvious, it's probably not a critical feature, just a nicety for those who discover it. There will always be people who hate everything Apple designs, but I think they're usually channeling a hatred of something else, like monopolies or proprietary things.
Moreover, I think flat design is not a fad but an inevitability. As a rule, designs become more abstract over time, because as users get more used to UIs, they need fewer clues. It used to be that UIs needed to be skeuomorphic to be familiar and recognizable, but now simple outlines can do the job.
Why do we need to remove unneeded clues, you ask? To make room for more important things. If Instagram was littered with gradient buttons and blue underlined links, they'd draw attention away from the real content, the pictures. If Figma had too much personality, it'd influence users' designs.
You could argue that we've gone too far too fast. I don't have a dog in that fight. But I would bet my big toe that we're not going to reverse time and end up with skeuomorphism again. Instead, we're going to take advantage of faster GPUs to visualize abstractions in ways we couldn't before. We're seeing that with blurred background effects becoming more common. I for one am excited to see what's next.
> If it's not obvious, it's probably not a critical feature, just a nicety for those who discover it.
That's a really bad way to go about usability. It's pretty frustrating, when you're looking for a feature, to have to hunt for it like you're playing a point & click game. At that point I wouldn't even call it a feature, it's an Easter egg.
I would say this is not entirely correct as a standalone rule.
Just to make it clear, I completely disagree with this statement the OP made:
> If it's not obvious, it's probably not a critical feature, just a nicety for those who discover it.
There is a huge difference between discoverability and hierarchy. If the user can't distinguish between an interactive element and a static label, the UI has failed.
But there is definitely nuance in the degrees of hierarchy and importance. If every button looks the same with every affordance necessary for a user to recognise it as such, it might be great in terms of discoverability, but in most cases it would still be bad design because as a user I can't easily distinguish between primary, secondary or tertiary actions.
This is of course also highly dependent on the business goals and overall context of the design. If you have a marketing page you might want to clearly distinguish actions that lead to the sign up or download of your product (this – in most cases – is what you would like your users to do and also what your users eventually want to do themselves).
If you have enterprise software that is highly customizable and completely different for each and every individual, you might want to shy away from an opinionated hierarchy of importance. In this case, it would be very much dependent on the user and therefore their choice.
Typically the context is – at scale – more on the former side than the latter. If I have, for example, a notes app, I would say the hierarchy favors the creation of new notes rather than going into settings to change the appearance of my notes. That doesn't mean it shouldn't be discoverable, only that the design should clearly dictate what's more important (especially on a temporal dimension: I'll add notes more often than I would edit the style) and therefore influence the appearance of both actions.
This is an interesting/surprising reply to me, so I've spent some time thinking about it.
The problem is that design is always a compromise between discoverability and post-discovery efficiency. Consider the swipe-left-to-delete pattern. It's not discoverable, but once you've learned it, you wouldn't want it any other way. Designers sometimes compensate for the lack of discoverability by adding a slower but more discoverable way of doing the same thing, like an "Edit" button that reveals a delete button on every item. Thus making swipe-left-to-delete a non-critical feature, and a nicety for those who discover it.
Another example is clickable usernames. We have an unspoken expectation that clicking a username in any app takes you to the user's profile. This expectation is so ubiquitous that apps (like HN) don't provide any additional indication that usernames are clickable. Imagine if every username on every app was a full blown bordered, textured button. Even someone with zero design sense would feel something's off.
It simply isn't possible to maximize both discoverability and efficiency at the same time. Every feature needs to decide who it wants to cater to and who it's comfortable leaving behind.
> Why do we need to remove unneeded clues, you ask? To make room for more important things. If Instagram was littered with gradient buttons and blue underlined links, they'd draw attention away from the real content, the pictures. If Figma had too much personality, it'd influence users' designs.
This is an oversimplification and not always correct in all situations. Take the original article as an example. How did 3D buttons take away from the "more important things"? Does making "New List" a flat button and thus barely distinguishable from the "To Do" title help to enhance, somehow, the "To Do" title? Does not the strong blue in the flat button ("Add Tasks...") draw user attention to it, distracting from the rest of the UI, rather than withdrawing from the rest of the screen to allow "more important things" to shine?
If flat design really succeeded at helping to direct users' attention to what is important, it wouldn't be subject to the criticism that it confuses users. Users would be helped, not hindered, by such design, no?
> A lot of designers abuse it, making their UIs ambiguous.
...
> This is an oversimplification and not always correct in all situations.
You are both saying the same thing, I guess. Grandparent states that it is "possible to do flat design well" and gives examples (Apple). I don't have anything Apple, don't like it, but I do find their designs awe-inspiring.
To me, this comes from their investment in detail. The issue you describe ("Does making "New List" a flat button and thus barely distinguishable...") is real and an example of bad design, not proof that all flat design is always bad. It shows that too little time and effort was poured into this piece.
I'm not sure how I would fix this particular issue. But I'm confident that within the constraints of "flat design", and with time and effort (investment) in details, this is solvable.
I'm also confident that without the constraints of flat-design, the time and effort needed would be far less. But here we are.
I should have addressed the original article. As a sibling commenter mentioned, I didn't say all flat designs are good, quite the opposite. That to-do list design is very mediocre.
A desktop to-do list app that I think is super well done is Reminders on MacOS. Its basic functionality is immediately intuitive, and it gives love to power users too with its secret ability to parse natural English (e.g. "laundry at 9") for adding notification times.
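(To make the "parse natural English" bit concrete, here's a toy, purely hypothetical sketch of that kind of parsing; it is not how Reminders actually does it, and the real parser handles far more than an "at <hour>" suffix.)

```python
# Toy sketch: turn "laundry at 9" into (task, reminder time).
import re
from datetime import datetime, timedelta

def parse_reminder(text: str):
    m = re.match(r"^(?P<task>.+?)\s+at\s+(?P<hour>\d{1,2})(:(?P<minute>\d{2}))?$",
                 text.strip(), re.IGNORECASE)
    if not m:
        return text, None                      # no time found: plain task
    hour = int(m["hour"]) % 24
    minute = int(m["minute"] or 0)
    now = datetime.now()
    when = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if when <= now:                            # "at 9" has already passed today,
        when += timedelta(days=1)              # so assume tomorrow
    return m["task"], when

print(parse_reminder("laundry at 9"))          # e.g. ('laundry', <tomorrow 09:00>)
```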
I disagree on iOS. Give an iPhone to someone never having used one before, and I'm sure they won't even be able to unlock it.
I almost cringe when I see UX people with iPhones. Yes, it's esthetically pleasing, but please don't take too much inspiration from the way things work.
The first time somebody handed me an unlocked iPhone asking me to send an SMS (she was driving), I wasn't able to do it. I had to ask her where to touch. She was really surprised. I had a few years of experience on Android and 10+ with feature phones.
On the other side, I asked another friend of mine to take a picture with my Android phone a couple of weeks ago and she (an iPhone user) wasn't able to find the camera app. I had to open it for her. It was on the home screen, maybe it's different on the iPhone.
On iPhone you can swipe left on the Lock Screen, or press the icon on the Lock Screen, or you can pick it from a menu you pull down from the top of the screen, or find the app via icon, search, full app list. Maybe other ways I don’t use or know.
I would assume that it’s similar on android, access the camera from lock screen or app. I’d probably try the iPhone style swipe the lock screen if I didn’t see a shutter or camera looking icon, otherwise I’d use search (if I could find that) to find the app.
Yes but on the flip side I don't give two shits about discoverability, I already know how to use my phone. Snapchat is actively terrible for discoverability but it's really efficient once you know.
The best design for someone who is unfamiliar and the best design for the user that already knows how it works are always at odds with one another. Do you balance the game for new players or for diamond league?
Exactly this. Design is always a compromise between discoverability and post-discovery efficiency. Programming languages are the extreme case of the latter; literally everything requires a reference, but once you’ve got the reference memorized you’re invincible.
Seconded. Wasn't able to unlock one the last time I touched an iPhone. And I even owned one - iPhone 5. Some people I talk to share similar experience.
What do you think of the iOS podcasts app then? I have no idea of where to press to view details of a podcast, vs to start playing it. On the main page, the black text starts playing a podcast, and the purple text shows details. Further into the app, the purple text will start playing while the black text will view details. While playing, neither the purple nor black text do anything (they act as title/subtitle). Clicking on a picture is also pretty unpredictable.
This is admittedly not a flat vs non-flat design problem, but it is exacerbated by the flat design. If actions which started something playing were outlined or something, it would be so much easier to use.
I never used the podcasts app before but just gave it a try. I can't seem to reproduce what you're describing. On the top-level pages ("Listen Now" and "Browse"), only the purple play button plays the podcast for me, clicking the purple text (the duration of the podcast) does nothing, and everything else opens the detail view for the podcast. Pictures always open detail views. What version of iOS are you on? I'm on 15.3.1.
I’m also on 15.3.1. The huge “Up Next” menu (the first thing on the first page) is what I’m describing - perhaps you need to subscribe to some podcasts before you see it.
Clicking the purple durations next to the play button always starts the podcasts playing for me.
Something you see much too often is a designer trying to copy some design without understanding why it had been designed that way in the first place.
That is how you got bad flat design.
That is how the idea of metaphors got skewed into skeuomorphism — and likewise, how metaphors got swept away by the movement against skeuomorphism.
Maybe most designers are not even designers but only developers, frontend or backend, and the best they (we!) can do is copy something we already used, which may itself have come from other non-designers. All the reasoning is lost in the process.
I'd actually be willing to bet that we will go back to a more 3D look before the decade is over. Not my big toe, though. I need that. Maybe a bottle of beer, or something?
I'd argue that the flat 2D look is cool these days because it is different from the skeuomorphic 3D look we had before. It will look old and faded in a few years, and in the search for a fresh look, we'll rediscover a more tactile 3D look.
Anyway, I actually like the flat 2D style as well. It might be that many current programs are not doing it well merely because they haven't had much experience with it. This will sort itself out in a few years. Much like the super-skeuomorphic apps from a few years ago that included things like stitching and torn-off paper in calendar apps and notepad apps, respectively.
> It used to be that UIs needed to be skeuomorphic to be familiar and recognizable, but now simple outlines can do the job.
Nope. It just makes everything horrible: more pressure on the brain to figure things out, and slower in the end for everyday operations. It's just the designers who love these purely for the looks (they don't even know what design actually means), not for any usability.
Tech would have been better off without this type of "designer".
I think this is a fantastic comment. I've always seen design as just trends, but you're completely right that it serves a specific purpose, and as that purpose changes so does the design. There may still be trends, but we can at least reason about them.
It's less flat today than it was - because they're clearly backpedaling.
> There will always be people who hate everything Apple designs
Apple ripped off flat design from Microsoft.
> Why do we need to remove unneeded clues, you ask? To make room for more important things.
Last I checked, some texture on UI elements didn’t take any additional space.
Your arguments are entirely unconvincing.
Blurred background effects are nothing new - and that’s something that’s been done with GPUs from nearly 20 years ago. They haven’t been able to make it anything more than a fad.
> The only issue with it is that font rendering looks horrific, but that might just be my machine.
This is because GTK4 enables pixel/scaling-independent fractional vertical positioning, even with hinting enabled. There's a long (somewhat ongoing) discussion at https://gitlab.gnome.org/GNOME/gtk/-/issues/3787, though I haven't followed the last few months of discussion.
Even though GTK4 aims to achieve scale-independent layout, the 4 horizontal/vertical positions still produce a bit of judder, and fonts do not scale smoothly (even with bilinear interpolation) with hinting enabled, and (unless fixed) there are rendering issues due to failing to clear the texture atlas properly: https://gitlab.gnome.org/GNOME/gtk/-/issues/4322
Interestingly, there's a proposal to switch GTK4 fonts to SDF-style rendering. This is somewhat like what Qt Quick 2 already implements (and which KDE turns off, reverting to FreeType rendering so QML apps more closely mimic Qt Widgets font rendering): https://blogs.gnome.org/chergert/2022/03/20/rendering-text-w... However, I looked at https://github.com/behdad/glyphy and it seems to implement vector-based SDFs, instead of the earlier texture-based SDFs/MSDFs used by Valve games and Qt Quick.
“Scale-independent layout” (layout that ignores pixel aliasing) really requires PPI over ~200, that is, more than most desktop monitors provide. We’re still just not there.
It’s going to be a bloody mess for a long time, because the choices for how to handle resolution independence are all inherently filled with compromise.
With font rendering, I think there is hope. Horizontal subpixel positioning with vertical hinting seems like a good tradeoff to me. Grid fitting vertically is not too jarring, and grid fitting horizontally to subpixels instead of pixels looks pretty good too, on low resolution displays.
But it really is a son of a bitch elsewhere. For example, if you want a crisp 1px border on 96 dpi, you could specify it to be a 1px border at 96 dpi… but then what happens at 1.5x or 1.75x scale? From a purely logical position, the blurry line is actually the general case, and the integer scale case is actually an edge case. That desktop UIs aren’t blurry basically always is because we define them in terms of 96 DPI displays.
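A tiny sketch of that arithmetic (the function here is mine, not any toolkit's API): a 1-unit border only lands on whole device pixels at integer scales.

```python
# Why fractional scales make "1px" borders awkward: a border defined against
# the 96 DPI baseline covers this many device pixels at common scale factors.
def device_pixels(logical_px: float, scale: float) -> float:
    return logical_px * scale

for scale in (1.0, 1.25, 1.5, 1.75, 2.0):
    px = device_pixels(1, scale)
    note = "crisp" if px == int(px) else "straddles pixels -> blur or snap"
    print(f"scale {scale:<4}: {px} device px ({note})")
```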
It gets worse for APIs, because APIs that want to present a resolution-independent world will cause difficult to tolerate bugs. The VS Code terminal will often be blurry at non-integer scales because it is using HTML canvas. If the canvas width or height is not a multiple of the size of a CSS pixel, it will cause the internal buffer to be scaled horridly. The fix might be a new API that reveals true coordinates… very, very nasty.
Apple’s solution was extreme: dump all font hacks, always render apps at 2x, then scale the whole framebuffer for different scale factors. It’s somewhat blurry, but avoids many ugly pitfalls in the common case, and makes apps simpler.
Unfortunately, the rest of the world is just stuck with really bad scaling and more often blurring on 96 DPI displays, the worst of both worlds.
I really don't understand what was wrong with the X11 approach. I had a high DPI monitor in 2001. I typed the DPI into /etc/XFree86.conf or whatever, and it all Just Worked (TM).
Edit: I think modern web browsers implement ctrl-+ and ctrl-- the same way, except X11 apps kept separate directories of icons rendered for different DPIs, because 1GHz single core still seemed luxurious. Web browsers scale the bitmaps using some reasonable algorithm. Other than that, arbitrary zooms work with zero blur.
For what it's worth, PostScript also got this right back in the 80s.
> I think modern web browsers implement ctrl-+ and ctrl-- the same way, except X11 apps kept separate directories of icons rendered for different DPIs, because 1GHz single core still seemed luxurious. Web browsers scale the bitmaps using some reasonable algorithm. Other than that, arbitrary zooms work with zero blur.
Web browsers scale bitmaps if no other version is available, but you can provide different bitmaps for different pixel ratios to avoid any blurriness [0]. Resolution independence is one thing that the modern web stack gets right - even 1-pixel borders/lines and spacing between elements generally work as expected at different scales.
Of course *mobile* browsers made the IMO stupid decision of only activating these scaling features when you add a special tag to your HTML header.
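Roughly, the density selection boils down to something like the sketch below (the general idea only, not the spec's exact algorithm; names are mine): pick the smallest asset that still covers the device pixel ratio.

```python
# Sketch of density selection for pre-rendered bitmaps (srcset-style 1x/2x/3x
# assets). Pick the smallest asset that still covers the device pixel ratio,
# falling back to the densest one available.
def pick_asset(assets: dict[float, str], device_pixel_ratio: float) -> str:
    suitable = [density for density in assets if density >= device_pixel_ratio]
    best = min(suitable) if suitable else max(assets)
    return assets[best]

assets = {1.0: "icon.png", 2.0: "icon@2x.png", 3.0: "icon@3x.png"}
print(pick_asset(assets, 1.0))   # icon.png
print(pick_asset(assets, 1.5))   # icon@2x.png
print(pick_asset(assets, 4.0))   # icon@3x.png
```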
Well, how much it Just Worked really depended on what you were doing and how. At a point, it all stopped Just Work-ing.
Old old X11 apps used X11 drawing commands. These sucked, and nobody liked them. If you think you liked them, please show me your clean Xlib codebases for proof :P As far as I can recall, these still dealt with pixels, so clients were on the hook for dealing with scaling, though in theory it wasn’t too bad. They don’t really solve any of the pixel perfection issues that I am discussing, though.
More modern apps (— early 2000s should be “modern” enough by X11 standards, but my memory is foggy and I’m too young to really be an expert here —) instead blit pixmaps sent over shmem, defeating both network transparency and the inherent “vector” nature of many of the old drawing commands. X11 didn’t really handle anything other than knowing the DPI (… that you told it …)
At that point, up to GTK+2 and Qt 3, which is to say, even quite a while After 2001, you had at best limited scalability. If you had your CRT cranked up to around 150 PPI, everything was OK — you could get text scaling and the disparity wasn’t so bad. However, GTK+2 and Qt 3, and their ancestors, were not built with DPI independence. At best, they could adjust vector text sizes according to DPI and scaling preferences. Again, this looks OK for nvidia-xsettings and a modest PPI increase, but it’s absolutely terrible for anything more. Margins don’t adjust, padding doesn’t adjust, icon sizes don’t adjust, nothing. There’s no blur or jankiness because there’s no true scaling.
(Just as a quick note, this is literally the reality of GIMP today, right now. It’s still on GTK+2, and so the best you can get is text scaling, or flat out nothing.)
And that’s to say nothing about what happens if the DPI changes, which requires you to effectively restart everything. And that also doesn’t help people who have two different displays with different PPIs. The ever common case of the high DPI laptop with a cheap LCD plugged in. Have fun with that crap.
Modern Linux can do better. The Wayland protocol comes with DPI negotiation that allows naive clients to get blurry upscaling, “simple” clients to pick a set of scales they can support and have the server adjust for whatever one they decide to render to, and advanced clients can render at any DPI, in response to the server advertising what DPI the current display is. With atomicity of configuration changes that allows a properly written client and server to never render an “intermediate” incorrect frame, and scaling that ensures that surfaces across multiple displays display at the correct DPI on all of them (albeit with either upscaling or downscaling on some of them.)
And that still does absolutely nothing to solve the fact that pixel perfect layouts are inherently not perfectly “scalable.” Because truly scaling some vector drawing commands that just happen to be pixel perfect at one resolution will not always result in pixel perfect rendering in another. You would need code that compensates for the scaling. Old X11 apps did not do this.
Of course I could be completely wrong and old X11 could’ve had some amazing DPI scaling technology that I somehow missed for decades. I don’t think so. My memory is that when I finally hooked up a high DPI display to Linux, I experienced tiny Skype, Pidgin (GAIM) with tiny icons and large text, and nvidia-xsettings with weird hinting/kerning. I’d like to move on from that kind of scaling.
P.S.: PostScript doesn’t do anything magic either. Everyone’s graphics systems were PostScript inspired, and yet macOS wound up with the same DPI scaling conundrums as anyone else. Most people wouldn’t tolerate desktop apps as blurry as a PDF at 96 DPI.
> More modern apps (— early 2000s should be “modern” enough by X11 standards, but my memory is foggy and I’m too young to really be an expert here —) instead blit pixmaps sent over shmem, defeating both network transparency and the inherent “vector” nature of many of the old drawing commands. X11 didn’t really handle anything other than knowing the DPI (… that you told it …)
This is entirely untrue. Did you even try ? Qt even at version 6 still supports rendering through X11 commands, and afaik does that by default when ssh'ing on Debian distros.
And I can set my Xft.dpi to, say, 144, ssh -X somewhere and the apps I launch (tried gtk2, gtk3, Qt 4 to 6) will so far all use the correct local DPI. Which other remote UI technology supports that ?
> This is entirely untrue. Did you even try ? Qt even at version 6 still supports rendering through X11 commands, and afaik does that by default when ssh'ing on Debian distros.
When you connect over SSH, it will fail to setup XShm and then it will work as expected, only slower than the speed of smell, because now it’s shipping pixmaps over the network. Not all X11 clients continue to work properly if XShm can’t be established, and hardware acceleration is basically a no-go despite OpenGL/glx theoretically being a client/server ordeal.
> And I can set my Xft.dpi to, say, 144, ssh -X somewhere and the apps I launch (tried gtk2, gtk3, Qt 4 to 6) will so far all use the correct local DPI. Which other remote UI technology supports that ?
Waypipe. Unlike X11, Wayland doesn’t start with network transparency as a principle, but it is completely possible to proxy it. Other than not being able to get a hardware-accelerated OpenGL or Vulkan context, a client connected over Waypipe is very similar to a local client. The proxy can handle things like serializing data sent over shared memory, so UI toolkits and other client code doesn’t need to behave any differently over the network; it just needs to use synchronization primitives correctly.
> When you connect over SSH, it will fail to setup XShm and then it will work as expected, only slower than the speed of smell, because now it’s shipping pixmaps over the network.
no, this is false. Here's a video of dolphin, KDE's Qt 5 file manager, run over ssh on another computer: does that look like it's blitting pixmaps over the network ?
When checking nload, this uses ~8 megabyte/second, I can let you imagine how much it would be to blit a constantly scrolling UI at 140 fps - I can assure you that even gigabit ethernet does not cut it unless compressing a lot :-)
Honestly, I regret arguing on this point. There’s no reason for me to continue on it, since it has nothing to do with what I was really trying to discuss about X11 apps. Still, 8 MiB/s is a shit ton of data, and given that it is screen data I’m sure it would zlib compress very well. Is it shipping the whole app as one pixmap? I am not really making that claim, though I actually thought they dropped XRender based QPainter somewhere in Qt 4, but it’s not plainly obvious that they did. I’ll concede on that. It’s still mostly shipping pixmaps either way, especially depending on how things nest, because the text is absolutely all pixmaps, but it would be more efficient by a decent bit than shipping the entire app as pixmaps due to being able to do compositing on-server.
It doesn’t change anything about DPI independence, because neither XRender nor the basic X drawing functions provide you with scalability built-in.
it is minuscule, and it is the peak I managed to get when moving as fast as possible. At the same refresh rate, blitting, say, 1024x1024 pixmaps would yield 576MiB/s so here we are talking about 72 times less. And it's while running a moderately image-heavy app with most likely room for optimization. One I often use is pavucontrol-qt: this one gives me less than 1MiB/s of network traffic when resizing it madly.
> It doesn’t change anything about DPI independence, because neither XRender nor the basic X drawing functions provide you with scalability built-in.
icons are scaled, images are scaled, text is scaled... what is missing ?
Also, regarding zlib: I took a screenshot of this window and compressed it as png (which uses zlib if I'm not mistaken ?) which gives me 137KiB, or 19MiB at 144fps. So more than twice as much as what X11 manages (and that is raw X11, IIRC there are X11 protocol extensions which also pass the X11 messages through gz, but I've never felt the need for that as things are already perfectly fast).
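For what it's worth, the back-of-envelope arithmetic behind both figures (assuming 32-bit RGBA pixels, which is my assumption):

```python
# Rough check of the numbers in this subthread, assuming 32-bit RGBA pixels
# and the quoted 144 fps refresh rate.
BYTES_PER_PIXEL = 4
FPS = 144
MIB = 2 ** 20

raw_blits = 1024 * 1024 * BYTES_PER_PIXEL * FPS / MIB   # uncompressed 1024x1024 pixmaps
png_frames = 137 * 1024 * FPS / MIB                     # one 137 KiB PNG per frame

print(f"raw 1024x1024 blits: {raw_blits:.0f} MiB/s")    # ~576 MiB/s
print(f"full-frame PNGs:     {png_frames:.1f} MiB/s")   # ~19 MiB/s
```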
If you can show me any video-compression-based implementation that allows me to get this close to zero latency with zero image degradation (especially for text, you really don't want subpixel font AA to be video-compressed) and as little network overhead as what Qt gives over X11 I'll be super happy, but I really think it is unrealistic.
> it is minuscule, and it is the peak I managed to get when moving as fast as possible. At the same refresh rate, blitting, say, 1024x1024 pixmaps would yield 576MiB/s so here we are talking about 72 times less.
If 70% of the pixels are the same shade of gray, that’s not impressive at all. If you are serializing image data and storing it over the network, you can do better than uncompressed with virtually no CPU load increase. Even moreso if you’re doing multiple correlated frames.
> when I set Xft.dpi to 144 on my machine and run the same thing over ssh I see this: https://i.imgur.com/JQhEcvG.png
> icons are scaled, images are scaled, text is scaled... what is missing ?
Nothing.
That scaling is done by Qt, and has all of the aforementioned issues with regards to scale factor. That’s why we’re talking about X11; there is no “X11” way of handling scaling. X11 clients are responsible to scale things. Even events do not get their coordinate spaces scaled, either.
The point of this thread is not that you can’t scale UIs. It is that GTK+4 looks bad on low DPI monitors because it has stopped attempting to do pixel perfect UI and instead uses truly scalable layout and rendering. In the truly scalable world, 96 DPI is as blurry as 200+, only you don’t see it when there are more pixels.
That said, Qt has plenty of UI scaling bugs.
> Also, regarding zlib: I took a screenshot of this window and compressed it as png (which uses zlib if I'm not mistaken ?) which gives me 137KiB, or 19MiB at 144fps. So more than twice as much as what X11 manages (and that is raw X11, IIRC there are X11 protocol extensions which also pass the X11 messages through gz, but I've never felt the need for that as things are already perfectly fast).
Yeah, because even raw X11 with pixmaps won’t redraw the whole screen at once. It will use dirty rects. When scrolling this could still be a substantial amount of data, but nonetheless.
As I suspected, as far as I can ascertain, it really is just shipping pixmaps. 8 MiB/s sounds very consistent with what bug reports are saying;
This was changed in Qt 4.8, exactly like I remember it. But what I didn’t know was that XRender rendering was reintroduced in 5.10, because of this exact problem.
(Just to be clear, that means you get efficient SSH for most Qt apps, which have native mode enabled, from Qt 4.0 to 4.8, then 5.10 onward. A substantial slice of history to be sure, but more limited than it seems people think.)
If you’re on 5.10+, you should be able to get dramatically better performance with `-graphicssystem native`
> If you can show me any video-compression-based implementation that allows me to get this close to zero latency with zero image degradation (especially for text, you really don't want subpixel font AA to be video-compressed) and as little network overhead as what Qt gives over X11 I'll be super happy, but I really think it is unrealistic.
What can do better? Well, it's true that compressing text with lossy algorithms could pose a problem.
However, consider the following: if you wanted to compress frames, you would Never ship PNGs over the network, at least not like this. You'd get dramatic savings just by XORing the current frame with the last frame and then RLEing that. Boom, smooth scrolling achieved. Combine it with dirty rects and possibly some other techniques and it should be good enough.
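A minimal sketch of that XOR-then-RLE idea, assuming frames are raw byte buffers (a real remoting protocol would add dirty rects and generic compression on top):

```python
# Diff the new frame against the previous one, then run-length encode the
# result. Unchanged regions XOR to zero and collapse into a single run.
def xor_frames(prev: bytes, cur: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(prev, cur))

def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs, i = [], 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))    # (run length, byte value)
        i = j
    return runs

prev = bytes([200] * 1000)                 # mostly static gray frame
cur = bytes([200] * 990 + [10] * 10)       # a handful of pixels changed
print(rle_encode(xor_frames(prev, cur)))   # [(990, 0), (10, 194)]
```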
Besides, at 8 MiB/s, lossless video codecs are pretty doable for fullscreen UI. Modern VNC implementations (Ultra, Tiger, etc.) make a joke of this figure and can still get good text quality.
> Besides, at 8 MiB/s, lossless video codecs are pretty doable for fullscreen UI. Modern VNC implementations (Ultra, Tiger, etc.) make a joke of this figure and can still get good text quality.
Here's how tigervnc looks on the exact same situation:
Sure, it uses less bandwidth (between 2 and 2.5 MiB for the busy part of this video) but it is also full of artifacts (https://i.imgur.com/4QrV9xl.png), super slow compared to X11, and does not respect my local settings. No thanks!
True! VNC is not ideal because it's pretty old by now. Chrome Remote Desktop would've been a better example, and even that is behind what can be done, as I believe it still uses VP8. It's possible even a lossless codec like ffv1 could be plausible in the window of 8 MiB/s, but I'm not sure it's necessary, as even old h264 does a pretty convincing job at very low bitrates.
Here's a snippet of my 2256x1504 screen, uncompressed:
...But they are not really noticeable in motion, and it clears up quickly.
I don't have a high framerate, high DPI display to test, but I'm guessing most people will only strongly care about one or the other since displays that do both are pretty expensive.
And yeah, chroma subsampling on subpixel rendering should impact legibility, but in practice it's difficult for me to tell any difference.
I've played around for a bit and I don't go above 1 MiB/s so far. I probably would need to play a video for that.
The only place where the text looks remotely smudged to me is in the low contrast bits in the header. It’s very difficult for me to tell the difference otherwise, especially considering that it’s high DPI.
And h264 is old, and I'm using software x264 with fairly modest settings. More modern general video codecs like h265, VP9, perhaps even AV1 can eke out slightly better fidelity at similar bitrates, at the cost of higher complexity. (But if it can be hardware accelerated at both ends, it basically doesn't matter.)
And these codecs are designed for general video content… it would be instructive to see exactly what kind of performance could be achieved if using lossless codecs or codecs designed for screen capture like ffv1 or TSC2.
It would be… but honestly, there’s no point, because all I was trying to illustrate is that I sincerely doubt 8 MiB/s is the best that can ever be done for a decent desktop experience. Judging by Qt issue reports, it’s worse than what Qt used to be able to accomplish. If you really like your X11 setup, there’s no reason to change it, because it isn’t going to become unusable any time soon. Even if you switch to Wayland in the future, you should still be able to use `ssh -X` with Xwayland as if nothing ever really changed.
This is all a serious tangent. The actual point was that again, X11 doesn't have any built-in scaling. All along, it was Qt 4+, GTK 3+, and other X11 clients that have been handling all of the details. And traditionally, it wasn't good. And even contemporarily, it still has issues. Appealing to the “way X11 did it” makes no sense because 1. X11 as a protocol or server never did anything 2. Even then, historically toolkits have had a lot of trouble dealing with it. The fact that you set the DPI for Xft specifically, which is just a font rendering library, hints at the reality: what oldschool X11 “scaling” amounted to in the 2000s was changing how font sizes were calculated. Modern toolkits just read this value to infer the setting, and it still isn't good enough for many modern setups that Linux desktops want to support.
> For example, if you want a crisp 1px border on 96 dpi, you could specify it to be a 1px border at 96 dpi… but then what happens at 1.5x or 1.75x scale?
The border width should get snapped to the physical (sub-)pixel resolution as part of rendering. Typically, this should come with changes in contrast too, such that if a line is forced to become thinner it also gets drawn with higher contrast wrt. the surroundings, and vice versa. All of this stuff can be made to work.
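A minimal sketch of that rule; the compensation formula (scale the alpha by the ratio of requested to rendered width) is my own simplification, not something any particular toolkit ships:

```python
# Snap a logical border width to whole device pixels, and compensate by
# adjusting its alpha so a line forced thicker is drawn lighter (and a line
# forced thinner would be drawn stronger, capped at fully opaque).
def snap_border(logical_width: float, scale: float, base_alpha: float = 1.0):
    requested = logical_width * scale            # ideal width in device px
    rendered = max(1, round(requested))          # snapped to the pixel grid
    alpha = min(1.0, base_alpha * requested / rendered)
    return rendered, alpha

for scale in (1.0, 1.25, 1.5, 1.75, 2.0):
    width, alpha = snap_border(1.0, scale)
    print(f"scale {scale:<4}: draw {width} device px at alpha {alpha:.2f}")
```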
Also, if you are forced to render a canvas at a resampled resolution because existing APIs give you no other choice, at least do it right using a proper Lanczos-style resampling. This might end up with a quaint "watercolor" effect but guess what, that's a lot better than a blurry, eye-fatiguing mess.
We’ve tolerated a great degree of complexity just to make fonts look good at 96 DPI. Looks like we’re able to tolerate a bit more complexity to enable GPU rendering. However, many years into having high DPI displays, it’s not obvious people are willing to take the complexity to make low DPI and high DPI screens look good simultaneously.
The thing is, with fonts, we already bear the burden of font rendering being complex because that was needed for 96 DPI displays. But, we won’t need much of this magic or complexity when a vast majority of people are using higher DPI displays, because at >200 PPI the difference between a blurry line and a sharp line is basically nil. That is obvious enough on Apple platforms, where many are perfectly happy with the scaling even though it uses 2x as a base for all scales.
I think the future is simply pain. People want cleaner graphics pipelines, and only high DPI displays will get them anywhere.
I’ve come to the same conclusion. Making hi(-ish)-DPI work would be possible with the right APIs. But it’s virtually impossible to also make it work for traditional low-DPI displays at the same time. The departure from pixel-art icons to vector icons alone has already degraded the low-DPI experience substantially. It doesn’t help that developers and designers tend to not use low-DPI displays anymore. But many regular users will, because it continues to be the cheaper option, also in GPU terms for gamers. Full-HD monitors won’t be going away anytime soon. Meanwhile, the mid-DPI space (e.g. 1440p) is in an uncanny valley, often requiring fractional scaling (more than 100%, less than 200%) unless you have excellent eyesight.
The font rasterizer is a massive hack in modern UIs. Subpixel rendering is a serious pain in the ass. When you render text using subpixel rendering, you render the actual vectors at 3x the spatial resolution. But not simply as if the vectors were 3x wider, because that would look too sharp: it needs to render as if there were 3x as many pixels, which is different.
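To make the "3x as many pixels" distinction concrete, here's a toy illustration; the coverage function is a stand-in for a real rasterizer, and real implementations additionally filter across neighboring subpixels to tame color fringing:

```python
# Sample glyph coverage at 3x the horizontal resolution and route each sample
# into the R, G or B component of the target pixel (black text on white).
def coverage(x: float) -> float:
    # pretend a glyph stem covers x in [1.2, 2.4); a real value would come
    # from rasterizing the outline, not a hard-coded interval
    return 1.0 if 1.2 <= x < 2.4 else 0.0

def render_row(width_px: int):
    row = []
    for px in range(width_px):
        # three horizontal samples per pixel, one per subpixel stripe
        samples = [coverage(px + (i + 0.5) / 3) for i in range(3)]
        row.append(tuple(round(255 * (1 - s)) for s in samples))
    return row

# The stem edges land inside pixels 1 and 2, tinting individual subpixels
# instead of whole pixels:
print(render_row(4))   # [(255, 255, 255), (255, 0, 0), (0, 255, 255), (255, 255, 255)]
```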
Then there’s compositing. Normal layers can be composited using alpha blending, assuming some sane format like premultiplied alpha RGBA. But not subpixel rendered text, because alpha blending the components will fuck up the subpixel rendering.
And it goes on, because if you want to handle text like everything else, you need special cases for it to look right. Rotation? Need to render the vectors rotated; can’t rotate in raster. If you need to render to a surface then transform that surface, you’re SOL; it can’t go to rasters until the end.
Normal surfaces can also be rendered at subpixel positions, and of course this does not work for surfaces containing text, because again, it will destroy the subpixel rendering.
OK. So you can get rid of the subpixel rendering and render slightly blurrier glyphs instead. (R.I.P. anyone trying to tell hanzi/kanji apart.) It’s still going to murder legibility if you move it over by a subpixel value because text is already on the edge of readability at 96 DPI.
I haven’t considered gamma correction, hinting, blending different colors, different blending modes, GPU acceleration, etc. because I simply don’t have the brain power to try to reconcile it all. It’s a nightmare.
We already did some of this for text. Which is a herculean effort. We use a freakin virtual machine to power font hinting, and ugly, complex, slow special casing at many layers of already ridiculously complex vector graphics stacks (I mean if you disagree with that assessment, you may just be smarter than I am, but I have serious trouble following the Skia codebase and I doubt Cairo is really that much better.) And speaking of which, there only really seems to be a handful of them out there: there’s Skia, used by most web browsers; Cairo, used by GTK; Direct2D, in Windows; Whatever modern macOS uses that isn’t QuickDraw anymore; and I guess there’s Mozilla’s pathfinder, a promising Rust-based vector graphics engine that was built as part of Servo and seemingly mostly abandoned, much to the world’s detriment. This work is hard. It can be done, but it’s not something I think a single engineer can do, if you want to build one that competes with the big boys even disregarding a few things like performance. I’d love to be wrong, but I have a sinking feeling I’m not.
Even text isn’t done being overcomplicated. As nyanpasu has mentioned above, some software have started implementing SDFs for font scaling. We do this because text legibility is really that important, whereas a line in the UI being slightly blurry for users on older screens is really just not that important. Some languages flat out can’t be read with crappy font rendering, and any of them will give you eyestrain if it’s ugly enough. As much as it sucks, a blurry border on a button doesn’t have an accessibility issue. And rendering at 1x and making the compositor upscale is not a great solution either because again, it’s already hard enough to read text in some languages; the added blurriness of scaling text and ruining subpixels is basically intolerable.
These hacks aren’t free, and with high DPI displays, they’re not needed. There’s a reason Apple did what they did.
OK, but there's clearly an existence proof, and it ran fine on 32 bit machines with slow processors (or even embedded CPUs in the 80's!) way before all the piled hacks you are describing were invented.
As I understand it, all that's needed is a vector renderer, and you keep everything (even text) in vector format as long as possible. RGBA then becomes a special case, as it must be for any DPI independent rendering pipeline.
Trying to compose rendered vectors using pixel based operations is madness, so... don't?
That means you can't have a bitmap-based compositor. So what? GPU's are great at rendering vectors. Composite those instead of bitmaps.
Or, just don't composite at all. A decade later, Linux desktop compositors are still an ergonomic regression vs. existing display drivers with vsync and double buffering support.
> OK, but there's clearly an existence proof, and it ran fine on 32 bit machines with slow processors (or even embedded CPUs in the 80's!) way before all the piled hacks you are describing were invented.
Yes. Driving ~1024x768 framebuffers, on single core processors, with far less demanding workloads, but still, yes. (They still badly needed good glyph caching to accomplish this.) (I’m assuming a Windows XP-tier machine since that was the era most people started using ClearType/subpixel rendering.)
(Single core processors are obviously slower than multicore processors, all else equals, but exploiting multi-core processors effectively is harder and often leads to code that is at least a bit slower in the single-core case…)
> As I understand it, all that's needed is a vector renderer, and you keep everything (even text) in vector format as long as possible. RGBA then becomes a special case, as it must be for any DPI independent rendering pipeline.
I don’t want to sound like I’m being patronizing, but I get the feeling that you may not be grasping the problem.
We can’t just use text rendering logic to power other vector graphics. For many reasons. Text is not just rendered like vectors, as that would simply be too blurry at 96 DPI. Old computers used bitmap fonts or aggressive hinting, and newer computers use anti-aliasing, often with subpixel anti-aliasing. Doing that with every line on screen isn’t feasible even if you wanted to write the code. Here’s an attempt to enumerate just the obvious reasons why:
- It’s slow. Yes, old 32 bit computers could do it, yadda yadda ya. But they did it for text. At the glyph level. And then cached it. They were most certainly not rendering anything near the entire size of the framebuffer at once.
- It's difficult to GPU-accelerate. GPUs can do vector graphics and alpha blending fast, but subpixel rendering as it's done with text is not something that can be done using typical GPU rendering paths. It could still be made to exploit GPUs, but it requires more work and is slower.
- Fonts achieve better crispness on lower DPI displays using hinting VMs. Without them, many glyphs would be quite blurry. Hinting VMs allow typographers making font outlines to make specific decisions about when and how vectors should be adjusted to look good on raster displays. In case it isn’t obvious, the problem here is that doing this for every line on the screen requires you to write special casing for every line on the screen. Maybe you could come up with a general rule that makes everything look good and doesn’t wind up with uneven looking margins or outlines ever (you really can’t, but…) — you have to run this logic for every line. That’s an increase in complexity.
- Glyphs only need to care about their relationships with each other. UI elements on screen have arbitrary concerns. They have relationships with other things on screen; they line up with other shapes and the whitespace between them is significant. Glyphs only care about other glyphs horizontally adjacent to them (or vertically in some scripts, perhaps) but other UI elements care about their relationship with potentially any neighboring UI elements.
- UI rendering code does not exist in a vacuum. At some point, apps will need to do something that requires them to know the size of something on screen either in physical or logical dimensions. Normally, this isn’t a problem, but if all vector rendering was as complex as text, it would absolutely be an issue. The naive way of handling it would seem correct in many cases, but it would be wrong in many others, just like how old APIs that expose pixels instead of logical units tend to lead to apps with subtle scaling issues.
> Trying to compose rendered vectors using pixel based operations is madness, so... don't?
Yes, of course.
Except that, too, is hard. Think about web browsers: they need to support arbitrarily large layers for composition (like extremely long text in an overflow: scroll div,) and these layers can nest in arbitrarily deep and complex trees. Any node on this tree can apply transformations, masks, filters, drop shadows… In theory, most of this stuff should be doable without ever leaving vector land, but it’s absolutely not without its challenges.
> Or, just don't composite at all. A decade later, Linux desktop compositors are still an ergonomic regression vs. existing display drivers with vsync and double buffering support.
Hrm… I’m not talking about desktop compositing. Even modern desktop compositors render surfaces at pixel positions, so it doesn’t really cause any additional issues. I’m talking about the kind of compositing that GTK or Firefox do.
That said, I do agree that desktop compositing on Linux, especially X11, has been less than ideal. However, it certainly isn’t standing still; the situation with compositing on Wayland and open source GPU drivers has been much more promising. You still get a lot of the trademark issues with compositing that are pretty much inherent, but I have perfect vsync with good frame pacing and a solid 2 frame latency end-to-end in Chromium on SwayWM. I believe that’s close to ideal for a surface running under a compositor. A far cry from the compromise-riddled world of old GPU accelerated compositing.
The underlying logic for rendering "hinted" line borders and UI widgets is a lot simpler than for hinting arbitrary text. It's a matter of snapping a few key control points to the pixel grid, and making sure that key line widths take up integer numbers of pixels. Much of the complexity you point out only arises because we now insist on having physically sized rendering for "mixed-DPI" graphics, like a single window spanning both a low- and a high-resolution display. That's not necessarily a very sensible goal, and it's not something that would've been insisted on back when achieving "pixel perfect" rendering was in fact a major concern, regardless of display resolution.
A similar concern is the demand for arbitrary subpixel positioning of screen content, that basically only matters in the context of on-screen animations. Nobody really cares if an animation looks blurry, but it's somewhat more important for static content to look right. Trying to have one's cake and eat it too will always be harder than just focusing on what's actually important for good UX.
> The underlying logic for rendering "hinted" line borders and UI widgets is a lot simpler than for hinting arbitrary text. It's a matter of snapping a few key control points to the pixel grid, and making sure that key line widths take up integer numbers of pixels.
This is exactly what I was “hinting” at when I talked about coming up with a universal function that would work for everything. You can't just snap some/all things to a pixel grid; it would look absolutely terrible because it would make lines and whitespace uneven. Even font autohinting, which does exist, is more sophisticated than just aligning key control points to a pixel grid.
> Much of the complexity you point out only arises because we now insist on having physically sized rendering for "mixed-DPI" graphics, like a single window spanning both a low- and a high-resolution display. That's not necessarily a very sensible goal, and it's not something that would've been insisted on back when achieving "pixel perfect" rendering was in fact a major concern, regardless of display resolution.
It’s not. Even under Wayland, which can achieve this, the application would only render one surface at a specific resolution at any given time. Nothing I’ve been talking about is related to being able to split a window across different DPI screens.
> A similar concern is the demand for arbitrary subpixel positioning of screen content, that basically only matters in the context of on-screen animations. Nobody really cares if an animation looks blurry, but it's somewhat more important for static content to look right. Trying to have one's cake and eat it too will always be harder than just focusing on what's actually important for good UX.
If you scale a UI that was designed for 96 DPI pixels to a screen that is around 160 DPI, you already have subpixels. If you then attempt to snap to a pixel grid instead of rendering elements at subpixel positions, then you have uneven, ugly looking UI elements.
This unevenness is arguably more tolerable for text than it is for UI elements, but Microsoft actually took the approach of not having it for text regardless; to make text look cleaner, text uses more aggressive gridfitting in Microsoft UIs, resulting in each glyph being gridfit. This is exactly why old Windows UI scaling led to cut-off text and other text oddities; it's because the grid fitting led to text that had different logical widths when rendered at different resolutions!
You can’t just wish away subpixels. Numbers that just happen to be whole numbers are the real edge cases in a world with arbitrary scale factors.
Are we talking about single-pixel rounding errors, or something else? The former are already practically undetectable at 1080p, and nearly-so at 768p. Given a high standard of "pixel-perfect" rendering, there's basically zero reason to push resolution any higher!
Of course one can even make pure subpixel-based rendering (no fitting-to-pixels at all) look correct, by starting either from pure vectors or from a higher-resolution raster and then using a Lanczos-style filter to preserve perceived sharpness near the resolution limit of the display. This gets us as near as practicable to something that's almost "pixel perfect", without distorting spatial positions to make them precisely fit a pixel grid.
> some software have started implementing SDFs for font scaling
My "wip/chergert/glyphy" branch of GTK 4 does rendering using https://github.com/behdad/glyphy which uses fields to create encoded arc lists and are uploaded to the GPU in texture atlases. The shaders then use that data to render the glyph at any scale/offset.
Some work is still needed to land this in GTK 4, particularly around path simplification (mostly done) and slight hinting (probably will land in harfbuzz).
Regarding slight hinting... currently GTK4 hints glyphs (distorting glyphs by quantizing vertical positioning) then renders them at fractional vertical positions (resulting in blurry horizontal lines). This is the worst of both worlds, achieving neither the scale-independent rendering of unhinted glyphs with fractional positioning, nor the sharpness of hinted glyphs with integer vertical positions. What is your plan for hinting and positioning?
You could do text hinting (snapping to the (sub-)pixel grid) after layout, based on some kind of auto-hinting heuristics. Arguably, this is needed anyway because text gets "laid out" all the time as part of advanced typesetting, including all sorts of complex microtypography that doesn't really play well with the old-fashioned "bitmap font" type of hinting.
The maintainer responded that "[c]hanges to the rounding behavior of glyph positions really belong into pango, though". I understand, but I don't know whether he's suggesting fractional layout but integer-rounded rendering, or integer-rounded line heights and layout and rendering. And I don't know how to change Pango, and lost interest in digging further.
They should take note of Windows 11 and the fallout from the new menu item style, due to the font no longer scaling properly at DPI “100%”. Many reported it as a bug! But it's by design in the new Segoe Variable font file, where hinting breaks down at “low” (i.e. normal, non-hi) DPI.
Which is why QtQuick controls always looked like absolute garbage.
And Firefox degraded as well when WebRender got enabled.
I'm not sure I follow the upstream reasoning, in either gtk/qt/firefox/chrome... I'm reading text all day. The UI is still built around 90%+ text, except in very few edge cases.
I'm using 4k monitors, and I'm still a minority. Despite this, at 4k, we're still several years away from the point where we can turn off hinting. Probably a decade away for universal support. A lot more if we include existing monitors.
Between 92 and 270 dpi, text still looks bad without proper grid fitting. Under 120 dpi we're talking about garbage-level quality. And between 250 and 300, the difference is still noticeable enough to make it worth it.
It looks like garbage because of poor rendering quality. There's no theoretical reason why a screen could not look quite sharp even at 768p, and literally perfect at 1080p; anything higher would then be pure overkill or at best catering to a tiny minority of users with superhuman eyesight. You don't even need hinting or fitting to a pixel grid for "correct" rendering, it's just generally easier that way. But you can't let the rendering itself blur stuff and waste screen resolution - you need a good resampling filter to preserve sharpness even at the highest spatial frequencies, and the typical bilinear/bicubic approach doesn't do that.
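For the curious, a minimal 1-D Lanczos resampler looks something like the sketch below; real pipelines do this in 2D, per channel, with proper gamma handling, and this only shows the kernel and the weighting:

```python
import math

# Lanczos kernel (a = 3): a windowed sinc that preserves sharpness near the
# resolution limit far better than bilinear/bicubic interpolation.
def lanczos(x: float, a: int = 3) -> float:
    if x == 0.0:
        return 1.0
    if abs(x) >= a:
        return 0.0
    px = math.pi * x
    return a * math.sin(px) * math.sin(px / a) / (px * px)

def resample(src: list[float], dst_len: int, a: int = 3) -> list[float]:
    scale = len(src) / dst_len
    out = []
    for i in range(dst_len):
        center = (i + 0.5) * scale - 0.5                      # source-space position
        taps = range(math.floor(center) - a + 1, math.floor(center) + a + 1)
        weights = [(j, lanczos(center - j, a)) for j in taps]
        total = sum(w for _, w in weights)
        out.append(sum(src[min(max(j, 0), len(src) - 1)] * w
                       for j, w in weights) / total)
    return out

# Downscale a hard black/white edge from 8 samples to 5: the transition stays
# concentrated (with a little ringing) instead of smearing across the output.
print([round(v, 2) for v in resample([1, 1, 1, 1, 0, 0, 0, 0], 5)])
```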
---
Now one of the worst parts is that anywhere I even hint at not completely loving the new libadwaita theme, I instantly get shut down and disagreed with before I even get the chance to give some feedback. Apparently not liking flat themes makes me a madman in this world. Why am I not allowed to even have opinions about the look of the operating system I'm using?
---
But to be honest, isn't that one of the recurring points of people running away from Gnome? Haven't they just kept going ahead without listening to anyone?
It's kind of cathartic watching this happen to a new generation of developers. They can't say I didn't warn them!
Ultimately I think this will be quite good for Qt/KDE in the long term. Gnome/GTK gets a lot of corporate support, especially from Red Hat, but seeing the issues Pop!_OS had with GTK makes me hopeful that more companies will adopt Qt.
I've been hoping that for 20 years but it hasn't happened yet. It seemed like American companies always favoured GTK/GNOME (I guess because Qt/KDE development was more European?), even though Qt/KDE generally managed to do more with less. But I can appreciate that if you're aiming for the corporate market, customizability may be a downside.
I think that Red Hat (now part of freaking IBM) has always been oriented to big corporations, knows how to speak their language, and is an obvious choice for corporate Linux support. If you're already in bed with RH on the server front, it makes obvious business sense to also follow them on the (much, much smaller) desktop front. Since RH basically owns Gnome development (I know, I know), they impressively used it as a lever to push systemd adoption, for instance: major distros had to switch to it not (solely) on its merits, but because they could not drop Gnome support.
KDE does not seem to have an equivalent behemoth behind them, even though Qt enjoys a lot of heavyweight corporate support.
Which is insane, because even back in the KDE 3.5 days, and Trinity following that, the support for complex user environments, security features, using any type of remote storage natively through the KIO subsystem, really good support for LDAP and AD, and other things... it seemed like a natural fit for a more complex corporate environment. Especially tools like Konqueror and KMail, both of which to me follow a natural progression out of the Windows of the late-90s era, had really good usability and made sense fairly quickly to Windows users.
Yeah. SUSE (at one point owned by Novell) was kind of trying to go the same route, but never really got as big as RH. (Though again I wonder how much of that was just an America/Europe split)
Wait ten years. New people will start working, old people will move on to other things. The new people will change stuff, mostly for the sake of it as it always happens, and one of the results will be less flat interfaces. Old people will be infuriated by the change. Very old people will rejoice but also complain that those UIs are not as good as the really old ones. New people will shrug them away and keep changing stuff mostly for the sake of it as it always happens.
It's already happening, new interfaces from both Apple (see Big Sur/Monterey) and Google (Material design) are noticeably less "flat" and going back to 3d effects for "active" widgets. The effects are much as seen in GTK+ 3 - just subtle enough to not look overly confusing when compared to a totally "flat" screenshot, but still helpful to unfamiliar users.
This presumes that all things are equally good and the only difference is familiarity. This isn't generally true of other functional areas of endeavor. Neither computers nor cars are all alike save aesthetics. I see no reason why it would be true of UIs.
So basically the new GTK is iOS but without the professional designers behind it.
What I cannot understand is how Canonical, Red Hat, or some rich SV person hasn't thrown money at the problem and hired a big-name design firm or person to overhaul it all. Jony Ive is now even available (if he'd take the project). But even going through a site like dribbble, there are so many amazing designers out there. To me it would be a more impressive portfolio piece for a young designer to properly design GTK/Gnome and put it out there for the world. What is stopping this? It can't be a lack of interest, given the history of people sharing their themes. Is it just taste / money?
I think the problem Gnome, and Microsoft, have is they keep throwing money at designers. So designers keep making designs for the sake of designing. They should stop designing, stick to the design they already have and only do incremental updates when new widgets come out.
It's not like everything was perfect in the past and so we're done. Far from it.
Even beyond that, the world is changing. New needs, new expectations, new styles in design. Just like we wouldn't expect cars to be frozen in time in the 1950s, we shouldn't expect UI/UX to be either.
Linux DE's have never had a particularly well thought out, well designed look. There's been a lot of themes that people create to mimic other commercial systems out there, but not a lot of very high quality original work going on. Yet we can see plenty of good work technologically. For a long time when it was largely grey beards most people didn't mind so much, but now people expect good design across all products. The standards have risen, and the FOSS world should meet that challenge head on.
> Just like we wouldn't expect cars to be frozen in time in the 1950s, we shouldn't expect UI/UX to be either.
I think your example points to a peculiarity of UI/UX: cars have changed a lot since the 1950s, but their interface has remained surprisingly constant.
I think there is a point to be made that an interface can reach a point at which it cannot be improved, or at least not without a whole paradigm change.
But even that is not true. Windows are now mechanized, with child locks. The ways to open and lock doors are different, both in the car and from afar. The entire center console is now different, to the point where Tesla has just a touch screen on many models. There are cup holders, EVs have frunks and are often largely driven using one pedal, side mirrors are adjustable with motorized buttons, seat placement is totally different and often a setting that can be memorized by the car, stick shifts and clutches are largely gone, the wheel inner shapes are different and often contain buttons, starting/stopping the car is usually a button, etc etc.
If you’re focused simply on the steering wheel itself that’s like saying there’s still a mouse and keyboard. Yes, that’s true — and even that will likely be disrupted over time. But everything else has changed.
To play a bit in this space, if you check out the settings panel in Ubuntu compared to, say, the Windows XP control panel, you'll see that Gnome's settings panel is more usable. Mainly on account of having a single entry point, and the search bar on the top.
I think a lot of this is influenced by modern MacOS and iOS settings management (much like, to my recollection, Gnome 2 settings panels were very influenced by how Windows was doing things). It's not like it's 100% better, but for my usage it's felt nicer.
There is also a lot more work going into this idea that there are many different ways to get to a settings pane. That way it's easier for people to get to a thing even if they have different ideas of where it "should" be.
The article is really about GTK3 versus GTK4, using the settings pane as an example. My comment is about the larger picture. However, take a look at what it meant to reset your default settings in 2010: https://www.omgubuntu.co.uk/2010/08/restore-default-gnome-se...
Settings panes are still largely confusing on all platforms, commercial included, as you broaden your user base. They've also grown in utility as iOS has pushed the idea of a centralized place for settings for all aspects of the system (apps included), which you're starting to see elements of on other platforms as well. There's plenty of room for improvement, including simplifying everything down so that few settings can make big differences where the heavy lifting is done for you. Linux in particular suffers from this, from a legacy of a million .conf files everywhere that many people blindly copy-pasta'd random configurations they found online (I'm looking at you XFree86).
Why are designers so different from the rest of us I wonder? They value minimalism above all else, with little debate, but everyone else has a lot wider of a range of preferences it seems like.
I imagine that it's a constant refinement between utility and minimising distraction. They end up skirting this fine line because they live in the details, much like how anyone can be distracted by or obsess over minute details when they have hyperfocus on a subject.
> So basically the new GTK is iOS but without the professional designers behind it.
Professional designers seem to be the ones creating these problems. Back when it was just programmers you could tell a button was a button, a tab was a tab and we didn't hide basic functionality behind a swipe gesture or long press that users were supposed to just know.
1. Developers with a wide range of understanding of UI from negligible to acceptable.
2. UI experts who actually understand code.
3. UI "experts" in industry whose only skills are image editors.
4. Academics who did said research.
Seems like the entire problem is the ascendancy of camp 3, which decision makers, being ignorant of technical matters themselves, can't tell apart from camp 2.
That's my impression too. Camp 2 is also a problem when they decide to improve things under heavy time constraints, but it's mostly camp 3 being the issue, and camp 4 being ignored.
Meanwhile, camp 1 mostly won't break anything by themselves (developers do usually learn enough UX for that), but also are unable to fix anything.
I'll just add that camp 4 didn't only exist in universities back when those things were developed. But they almost exclusively exist there now.
> Back when it was just programmers you could tell a button was a button, a tab was a tab and we didn't hide basic functionality behind a swipe gesture or long press that users were supposed to just know.
There are many crappy UIs made by dragging buttons and labels in visual editors, but I believe this example comes from a discussion of a different problem. Namely, how to make a non-shitty, non-toy GUI for a console tool. And the answer is, you can't, unless you already had the mental model, the hierarchy of options and modes of operation, that could be projected onto both types of interaction before you made them.
Essentially, this is not even a GUI application, this is a printed cheat sheet for a console program with interactivity, like HyperCard or '90s context help systems, but with “callbacks” affecting the program state. It is made for people who know and understand the console version, and just want to choose options with the mouse. Of course, it may be advertised as something made for regular people, but that's an honest false belief. It's more of a convenient shell alias transferred into 2D, something which is not expected to be super nice or handle all corner cases, as long as it helps you in general.
I think it is, actually, something to be encouraged. There are some tools that people made in Flash for their own use or some fan group because that's what they “programmed” in, not to mention big examples like (Visual) Basic, etc. A user is not just a consumer of what comes to the dumb personal device, a user is someone who can make it work in some unique way, because that's what computers are made for.
You're basically proving the point. That is so much clearer and obvious to use than the screenshots in the original article, and it doesn't treat you like an idiot.
Yes, I know which things are buttons, which things are text entry, and which are checkboxes. This was a strength of early 2000's UI design, and it's a shame that so many applications don't give such clear indicators any more.
This application still has an absolutely crap design.
* How do I specify where the downloaded data goes? Why are there onscreen options for literally everything else, but not that?
* Which text entry fields are disabled, and which ones are enabled? This is something Win95 got right.
* What even is "force directories"? And it looks like checkboxes are used for both it and "no directories", so what happens if I check both?
* What's the difference between Spider and Recursive? Those sound like the same thing to me, and I know I'm in a pretty elite club just knowing what web spiders are at all.
* Are jpeg/gif/etc only used when Reject is turned on, or are they also used when Accept is turned on?
* Is the wget-list input field connected to the Input file checkbox? It's right underneath it, and presumably I have to specify the input file somewhere, but the spacing implies it's not.
* What's the difference between "quiet" and "non verbose"? What happens if both are turned on?
* The rest of the buttons are pretty well labeled, but what's AG?
* Does it really need its own quit button? The window manager already provides one.
I'm not saying it's perfect, but it's better than the "small mysterious monochrome hieroglyphics floating in a sea of white/darkness" that "modern" UIs seem to be gravitating towards. At least textual labels are searchable.
I didn’t stop listing problems because I ran out. I stopped because I got bored. Now you got me going again:
* Why does input-file have no “Browse” button?
* Why does log file have no “Browse” button?
* What is the point of showing the command line flags? I’ve seen a few GUI apps that let me “export CLI” commands that match what I configured in the GUI. That would be a much better way to do this.
* Why does nothing indicate that the exclude/include lists are only used for recursive fetching?
* Extra Params can potentially be very long, yet the box is tiny.
* I assume quota=0 turns the feature off. Why no check mark?
* Why do the exclude lists have check marks, when leaving them empty should be equivalent to turning them off, while quota has none?
Wget’s CLI is better than that thing.
> but it's better than the "small mysterious monochrome hieroglyphics floating in a sea of white/darkness" that "modern" UIs seem to be gravitating towards. At least textual labels are searchable.
I did intentionally start my post by saying I agree with this part. iOS 7 was a mistake.
Were you raised in the desert by a cactus or something? Because there is nothing clear or obvious to me about that motif word salad.
Ah, I can just feel it failing to pop up a tooltip when I mouse over the "spider" option!
I can also feel the entire window disappearing when I click "START wget." Is that what happens? Maybe not, but that was par for the course on a lot of those motif GUIs.
And I can definitely feel how the growing information density was even frustrating the poor dev. "Load sett.", "Save sett.", ah fuck it... "AG"
And again, feeling the disappointment as I mouse over "AG" and get nothing...
Easy for someone who works at Apple to say "just throw money at the problem" when there is so much more to consider. Design as a workflow for open source (as in UI/UX) hasn't even been worked on that much. Ideally, open source contributors work on projects because they care about them and want to improve them. I'm not sure how many designers are using GTK applications day-to-day and care enough to start contributing. There is a learning curve as well on how to contribute to these large projects.
It'd be great if it could be solved with just one pass of design work and then it's done, but I'm afraid the task is much bigger than that, and "big wig design firms" are expensive, as are individual designers. Not to mention the other problems around that.
Inherently, good design requires top-down authority to force a consistent look, and implementing it requires a bunch of programmers to spend time on boring work. That just doesn't work well in open source, where people want to work on their own thing, when they feel like it, and in their own way.
The same could be said about good software design (as in programming), but we've managed to design well-written software in an open source manner before, so I'm sure we could (if we focus on it) figure out something for UI/UX design that works in an open source context as well.
For example, style guides written and enforced by a small team can allow open contributions for the UI design.
I think it's mostly that not many people have tried to figure out how to organize and actually run large-scale open source design work for UI/UX, which is why we haven't figured it out yet; I don't think it's "inherently" impossible.
And look at how these projects are organized. Most of the time it's pretty much top-down, with a single leader. It's why we have the term BDFL.
> good software design (as in programming)
That's the second problem. Most programmers probably know how good software design works, but have poor taste and prefer designs that aren't suitable for software used by non-programmers, who want sane defaults that just work and don't overwhelm them.
Just look at the theming debate. People in software developer forums are mad about it. Most users probably don't even know it exists. They just want a UI that's simple enough for them to understand and allows for some basic settings.
So you would have to force volunteers to implement a design that they feel is bad, and you can see how people react to that in the Gnome project, where contributors leave because of it. But at the same time Gnome is a good example of how to do UI/UX for a big open source software project (even if you don't agree with their design decisions).
> And look at how these projects are organized. Most of the time it's pretty much top-down, with a single leader. It's why we have the term BDFL.
To make a more general point: top down leadership is not inherently bad. But in order for this to work well, people need to have the ability to enter and exit a project or polity at will.
(See eg how McDonald's doesn't let customers vote on what to put on their menu, but customers are free to eat at a competitor or make their own food at home.
And compare that with North Korea also not letting citizens vote on their menu. But also taking steps to keep people from switching to a competing provider of government services, like South Korea. Similar also for the Berlin Wall.)
As I mentioned before, Canonical and Red Hat have money and could afford proper designers. Programmers are expensive as well, and they have plenty of those on staff.
Someone is putting together standard HIGs for Gnome, writing these standard themes, designing the base apps, etc. So you can't just say it's all decentralized and so there's no hope -- there's a ton of collaboration and top-down work happening with Gnome. Pulling in great UX/UI design talent should be prioritized.
If you're Apple and you make your desktop more attractive and pleasant, you drive sales and market share. If you're Red Hat and you fund that work, you drive market share to Ubuntu, who I believe still have a policy of never paying for development except for Ubuntu-only things, eg their failed desktop nobody else would use, Mir, bzr, etc.
Is that still the case? It would be a huge disincentive for other distros to contribute to the commons when Ubuntu just parasitizes it into a big market share.
>If you're Red Hat and you fund that work, you drive market share to Ubuntu, who I believe still have a policy of never paying for development except for Ubuntu-only things, eg their failed desktop nobody else would use, Mir, bzr, etc.
Fact check: GNOME was full of memory leaks and performance issues until some competent Canonical developers started working on GNOME again and fixing them.
So while Ubuntu was not using GNOME, RH could have safely put their money into GNOME and it would not have benefited Canonical, but the reality is that Canonical had to pay competent people to fix GNOME bugs. And Canonical probably can't change the designer-dictator, so for a while they used a fixed theme and not the shitty GNOME default.
I disconnected myself from Linux news, so I don't know what drama happened in the last 2-3 years.
Who? When? What? They hired Henstridge to work on things that have nothing to do with gnome? Redhat hired a bunch of core gnome devs to work on gnome? No? Happy to find out I'm wrong if there are facts involved...
Are you new to GNOME? This happened a few years ago, when Canonical dropped Unity.
What was happening was that each GNOME release fixed some memory leaks but created a few new ones, and the lag was terrible. When Canonical moved back to GNOME, a Canonical developer started fixing stuff, while GNOME devs were blaming the hardware for the bugs or were busy removing features.
https://www.omgubuntu.co.uk/2019/10/ubuntu-improves-gnome-sh...
You can google the dev "Daniel Van Vugt" and check his work on GNOME, or you can google "GNOME shell memory leak". As I said, I stopped following Linux stuff (I am still a Linux user) and have no idea what happened in recent years except the major things, like GNOME's reset triggering a new fracture in its community.
Anyway, Canonical was not using GNOME, so RH was not avoiding paying for better developers because Canonical would benefit. Later, when Canonical decided to use GNOME, they fixed the major performance issues and memory leaks and put back some of the features users wanted but designers refused to provide.
You may want to ask yourself why Red Hat despite having a lot more money and investing some of it in desktop Linux still has a fraction of the market share of Ubuntu. They are in a great position to dominate that market.
Instead we get very rapid iteration with a 6-month release cycle that continually pulls in new tech before it's actually ready, while it still has massive issues with end-user experience, including but certainly not limited to gnome 3, pulseaudio, and wayland by default in 2016, which is now, 6 years later, supposedly almost fully ready for prime time.
My experience running Fedora for 7 years is that each release was a chance to play with new tech and fix new broken things. In-place upgrades were also extremely dicey, making a fresh install of the new version necessary every 6-9 months.
This is great for a toy, less great for something you intend to use. I'm sure current proponents say it's great now, but they have literally been saying the same thing forever.
I don't believe that Canonical or Red Hat would make any money from having a better GUI. Most of their (paying) customers are probably just running servers, or maybe running some simple dumb terminals for nontechnical employees (like a Point-of-Sale where the user may open a web browser or something).
Every screenshot in this blog post showcases dated design paradigms. Flat UI was popularized around the release of Windows 10 in 2015. When I go through dribbble I see a lot of mediocre and copycat design. Design is about more than copying what Apple is doing. You're not going to get style from fat guys with ponytails.
Sort of. It largely was a copy of Apple, unfortunately. However, I think it at least pushed the bar of what might be possible with Linux, and they're no longer around [1]. I'm hoping someone can come along and contribute real high-quality work to the FOSS community with a wider perspective. They basically had to do their own thing instead of pushing GNOME/KDE along, which may unfortunately be how it has to be. It would be better if we could leverage the existing code bases rather than starting from scratch, but there may be enough major things that need to change, and enough resistance from the existing communities, that that can't happen.
I love ElementaryOS, it's great, everything just works, I didn't have to tweak a single thing or mess with drivers earlier this year when I installed it on a new laptop that even Mint was having issues with.
Yep, if you have troublesome hardware then I find the fastest solution is to just cycle through the top 10 distros to see if it works on any of them. If it does then great, you can either stop there or use it as a "guide" for how to get your favorite distro working.
Lots of rich companies threw money at the problem of Linux UI, Linux application packaging+sandboxing, and several other things. The result is the Android OS ecosystem, where each hardware vendor has its own proprietary shell, in capitalist competition with other vendors’ shells.
I’m not even saying that that’s a bad thing. As it happens, Android has been generalized and generalized from its original use-case until it is today a quite-workable desktop PC OS.
All of this is pretty much how I feel on the matter, too. GTK3's interface was a really lovely blend of skeuomorphism and more abstract widgets that came together to make a really unique experience. Even if it didn't work in every context, I appreciated how well it worked for less complicated applications and making great-looking, device-agnostic GUIs. Cawbird was a wonderful native Twitter app made possible with GTK3. Curlew packed all of the important features of Handbrake/FFMPEG into a more streamlined, simple package. Foliate took e-book reading into the 21st century. So many amazing apps were enabled with this switch, and even though it's still a second-class toolkit, it was my guilty pleasure on Linux.
In comes GTK4. Much like the article alludes to, the elegant and simple shadow of interactive elements goes poof. Developers spend hundreds of hours crusading against letting people use third-party themes, just so they can simplify and reduce UI elements to a nigh-unusable pulp. Developing with GTK4 is a nightmare. Using GTK4 is a pain in the ass. For Christ's sake, there was a devastating font-rendering glitch that existed for more than ten months after the first GTK4 release and was ignored in favor of simplifying buttons, developing a new forced stylesheet and telling people "don't theme our apps!" Whenever I take this up with a maintainer, they immediately take it personally and write out a litany of reasons why I'm wrong and why I'm not allowed to disagree. The priorities here are almost unbelievably misaligned. I've pinned my GTK packages at the last GTK3 release and await some sort of admission of failure.
I simply can't take it anymore. These are the people making desktop Linux miserable, and I frankly feel no remorse watching their attempts at "simplifying" the ecosystem crash and burn.
I'm sorry but most of this comment is a lot of nonsense. You're certainly allowed to disagree, but when you approach people with comments about their own work that are factually wrong, you'll get corrected. You're confusing those two things and taking it personally and that's a terrible mistake. Don't do that, it hurts you and it hurts everyone else around you.
Just to clear up any confusion:
- People have been working on the font glitch for the last ten months. It's not easy and the solution is not straightforward. Multiple solutions have been proposed but none are without issues. The problem is not being ignored. I want it to be fixed too.
- The choice of whether to allow themes or not belongs to the app developer. This hasn't changed at all from GTK3. When making an app, you can choose to allow themes or not. Some app developers won't, but you don't have to make that choice.
- The stylesheet isn't forced at all, when you develop an app you can configure your styles to always override the platform theme. This also hasn't changed at all from GTK3 and doesn't change with libadwaita. Take a look at the style context priority system for more info.
- Pinning your packages to GTK3 in protest doesn't help you. It doesn't make a difference to the GTK developers at all. If you want to make use of some GTK4 features eventually, it would be best to start working with it and getting your issues sorted out now. You can ignore what the other app maintainers are saying, that has no effect on you.
Oh good, I always look forward to condescending GNOME apologists replying to my comments. Let's dissect this comment and use it to help understand why I, the OP, and hundreds of other people in this thread are frustrated with the state of GTK and GNOME right now:
> People have been working on the font glitch for the last ten months. It's not easy and the solution is not straightforward.
Then why was it working fine in GTK3? Sounds to me like someone made a breaking change and didn't anticipate the consequences. The solution is to roll things back, or to wait until the new implementation is fixed before pushing the code to actual users. Unfortunately, the GNOME developers are more interested in scorching earth than they are in maintaining a well-made desktop.
Oh, and it's not because it's a "hard issue" to fix, it's because nobody made a PR for several months. Apparently none of the core maintainers considered it much of a problem.
> The choice of whether to allow themes or not belongs to the app developer. This hasn't changed at all from GTK3.
No, it doesn't. There is not a single application written with GTK3 that can stop me from changing its theme. I'm sorry if you disagree, but that's just an outright lie (I say this as someone who actually writes GTK3 code).
> The stylesheet isn't forced at all, when you develop an app you can configure your styles to always override the platform theme.
...and I can override it again. Any questions?
> Pinning your packages to GTK3 in protest doesn't help you.
Correction, it doesn't help you. It keeps my applications looking just fine.
> It doesn't make a difference to the GTK developers at all.
That seems to be a recurring trend when negotiating with GNOME/GTK developers. I don't care, I'm perfectly satisfied taking matters into my own hands since they'll ignore me anyways.
> If you want to make use of some GTK4 features eventually, it would be best to start working with it and getting your issues sorted out now.
I don't. What features are there, right now? Worse text rendering? Less accessibility features? More extreme, abrasive maintainers and fewer people writing code? A worse native experience, more middleware and less software freedom? Less native packaging? More Flatpak bloat? Worse touchscreen compatibility and an increasingly fractured codebase? More broken custom widgets that don't adhere to the GNOME HIG? An uglier, flatter overall design philosophy? Militant users and developers who hunt people down when they express their feelings about software they used to use on Hacker News?
Worse looking buttons?
You can keep it.
> You can ignore what the other app maintainers are saying, that has no effect on you.
I don't even know what you're trying to say here, but it sounds like doublespeak. This entire comment does, actually, and I'm not sure what your goal was by posting this. Nothing you've said accurately describes the reality of using GTK applications, it's more like an idyllic reflection of what people think the ecosystem should look like, ignoring everyone who doesn't throw away their current workflow to live in GNOME-land. Unless you start acknowledging the pragmatic reality of GTK's end-users, GNOME will continue to hemorrhage maintainers and stir up unnecessary and counterproductive drama for the sake of a few people's ego. And to think that what I said was nonsense, get a grip...
>I always look forward to condescending GNOME apologists to reply to my comments.
Stop. This is nonsense and adds nothing to the discussion, it's like if I started off immediately dismissing you as a "GTK3 apologist" or something like that. If you have a history of getting hostile replies, your confrontational attitude is directly the reason why, and it's something only you have the power to change.
>Then why was it working fine in GTK3? Sounds to me like someone made a breaking change, and didn't anticipate the consequences.
Yes, that's what happened. The whole renderer was changed to a hardware accelerated one.
>The solution is to roll things back
That's not possible because the renderer touches the entire project. It's probably the main feature of GTK4.
>Unfortunately, the GNOME developers are more interested in scorching earth than they are in maintaining a well-made desktop.
Please avoid these comments. This is a pretty meaningless generalization and adds nothing to the discussion. Just like any large project, there are some GNOME developers that focus on new features and some that focus on maintenance.
>Oh, and it's not because it's a "hard issue" to fix
No, this is extremely wrong. The issue actually is that hard. If you are a font rendering expert and you believe the issue is easy to fix, then please submit your own PR. I'll be the first in line to try it if you do.
>There is not a single application written with GTK3 that can stop me from changing it's theme. I'm sorry if you disagree, but that's just an outright lie (I say this as someone who actually writes GTK3 code).
No, this is also extremely wrong. Any external configuration method you use, a GTK3 application can override it or disable it. I'm not disagreeing and this isn't a lie, this is an actual fact of the toolkit. Feel free to list any of them and I'll explain how it can be trivially disabled.
>...and I can override it again.
I'm sorry, now it sounds like you're agreeing with me.
>Correction, it doesn't help you.
No, this has nothing to do with me. You sound like you were interested in exploring GTK4 at some point; if you're still interested, then you'll eventually have to take steps to address the issues. If you don't want that, then I don't see what your issue is, and your original comment seems to be making assertions about nothing in that case.
>That seems to be a recurring trend when negotiating with GNOME/GTK developers.
This sentence right here illustrates the main mistake you've made. GTK is an open source project, by choosing to use it you're not entering in any "negotiation" with anybody. It's a take it or leave it proposition, and that's the way it's always been.
>I'm perfectly satisfied taking matters into my own hands since they'll ignore me anyways.
In the long run, that approach isn't sustainable. I find that most open source projects including GTK or Qt or any of those other toolkits won't ignore you if you have something useful to add or can take real steps towards correcting the issues that you're having. But being combative and approaching everything as a "negotiation" is the exact wrong thing to do and is probably the root cause if you find yourself getting ignored. Don't do that. You are making everything worse for yourself.
>What features are there, right now?
As previously mentioned, there is the hardware renderer with improved performance. There is also the multimedia framework and the new, faster list models. Those are the big features that I know of. It's absolutely fine if you don't want any of those, by all means stay on GTK3. But if you do want those eventually, then you'll need a plan to migrate.
>ignoring everyone who doesn't throw away their current workflow to live in GNOME-land
What I was saying is that those people don't have to affect you. You can use GTK3 or GTK4 or whatever without dealing with them.
>Worse text rendering? Less accessibility features? More extreme, abrasive maintainers and fewer people writing code? A worse native experience, more middleware and less software freedom? Less native packaging? More Flatpak bloat? Worse touchscreen compatibility and an increasingly fractured codebase? More broken custom widgets that don't adhere to the GNOME HIG? An uglier, flatter overall design philosophy? Militant users and developers who hunt people down when they express their feelings about software they used to use on Hacker News?
To hit this in order: The old text rendering can be restored with a flag. The accessibility is actually improved because of the removal of Atk which had some major issues. The maintainers are the same people and it's about the same amount of people writing code. There isn't any more middleware than GTK3. The "software freedom" is the same, it has the same license. Nothing has changed with native packaging, Flatpak is completely optional. The touchscreen compatibility has actually gotten better. The situation with custom widgets hasn't changed from GTK3. The design philosophy in GTK itself hasn't changed, that only affects libadwaita. I'm happy for you to express your feelings but this whole paragraph and some of your previous comment are unhinged and unreasonable.
>Unless you start acknowledging the pragmatic reality of GTK's end-users, GNOME will continue to hemorrhage maintainers and stir up unnecessary and counterproductive drama for the sake of a few people's ego.
This kind of sounds like a threat, you should delete your comment or reword this. It's also pretty unhinged and unprofessional. Also you appear to be under the false impression that I'm a GNOME or GTK developer. I'm just another app developer, in the same boat as you.
This is exactly how I feel about it. The sad thing is that if the whole gnome ecosystem burns down I'd have to use the alternatives that I like even less.
I'm just not designing gtk4 apps until it's actually better than gtk3.
Regarding the flat-theme movement… it was iOS 7 for me. My daughter, then 3, was happily able to use her new iPad running iOS 6 when I set her up, and she could press play on her show. Then one morning we had her in the stroller, taking a walk, and I handed her the iPad, now upgraded to iOS 7… it was instant tears, she couldn't figure it out… just to get the walk started I pressed the play button for her after many tears… later on during the walk her show ended, and in her frustration at not being able to find the play button she raged on the iPad… that iPad is no more… IMO following that experience the flat movement was really bad for usability… driven by a form-over-function mindset that I've seen harm younger and older people alike… I remember my then 92-year-old grandfather also seriously struggling with iOS 7 compared to 6… just my observations… but yeah, flat looks better…
To me it all looks like an effort to push all of the work related to discoverability onto the users, in a way not unlike this move to self-serve checkout, self-serve this, self-serve that...
And it is efficient, from an MBA point of view. It pushes ever more cost and risk away from the enterprise, creating externalities along the way. The whole thing is pretty toxic to users overall.
And yes, those users can self-serve their way back to competency, and eventually reach a point where they're productive again, but they had to pay a much higher price than they would have otherwise.
Remember all those studies that were done in the '70s, '80s and into the '90s? IBM did them, some of the major CAD companies did them, others did too, and there are too many to list.
We should do that again. And if we did, we would find out those old studies weren't bullshit, all those points made sense, and what's really changed is the value judgments related to development, and who makes those judgments and why.
And I bet the root of all this, is far too many enterprises are in a place where they just don't have to care about the users anymore, and they know it, and they're acting on it.
The flat UI trend baffles me. The removal of text from icons and buttons baffles me.
For example, in Windows 11 I spent close to a minute looking for "Rename" in the right-click menu in Windows Explorer. Turns out it's not there! It's been removed out of the flow of the list and put in the top of the right-click menu, behind a small, picture-only icon that I've never seen before.
MacOS is guilty of this too: Buttons along the top of native apps like Finder don't have text anymore, the buttons are flat without borders, and the icons are thin lines. How is anyone supposed to know what they do??
In most UIs (Apple and Android in particular), I can't tell when one of those on-off switches is ON or OFF. Is the dark side ON or OFF? Maybe the light side is ON. Oh, it's the opposite when Dark Mode is enabled; great.
This is why I have reverted to the CLI whenever possible.
There's something deeply reassuring about knowing that I don't have to relearn my whole workflow every few months when the trends change. ls, grep, find, ps, htop will always be what they are. Even the Windows CLI is thankfully consistent.
All I want is to get work done the way I want, and I've found the CLI is increasingly the path of least resistance.
> ls, grep, find, ps, htop will always be what they are. Even the Windows CLI is thankfully consistent.
Those command-line tools don't have any "engagement" opportunities nor are there bloated teams of product managers & designers having to justify their salaries by reworking them for no good reason.
By 2600 BC, hieroglyphs already included symbols used for phonetic spelling, so hieroglyphs are not the way to go if you want to get rid of spelled-out words, unless you ban the subset that can be directly mapped to Greek letters[1]. Otherwise it just ends up being a fancy font.
You jest, but I'm not sure that is a bad idea. bash/zsh seem happy with emojis as function/alias names, and I can input emojis easily enough. In fact some emojis are easier to type in my chosen layout than the string necessary to disambiguate completion for some commands.
Thanks for the idea, and I'll redirect co-worker scorn at U+1F984 for `git push` towards you ;)
Edit to add: Using emoji for commands/arguments is actually quite workable with global aliases or custom zle widgets in zsh, and moderately workable by using $INPUTRC to specify text replacements for readline if you're a bash user.
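For anyone curious, a minimal sketch of what that looks like (the 🦄/U+1F984-for-`git push` alias is just the joke example from above, and the inputrc line is a sketch I haven't battle-tested):

    # zsh: a plain alias covers command position; `alias -g` would make it
    # expand anywhere on the line
    alias 🦄='git push'

    # readline/bash users can approximate it with an $INPUTRC macro that
    # inserts the expansion when the emoji's byte sequence is typed
    # (depends on your terminal sending the raw UTF-8 bytes):
    #   "🦄": "git push "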
Had a developer change some config file names to emoji based file names. Boss thought his computer was broken when he took a look at some output txt file. He wasn't amused even though it was April 1.
> ls, grep, find, ps, htop will always be what they are.
SystemD screwed with a few that I'm still discovering one by one: "cron" and "shutdown" are no longer in the default Debian path, and I think ulimit was another one.
But at least I don't wake up one morning with the whole interface rearranged.
cron is in /usr/sbin (and is a daemon, not a command you'd ever run manually); shutdown is in /sbin; and ulimit is a shell builtin, not a standalone executable at all.
If I had to guess, I'd say that /sbin and /usr/sbin aren't on your $PATH for some reason. systemd is unlikely to be the culprit.
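A quick way to check, going by the paths claimed in the comment above (Debian-style layout, cron installed):

    # are the sbin directories on your PATH at all?
    printf '%s\n' "$PATH" | tr ':' '\n' | grep -x -e /sbin -e /usr/sbin

    # the tools themselves are still there, just off the default user PATH
    ls -l /usr/sbin/cron /sbin/shutdown
    type ulimit                      # "ulimit is a shell builtin"

    # to get the traditional behaviour back, something like this in ~/.profile
    export PATH="$PATH:/usr/sbin:/sbin"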
It can be restored with the `net-tools` package, thankfully. That's among my first installs on a Linux box. Never quite got into `ip addr show` when I have most of the ifconfig flags memorised from use.
Which may contradict my previous point slightly, but at least it is easy to put the system back into a state I want and the system will maintain this state.
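Roughly, on Debian/Ubuntu (package name as mentioned above):

    # bring back the classic tools (ifconfig, route, netstat, ...)
    sudo apt install net-tools

    # iproute2 equivalents, for boxes where net-tools isn't an option:
    ip addr show         # roughly ifconfig
    ip route show        # roughly route -n
    ss -tulpn            # roughly netstat -tulpn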
This. Material design relies on the browser hint to inform you the mouse/pointer has moved to an active element. Viewed with no focus on an active element, how are you meant to tell which pane of flat colour is a pressable, actionable element?
It looks great in print. It doesn't respect the modality of use for an online world.
I'm tempted to think we have to go to browser vendors and ask them to make <blink> happen..
This resonates so much with me .. especially the on/off buttons where designers show off how cool and smooth they can make them look, usability be damned. Dark patterns, especially around opt-ins, use these types of controls very well, putting a lot of mental load on us to opt out.
I ended up reverting back to the old context menu. Those icons are annoying to parse, and I have a tendency to look down each row, as 20+ years of context menus have taught me. And having so many options tucked an extra click away is a pain.
1. Accommodating disabled people is in no way an insult.
2. Being disabled is a normal fact of life, also not an insult.
3. Accessibility affordances are not only for disabled people. Lots of people who wouldn’t identify as disabled routinely and gladly use a wide variety of accessibility features.
4. Having difficulty identifying actionable elements like buttons isn’t about reading anyone’s mind, and it isn’t a universal difficulty regardless of their presentation. The ability to customize it so it’s easier for you is inherently an accessibility affordance.
5. It’s entirely possible this setting (along with several others under accessibility) may be more helpful when needed if they were available in another location, but it’s also possible that would make the experience more confusing, and quite possibly for more people—especially for disabled people.
6. People may benefit from this information—quite a few here based on other comments—and it doesn’t deserve this kind of negativity.
7. With all of that in mind, while I’m keeping this response direct, I took the last point as a prompt to edit some of my own negativity out of it. I had a strong negative reaction both because I think accessibility is a universal good and because I appreciate accessibility affordances which help me. But that’s not a reason for me to be a jerk either.
> 1. Accommodating disabled people is in no way an insult.
No, but treating non-disabled people as disabled could be. Just like offering help to some disabled people triggers them.
I was going to respond to these numbered points in turn, but that'd be silly. My point is that having basic (previously standard) UI cues like buttons is important and stuffing that option anywhere is a poor choice. To put it under the accessibility options can (if we're even a tiny bit snarky) be seen as an insult - possibly to disabled people as well since it lumps a simple standard usability thing in with stuff designed for people with actual challenges.
It's also not cool for them to make critical UI cues an option when they also take away themes which are the ultimate option.
Ultimately they have given no valid reason to make so many things look the same when they used to be visually (and functionally) distinct.
Sorry for the trigger, I didn't mean any insult to anyone other than the UI designers.
> No, but treating non-disabled people as disabled could be. Just like offering help to some disabled people triggers them.
Putting this option in accessibility settings does no such thing. That assumption is what I take issue with, and what prompted all of my points above.
> My point is that having basic (previously standard) UI cues like buttons is important and stuffing that option anywhere is a poor choice.
I disagree. I often find designs with fewer borders and shapes easier to use—that is, more accessible to me—_because_ there’s less information for me to visually process to find what I’m looking for.
That said, I’d be perfectly fine if they inverted the default… or even just asked on first install/startup, with a note on where you can change it later if you change your mind.
I read the parent as implying that it's the designer who is thinking of “disability” as a begrudged requirement that is afforded an out-of-the-way configuration option so as to not inflict the ugly affordances on the rest of the population.
That seems like the least charitable interpretation of the facts I can imagine. Nonetheless if it’s true, Apple doesn’t just wing it when someone goes rogue on full design language redesigns.
This is honestly a large part of why I enjoy keyboard shortcuts, like F2 to rename on Windows. But of course it's also a low grade for their UI design that I like the shortcut because I know how to reach the action instantly, unlike through their user interface.
Just last week I found myself in a position where I was at a major event (SXSW in Austin TX) having to support normal users trying to do various things on mobile and it's really eye-opening when you see first-hand how bad the experience is. I had a conversation with a user recently which went something like this:
'Yeah I know they have an app but your thing won't show up in the app for about a day so you need to go to their website on mobile to see it. Right, you're on the mobile website and I know your thing is still not showing up now. To see it, go to the hamburger menu in the top right. Yeah I know it doesn't look like a menu, or a hamburger. The three lines, that's a menu. Now go "account -> profile". Now you see the text where it says "XXX"? That's actually a tab. I know it doesn't look like a tab, just a word. You can't see any of the other tabs because they are actually off the right-hand side of the screen. I know it doesn't look like you can scroll, but just drag it across and you'll see the other tabs. Ok now click on the tab marked "YYY"'
Trying to figure out this user flow for myself so I could help people reminded me of charging around hitting random walls in "Dark Souls" trying to find out which ones were illusory. It's really embarrassing how bad things have got.
It was awesome actually. My first time but the vibe was fantastic. Lots of long-term "southbyers" said to me that it was because of a general feeling of being delighted to be back.
I think Don't Make Me Think by Steve Krug should be required reading for anybody who is involved with designing UIs and websites. That old book has so much wisdom about what happens when we see something on the screen and why these little details matter so much.
This is right there in chapter 1:
As a user, I should never have to devote a millisecond of thought to whether things are clickable - or not.
> Now one of the worst parts is that everywhere I only even hint at not completely loving the new libadwaita theme I instantly get shut down and disagreed with before I can even get the chance to give some feedback. Apparently not liking flat themes makes me a madman in this world. Why am I not allowed to even have opinions about the look of the operating system I'm using?
Ah, it was the same way if you didn't love the direction they went with GNOME 3. If you don't love the design-of-the-day GNOME is chasing, they don't want you. I recommend KDE.
Gnome3 is a disaster. When Ubuntu made the switch to Unity and dropped Gnome2 support, I played around with Unity and said "nah", and went for Gnome... And after using it a bit I suddenly thought the Unity desktop was not all that bad. With time it even grew on me, especially the saved vertical space.
Current Gnome is still unusable crap, with all those ugly buttons in window titles and lots of wasted space.
Looks like all these "Connected - 1000Mb/s" can be 2-2.5 times more compact in V direction. And huge spaces between labels and controls. Why do we need these?!
It is plague of last years. You have FullHD screen, you see 3-4 options in FireFox settings,when "old skool" UI could pack all of them onto one FullHD screen.
It is everywhere. Huge spaces, scrollbars (which doesn't work with keyboard, as controls intercept focus)... Scrollbars are for text, not for GU!
I've always thought that GTK was great at filling the screen with interactive elements. Almost too good, actually. The GTK versions of certain apps like Handbrake and Transmission have almost no whitespace...
I don't understand why modern toolbar icons are basically abstract shapes represented by lines. It's 2022, we have 32-bit color, but toolbar icons have less color than they did 30 years ago in Windows 3.1.
Old Win 3.1 toolbars had tiny 16×16px icons that could be quite colorful without being overly distracting, even on a lower-resolution screen. With the new emphasis on "touch" readiness combined with higher screen resolutions, we get bigger icons but simpler, black-and-white shapes, which helps tell apart simple toolbar functions from icons that might represent all sorts of other things within an application.
> I don't understand why modern toolbar icons are basically abstract shapes represented by lines
Much, much easier to style based on whatever the surroundings are. Also easier to maintain a good look if you switch to a completely different theme, and for third-party icons to mesh with the OS set of icons (there are only so many ways you can mess up the silhouette of a shape).
Oh, and they scale to basically infinite proportions without looking weird, which cannot be said for color vector icons.
The reason is that colorful designs were ugly AF in virtually all but the very best (or native) applications, so users developed an allergic reaction to them. Minimal design felt fresh, and it's harder to fuck it up... sadly it's also harder to tell icons apart.
It's the cycle of GNOME. Every few years they introduce something new that causes lots of regressions. Some of these regressions are then worked on, some are defended as "design decisions" (no theming, Nautilus opening a new window for every folder, broken filechooser dialog, desktop indexer you can't turn off). Bugs are worked on and things stabilize. People learn workarounds for their issues. Then another iOS-inspired thing comes and things break again.
> Then another iOS-inspired thing comes and things break again.
This always puzzled me. There was a time around 2010-2011 when everyone thought the desktop was doomed and tablets would take over its space, so you had to design everything to be mobile friendly. Those times are fortunately long gone, tablets are almost a thing of the past, the desktop is here to stay, yet GNOME and RH designers still take iOS design as a reference... why?
For what it's worth, I recently discovered that I can move windows on GNOME by holding down the Windows/super key and dragging. It's a small thing but it's actually quite nice.
The ones for resizing and moving are quite handy indeed. Learn them. Sometimes windows have bugs and get bizarrely huge, and this is a way to get them back down to a reasonable size.
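If you want a different key for it, that modifier is exposed through gsettings, at least on current GNOME (key name from memory, so verify with `gsettings list-keys org.gnome.desktop.wm.preferences` first):

    # which modifier is used for click-drag move/resize of windows
    gsettings get org.gnome.desktop.wm.preferences mouse-button-modifier

    # e.g. explicitly set it to the Super/Windows key
    gsettings set org.gnome.desktop.wm.preferences mouse-button-modifier '<Super>'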
This is one of my pet peeves, too. I shouldn't have to find some pixels of unused space on a title bar before clicking on it lest I bring down another menu or launch something. Microsoft programs seem to be enamored with the "search input box in the title bar" concept. Give me a magnifying glass, and drop down the input box or something. I want to be able to just grab the title bar and move the window.
For me, what you're referring to is one of the reasons I don't like to use Windows as a desktop OS (because you need to hunt for these relatively small target areas to organise multiple windows). The alternative which I greatly prefer is outlined in the other responses.
In the past, I used to disable title bars entirely even on full-sized monitors. These days it's not so easy to do that, though.
A lot of comments in this thread are negative. As a developer using GTK4 and libadwaita, and as a user that uses gnome, I really like the changes. GTK4 brings some much needed changes, and the major bugs like listview scrolling being broken or bad text rendering on non-hidpi displays suck, but both have people working on fixing them. Libadwaita is a huge improvement, and makes it way easier to build good apps.
As for themes, I quite like the new libadwaita theme, and prefer it to the default GTK theme. You're free to disagree, but you can't say your opinion is correct, nor can I.
Keep in mind that libadwaita is an optional library _specifically for gnome apps_. If you don't like gnome, don't use libadwaita. If you want the widgets from libadwaita, but don't like the styling, then either copy the code (it's open source, fork it!), or use libadwaita and disable the libadwaita stylesheet (and add styling for the widgets that aren't part of GTK).
> or bad text rendering on non-hidpi displays suck, but both have people working on fixing them.
It wasn't even acknowledged as a bug in the beginning, even after screenshots with clear signs of regression were posted. Matthias Clasen closed the bug report saying it wasn't a bug but an intended feature. There are really no appropriate words to describe such behaviour, which is fairly common on the GNOME issue tracker, besides calling it "wilfully dense" or "trollish".
> As for themes, I quite like the new libadwaita theme, and prefer it to the default GTK theme. You're free to disagree, but you can't say your opinion is correct, nor can I.
Sure, but in that case, an officially supported method to change the theme should be provided in case I don't agree with your choice. Apparently, GNOME tweak tool was never supported, is not supported, and will never be supported. For now, GTK_THEME is being presented as an alternative but do you expect me to close all of my programs and relogin to my session to change my theme? Should I create wrapper shell scripts for all of my GTK apps?
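(For reference, the wrapper-script workaround looks roughly like this, with the app name being just an example; it's exactly the kind of per-app hack I'd rather not maintain:)

    #!/bin/sh
    # run a single GTK app with a different theme, leaving the session alone;
    # GTK_THEME takes "ThemeName" or "ThemeName:dark"
    GTK_THEME=Adwaita:dark exec gnome-text-editor "$@"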
> Keep in mind that libadwaita is an optional library _specifically for gnome apps_. If you don't like gnome, don't use libadwaita.
The fact that there are no non-trivial GTK4 apps out there that don't use libadwaita or libgranite tells me what I need to know. Even LibreOffice uses libadwaita now. Is LibreOffice a GNOME app?
>It wasn't even acknowledged as a bug in the beginning, even after screenshots with clear signs of regression were posted. Matthias Clasen closed the bug report saying it wasn't a bug but an intended feature. There's really no appropriate words to describe such behaviour
Please stop fanning this flame war. You're making it worse and choosing to omit that the bug was reopened after a better argument was made in favor of it, just to make a point and attack them. Please stop doing that. I'm like you and I just want the bug to be fixed; these kinds of bad-faith comments calling people "dense" aren't helping. This is trying to paint someone as stubborn after they already changed their mind and did what you want. Just take the victory, you don't have to be a sore winner.
>Sure, but in that case, an officially supported method to change the theme should be provided in case I don't agree with your choice. Apparently, GNOME tweak tool was never supported, is not supported, and will never be supported. For now, GTK_THEME is being presented as an alternative but do you expect me to close all of my programs and relogin to my session to change my theme? Should I create wrapper shell scripts for all of my GTK apps?
GTK_THEME is mainly a setting for developers, you should probably not be using that unless you're developing a theme to be upstreamed with an app. It has all the same issues as the tweak tool where it's unreliable and some apps may not function correctly with some themes or may not respect the setting at all.
If you actually want to help, what you should do is contribute towards fixing issues with the upstream theme. In almost every case when I've seen people complain about the default theme, it's because of fixable bugs in the theme that upstream wants fixed. And if you want to do more, you can contribute towards the libadwaita theming API, which is intended to be a real theming API and not a hack like GTK_THEME or the tweak tool. It's still being designed, so now would be the time to start contributing if you want to get in early.
>The fact that there are no non-trivial GTK4 apps out there that don't use libadwaita or libgranite tells me what I need to know.
That those libraries have widgets and skins that developers really like and want to use? I'm sorry I just don't see what you're getting at here. Do you expect GTK to try to please everyone by merging all the widgets from libadwaita and libgranite and deprecating those libraries? Because I think that would be a lot worse.
>Even LibreOffice uses libadwaita now.
Actually it doesn't, that was just a proof of concept. But even if they did, it would be entirely optional.
> This is trying to paint someone as being stubborn here after they already changed their mind
I just pointed it out because as I said, this behaviour is fairly common on the GNOME issue tracker. There are numerous instances but for now, the Inter font issue comes to mind
and everyone else I've talked to seems to be presenting GTK_THEME as one of the options for changing themes for GTK4 apps, which is absurd user experience, to say the least.
> It has all the same issues as the tweak tool where it's unreliable and some apps may not function correctly with some themes or may not respect the setting at all.
Now I'm wondering if this information was intentionally withheld in that blog post.
> In almost every case when I've seen people complain about the default theme it's because of fixable bugs in the theme that upstream wants fixed.
What if I disagree with basic choices like the dark mode background color? The contrast between the background and text in dark mode is too high for me and I get halation in my eyes within a few minutes of reading text inside a GTK app with the dark version of the Adwaita theme.
I'm fairly certain that this issue won't be entertained on their issue tracker and even if it was, I'm not sure I want to subject myself to their behaviour as shown in the link above and the font issue in context.
> a real theming API
that "real" theming API just changes accent colors and wouldn't solve my issue
> Do you expect GTK to try to please everyone by merging all the widgets from libadwaita and libgranite and deprecating those libraries?
If there are no feasible alternatives to develop GTK4 only apps besides creating your own widgets from scratch, the practical outcome would be that most independent developers would end up choosing libadwaita not because they want their apps to be GNOME apps but because it would be relatively harder to not use libadwaita. This can be seen in this comment from the developer of GTKeddit.
Sadly it's not possible to design a GUI that has correct spacing and stays consistent with every font choice. Variable-width fonts don't work like that: widgets sized to a piece of text will have their layout disrupted when the font size changes.
>this is new information because this blog post... Now I'm wondering if this information was intentionally withheld in that blog post.
It wasn't withheld and it's not new information. You're still assuming bad faith and you need to stop this. Notice the part where it says "Compared to GTK 3", if you're familiar with theming GTK3 you should be well aware of the limitations with those themes. Not much has changed there at all. Theming in this way is essentially just hacking the hardcoded colors to be different.
>The contrast between the background and text in dark mode is too high for me
I have the same problem with some apps and I want you to find a solution but I don't understand what this has to do with GTK. That is going to be a general problem with lots of programs or websites that you visit and reskinning each of them is a huge pain and is not guaranteed to work. Doubly so if you use apps that have any other toolkits or use a custom toolkit, the contrast will still be inconsistent. I speak from experience here. Theming is only a band-aid, I suggest at minimum turning the contrast down on your display or permanently enabling night shift mode. That may not make the contrast perfect but it would at least ensure you never see any high contrast text in any app.
Another option may be to pursue something like a gnome extension that dynamically adjusts the screen's gamma and contrast based on the dynamic range calculated from the contents of the windows. That should be a lot more useful and flexible than manually reskinning every app you use. And it works in other situations like for example, if you have two apps side by side and one has a brighter background color than the other, it would notice that's happening and dynamically adjust accordingly so you don't strain your eyes when moving from one app to the other. I think it's misplaced to present this as a theming problem when this sounds like a general usability problem.
If you really do think it's a theming problem then maybe you could campaign for an officially supported low-contrast option, but that would be far away because every toolkit would have to implement it, and old apps will probably not be updated to support it, and it still probably won't work on arbitrary web sites. From a libadwaita perspective the recoloring API may do everything that you need, in my experience it's web sites that are the worst offender when it comes to eye-straining themes.
>I'm fairly certain that this issue won't be entertained
This doesn't make any sense. Both of those issues you linked did get entertained, and the maintainer was still open to patches to fix the font issue.
>that "real" theming API just changes accent colors and wouldn't solve my issue
>If there are no feasible alternatives to develop GTK4 only apps besides creating your own widgets from scratch
This is the same as it was in GTK3. It hasn't changed at all, the only difference is the library is called libadwaita instead of libhandy. You also don't have to recreate widgets from scratch, that developer appears to be confused. It's certainly possible for an app developer to reskin the libadwaita widgets. But of course, just as in GTK3 they would have to put in the work to write and maintain their own skin, and ship that as part of their app. Or they can just wait for the recoloring API.
> I have the same problem with some apps and I want you to find a solution but I don't understand what this has to do with GTK.
I didn't say that it was a problem with GTK. I said that it's a problem in the dark version of the Adwaita theme. I disagree with the choice of the background and the foreground color.
> Theming is only a band-aid
It's the only solution that comes to mind, unless there's a One True Theme out there that works for everyone.
> I suggest at minimum turning the contrast down on your display or permanently enabling night shift mode.
I've already enabled night mode, which reduces the gamma of the display. Turning down the contrast of the monitor doesn't sound right, because there are low-contrast, medium-contrast, and high-contrast themes out there. Fortunately, there are objective criteria to measure contrast. When using gedit, for example, with the Adwaita dark theme, I got `#EEEEEC` as the foreground color of text and `#303030` as the background color, which gives a contrast ratio of 11.36 according to WCAG 2.0.
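(That figure is easy to reproduce from the WCAG 2.0 definition: compute the relative luminance of each colour, then take (L1 + 0.05) / (L2 + 0.05), with L1 the lighter one. A quick check, with the hex values above converted to decimal:)

    # contrast of #EEEEEC (238,238,236) text on a #303030 (48,48,48) background
    awk 'function lin(v){ v /= 255; return v <= 0.03928 ? v/12.92 : ((v+0.055)/1.055)^2.4 }
         function lum(r,g,b){ return 0.2126*lin(r) + 0.7152*lin(g) + 0.0722*lin(b) }
         BEGIN { fg = lum(238,238,236); bg = lum(48,48,48)
                 printf "%.2f\n", (fg+0.05)/(bg+0.05) }'    # prints 11.36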
In my anecdotal experience, contrast ratios above 7 or 8 when using dark mode often end up causing halation which makes GTK apps unusable or uncomfortable for me for more than a few minutes.
Of course, some people may not have this problem and might find this choice of colors perfectly normal, which is exactly why theming and user choice are important, but that seems to be heavily de-emphasized in the GNOME world, where the focus is on finding "perfect" solutions.
> Another option may be to pursue something like a gnome extension
unsupported, generally looked down upon, and called hacks by the GNOME team, just like GNOME tweak tool and GTK_THEME
> No, the current draft allows changing all colors
Does it allow changing the background color of a window and foreground color of the text in a GTK app and would it be officially supported?
>I said that it's a problem in the dark version of the Adwaita theme.
I don't understand what this has to do with Adwaita either, as I said this can affect every toolkit and every app and every web site because most of them have their own theme. For example this site we're on now has an IMO awful default theme that strains my eyes, and doesn't have any official support for theming. We can't blame GNOME for that one.
>It's the only solution that comes to mind
Actually I mentioned a few other solutions that could work. Theming is not even a real solution sometimes, for example some closed source apps just don't support theming at all and can't be easily hacked to change the colors. If that happens you're pretty screwed, unless you pursue an alternate solution.
If the other GNOME apps decide to support these type of themes it will be a while before that happens too because their ports to GTK4 and libadwaita are not finished, and because they will probably wait for the recoloring API instead of doing it all in CSS like the text editor currently does.
>Of course, some people may not have this problem and they might find these choice of colors as perfectly normal which is exactly why theming and user choice is important
But this isn't an argument in favor of "theming and user choice", this is an argument in favor of providing a low-contrast option for accessibility.
>unsupported, generally looked down upon
This is incorrect. There are several official extensions maintained by GNOME developers.
>and called as hacks by the GNOME team, just like GNOME tweak tool and GTK_THEME
That's somewhat true, extensions can be very hacky. It is however a lot easier to maintain one extension than it is to maintain thousands of themes for every possible toolkit, app, and web site. If you view theming as the only other solution then your choice is really between one hack and another hack. If you're not planning to develop any themes or extensions and maintain them yourself indefinitely, then I really don't understand why you would even care about themes at all; the best option for you would be a low-contrast toggle.
I think you may be confused about the full definition of "unsupported" here. The tweak tool and GTK_THEME (and in some sense, extensions) are not considered unsupported because nobody wants those features. The missing piece is for somebody to figure out what a reliable solution is and make everything work correctly with a real API, and then volunteer to support that for years.
>Does it allow changing the background color of a window and foreground color of the text in a GTK app
In some way it does. I don't know why you're asking me this because I can't decide for you, if you're an app or theme developer you need to look at the draft yourself to see if it would be adequate for you. From what I hear, it should roughly do what you can see in those text editor screenshots I linked above.
>would it be officially supported?
I don't know what the status of it is, I only saw the draft. You would have to ask the developers. IRC or Matrix is a good place to ask questions.
> As for themes, I quite like the new libadwaita theme, and prefer it to the default GTK theme. You're free to disagree, but you can't say your opinion is correct, nor can I.
I agree... but the gnome devs are directly telling me my opinion is incorrect, by removing my ability to change the apps I use to a theme I like.
Oh wait, sorry, I guess I should have said gnome's apps.
This is an unfortunately very common misconception: you can still set GTK_THEME on libadwaita apps, it's only the gsettings configuration that doesn't work (due to implementation constraints, unfortunately).
> As for themes, I quite like the new libadwaita theme, and prefer it to the default GTK theme. You're free to disagree, but you can't say your opinion is correct, nor can I.
That's the thing - you can say a particular design decision is incorrect. There are objective metrics around good user interface design.
Taking the position that all design is subjective, and therefore "just an opinion", is why the GNOME team are widely derided.
It's not just an opinion - there are measurements you can make and metrics you can collect that serve as evidence for or against a particular design.
When it comes to UI stupidity in general, here are two from my work machine (Windows 10 laptop).
- there's the prominent notification speech bubble on the bottom right, which seems to be very inconsistently used. Almost all the notifications are duplications of an already popped-up notification, and a lot of important stuff doesn't end up in there. So it's just a nuisance. The worst is that for one application, the notification essentially says "you need to push this button to proceed" while covering up the freakin' button I need to push!
- If you have a Word document already open and minimized, and you open up a second one - at least when triggered from the browser in SharePoint - what happens is that the already open document comes to the front, with the newly opened one underneath! If these are based on a common template, I frequently find myself scrolling up and down looking for the graphic I need to refer to, only realizing 20 seconds later that I'm in the wrong document again.
These don't have much to do with UI skin annoyances, but they do have to do with this:
Apple stuff had really good UI while Steve Jobs was in charge. Someone powerful enough to get anything changed who is fanatically devoted to good, usable UI was gold there. All these weird inconsistent UIs are "design by committee" or worse, no design at all. Nobody has overall veto power, or at least nobody who cares enough.
> While a lot of people now instinctively hunt for labels that have hover effects, for a lot of people who are just not represented in online computer communities because they're just using the computer as a tool this is completely weird.
This. This! You see people on Apple devices constantly scrolling up and down just a tiny bit while browsing the web. What a weird tic, I thought, when I saw different people do that. Turns out scroll bars are always hidden in Safari for elements that have overflow with scroll behavior. So you never know if you're reading some important stuff that actually has another paragraph you don't see, because it's otherwise not obvious you're looking at some <div> with limited height. And it seems others are busy imitating this; I feel this happens with Firefox on Android now too.
Then these new navigation gestures on mobile. Gone are the three mighty buttons of Android. When the iPhone released, it was praised for cutting ties with anything that was established in the old computing world, all the implicit knowledge you had to have about how to use a classic desktop os. You could give the iPhone to any tech illiterate person and they could figure it out. Everything was discoverable, a fresh start with a clean and consistent UI language.
But today, not so much. Phones have now gone through the same iterations as the desktop OS, and apparently it's OK that you now need to learn a gesture to return to the home screen, or go back one step. There is no way someone who never used a phone before will somehow think "I might have to swipe into the screen from the left!" So now, just as with the desktop OS, we just assume you already know how to use a phone, and add "optimizations" on top of that which someone needs to tell you about.
While I'd like the old button as well, I think flat design is great.
When MS brought out Windows Phone and then incorporated the same design language into Windows 8, I was like "wow, Microsoft got it right for the first time".
Same for iOS 7, same for macOS. I never liked the "Apple" look of UIs before they went flat. After they switched to flat I started to really love it.
Not to mention that developers and designers need to do less, and users have less cluttered visual stimuli to process.
Skeuomorphism and non-flat designs in general had a good purpose, and they fulfilled it perfectly: they taught people how UIs in the digital world work by making elements look like real-world things, e.g. buttons being 3D-ish. But that era is over and we collectively don't need it, so we've switched to something simpler and more efficient in many respects.
Of course, taste is taste and one may hate it, but I love flat design in almost all aspects, and definitely don't think MS and Apple are "stupid" as the blogger says.
> Microsoft was stupid for doing it. I think Apple is stupid for doing it
Microsoft isn't actually responsible for this nonsense. Metro did get rid of bevels, but it didn't get rid of obvious buttons: they simply turned into trivial colored rectangles.
I'm not entirely sure who "refined" it, but Microsoft did eventually end up copying a bad iteration of their own good idea.
The font rendering on that example of "the perfect button" is rather atrocious. It's kind of sad that Windows is (was?) the last bastion of serviceable font rendering.
Interesting, I always disliked Windows font rendering and much prefer the way macOS renders fonts. I couldn’t explain why, but I know many graphic designers who say the same. I guess it’s a matter of taste!
Perhaps a better survey would be the one by Firefox[0] since it isn't biased towards gamers. Basically HiDPI monitors are like a combined 2% of all Firefox users.
1080p usage is lower than on Steam but that is because 768p usage is higher :-P.
Just like programmers should have to run their apps on 10 year old computers and feel the pain, so should designers have to try their work on 10 year old monitors.
That isn't going to be the case for Apple users though, is it? Apple don't have any non-Retina devices in their lineup now. The only Apple users who will see a low-DPI display are those plugging in a low-DPI external monitor. They are going to be a minority.
I guess that is the reason why every musician should copy their master to a cassette tape and go to a shitty car with a tape-player and listen there, why every developer should have a very old crap-computer to run their software on and why every UX designer should have a crappy screen to view their design on. :-)
> 4K (or better) monitors have long since become the choice for most of those who cares about how font rendering looks.
This is the hardware solution to something that can be solved in software. If your game is laggy, of course upgrading to a 3090 is going to help, but that doesn't mean you're solving the problem.
It can't be solved in software, due to the Nyquist limit: Sharp high-contrast edges (text) necessarily have high frequencies, and you need a high sampling rate (pixel density) to produce a good approximation of that signal. People have spent a lot of effort to push anti-aliasing techniques as far as they can go, but displays with higher pixel density are just better for displaying text.
I agree that if you are making something and your target audience will be using non-HiDPI displays, then you should actually design your thing on those displays too. But no matter how good your software is, pixel density places a limit on what you can achieve.
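To put rough numbers on that (back-of-the-envelope, assuming a 24-inch 16:9 panel; the sizes are only illustrative):

    % Nyquist: a grid sampled at f_s samples per unit length can only
    % represent spatial frequencies up to f_N = f_s / 2 without aliasing.
    f_N = \frac{f_s}{2}
    % A 24" 16:9 panel is about 20.9" wide, so:
    \text{1080p:}\quad \frac{1920\ \text{px}}{20.9\ \text{in}} \approx 92\ \text{ppi} \;\Rightarrow\; f_N \approx 46\ \text{cycles/in}
    \text{4K:}\quad \frac{3840\ \text{px}}{20.9\ \text{in}} \approx 184\ \text{ppi} \;\Rightarrow\; f_N \approx 92\ \text{cycles/in}

Doubling the pixel density doubles the highest edge frequency the panel can represent at all; antialiasing only smears the frequencies above that limit, it can't bring them back.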
Pixel-perfect rendering (i.e. hinting) is needed in order to reach the Nyquist limit. By forgoing it, you're limiting yourself to an effective resolution that's only about 0.7× the physical one.
Bitmap fonts (and hinted fonts are automatically generated bitmap fonts) are pixel art, so the Nyquist limit doesn't apply. In pixel art, the pixels are treated as little squares or rectangles, not band-limited point samples. This means you sacrifice the ability to display arbitrary shapes, but I'm not trying to reproduce printed text so I don't care.
I have perfectly sharp text on a 1080p monitor because I disable antialiasing in ~/.config/fontconfig/fonts.conf and force full hinting.
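For anyone who wants to try the same thing, the relevant fontconfig properties are antialias, hinting and hintstyle. Here's a rough sketch of that kind of fonts.conf, generated from Python purely so there's something runnable to copy (you'd normally just write the XML by hand, and you should back up any existing file first):

    # Sketch: write a ~/.config/fontconfig/fonts.conf that disables antialiasing
    # and forces full hinting for all fonts. Back up any existing file first.
    from pathlib import Path

    FONTS_CONF = """<?xml version="1.0"?>
    <!DOCTYPE fontconfig SYSTEM "fonts.dtd">
    <fontconfig>
      <match target="font">
        <edit name="antialias" mode="assign"><bool>false</bool></edit>
        <edit name="hinting"   mode="assign"><bool>true</bool></edit>
        <edit name="hintstyle" mode="assign"><const>hintfull</const></edit>
      </match>
    </fontconfig>
    """

    path = Path.home() / ".config" / "fontconfig" / "fonts.conf"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(FONTS_CONF)
    print(f"wrote {path}; restart apps to pick it up")

Whether the result looks crisp or jagged depends heavily on the font; well-hinted fonts look like classic Windows rendering, poorly hinted ones can get ugly fast.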
Nitpicking, but a pixel is always a point sample, not a little square. The native medium for displaying pixel art is CRT displays, where these point samples are used to control an electron gun that introduces a Gaussian blur, basically the best case for purely analog upsampling (or DAC, rather) of a pixel raster. No little squares in sight, though! What you're talking about is using pixel-perfect control to reach the Nyquist limit, as mentioned in my sibling comment.
Pixel art is defined by treating pixels as little squares. There is no single native medium for displaying pixel art. The same techniques were used for games on portable systems with LCD displays. Additionally, there are many examples of printed art using blocky pixels.
The Nyquist limit is irrelevant to pixel art, because the Nyquist limit applies only to reconstructing signals from point samples. Using pixel art on a monochrome LCD (no subpixels), I can display signals with maximum frequency limited only by the sharpness of the edges of the little squares.
Linux font rendering has come a long way in the last couple of years. With FreeType 2 and Windows fonts taken from an installer ISO (and not the ancient corefonts package), you can actually get some overall mostly okay font rendering. Every now and again the hinting bugs out and it looks unreadable, but it's far better than the olden days.
What exactly has happened in the last couple of years? And how would installing those newer Microsoft fonts affect font rendering elsewhere than where they are used (not at all by the desktop environment by default, and as to web sites, it depends on what you use)?
IIRC the last big-ish change for me was around 15 years ago when some patent-encumbered patches were enabled by default by (some) Linux distributions. Which is okay for me because overall I think freetype font rendering is better than that in Windows with most fonts, the arguable exception being certain Microsoft fonts – however, freetype has the additional advantage that it rarely if ever produces pixelated ugliness with unhinted webfonts, unlike what I've seen happening with Windows.
> What exactly has happened in the last couple of years?
Some subpixel rendering techniques became unencumbered quite recently. So newer versions of freetype should have nicer subpixel rendering enabled by default.
Maybe there's some configuration magic you can do, but I don't think I've ever seen decent font rendering on a Linux distro out of the box.
It's one of the biggest reasons I've never switched to desktop Linux, and I'm not even one of those super picky people who talk about the intricacies of typefaces all day.
On macOS Desktop, checkboxes, buttons and everything else are still designed for mouse/kb
On GNOME, things are designed for vertical and touch screens, so you're forced to scroll endlessly. Windows is doing something similar, which is an indication that they target mobile users. They are so out of touch.
Only Apple knows what they are doing when it comes to UI/UX for desktop, and they make sure they don't mix desktop/mobile UX
> This has all the features I like from GTK 3. The only issue with it is that font rendering looks horrific, but that might just be my machine
I don't understand linux users' font tastes, apparently. To me the GTK4 screenshots are the only half-way decent font rendering in the lot. All the GTK3 shots have broken kerning in headings, not to mention the awful (lack of?) antialiasing.
I keep wondering how long it will take for putting borders and edges or some graphical hint around functional UI elements (whether 3d or not) to come back into style. It's such a clear example of form over function to remove these (and if you ask me it doesn't even look good anyway, but let's put aesthetics aside).
I would love to hear from any actual UX professional who wants to try to defend that making functional elements look identical to non-functional ones is a good idea. I would be fascinated to hear what the argument in its favor could possibly be.
Non-techies have been using flat design interfaces for like a decade now and the world hasn't ended. If you personally like another style, totally fine. But it's odd that programmers always think they know more about usability than people who study it for a living.
I tend to see it from the opposite perspective. Physical looking buttons, borders, and always-visible controls were designs born from rigorous user testing in the 1980s and early 1990s. The push to flat design that really picked up steam around a decade ago did not have, as far as I can tell, the same level of testing, and was promoted largely by programmers who wanted designs that were easier to implement.
Platform developers were incentivized to go flat because it lowered the cost of development and development time if developers were using fewer, simpler custom assets. At this time, both Apple and Google were bragging about how many apps were on their stores, and how fast they were growing, and neither wanted to fall behind the other.
But admittedly, my perspective was that of a third-party developer. I have no direct knowledge of why Microsoft, Google, and Apple all embraced this trend, and I have only anecdotal evidence that it made things more difficult for novice users. There could be a treasure trove of research showing that flat is better, but I have never seen it presented.
Do non-techies have a choice, though? Given the lack of theming in many modern apps, it's not like users have a choice between flat design and the designs of the '90s and 2000s. What about sticking to old versions? I can only use old versions of software for so long before it becomes infeasible for many reasons (security and interoperability being the biggest), and even on Linux, where users have a great deal of control, not all GUI software is themable.
It's not just programmers who complain about modern UI/UX trends. The Nielsen Norman Group, a UI/UX consulting group that was founded by HCI legends Don Norman and Jakob Nielsen, have written articles against the overuse of flat design (https://www.nngroup.com/articles/flat-design/).
The problem isn't techies but non-techies, unless you think only techies should be using Linux GUIs...
> I have had to explain to people tons of times that the random word in the UI somewhere in an application is actually a button they can press to invoke an action.
There is a reason why Apple backpedaled on all that flat insanity for MacOS and iOS. Graphic designers and trendsetters usually know very little about usability, these are 2 completely different disciplines, and the Flat style was imposed by graphic designers, not UX people.
> But it's odd that programmers always think they know more about usability than people who study it for a living.
I’m sure it has something to do with them needing to justify their existence. No reason to have a designer if nothing has changed in the world of design. Programmers learn new languages (or go back to old ones), designers do the same with interface design principles.
The old button was good in the desktop context, but I can't get behind this point:
> The design even works very well on Linux phone formfactors.
The author must mean something very different when they say "works very well" than I do when I say that. As much as I want it to, nothing about GTK works well in a phone form factor, IME. I look forward to continued improvement that will make that statement incorrect.
I for one don't want the phone form factor to influence my desktop experience. They are two completely different workflows that have conflicting needs IMO.
Agreed. The major sin of modern UI/UX is the desire to create a one-size-fits-all experience that works for both touch devices and desktops, when in reality the work styles of touch devices and desktops are very different. Unfortunately Windows (since Windows 8), macOS (since Big Sur), and GNOME (since GNOME 3) have sought (and continue to seek) these unified designs that work okay on touch devices but are a downgrade on desktops. The desktop reached its zenith in the late 2000s with Windows 7, Mac OS X Snow Leopard, GNOME 2, and KDE 3, and today's desktops are a downgrade from these (though KDE Plasma is nice and is an improvement over KDE 3).
I'll go as far as to say that the desktop experience hasn't received much love by major software companies; since the late 2000s the money has gone toward smartphones, tablets, and the Web. Since those platforms get the attention, the desktop is increasingly getting populated with ports of smartphone and Web apps; hence, Windows Metro/UWP, Catalyst, Electron, and the like.
The main influence so far has been enabling smaller window sizes for gtk+3/libhandy apps, which is good for all users, especially those on older, lower-res hardware. Hopefully libadwaita can get forked into something like a libhandy equivalent for gtk+4 that otherwise preserves sane defaults and doesn't enforce their silly "flat" theme choices.
>The main influence so far has been enabling smaller window sizes for gtk+3/libhandy apps
I don't think so; touch targets are a lot bigger. Using the default theme for both, a KDE app with equivalent functionality will take a fair bit less space than a similar GNOME app. Of course, I think Plasma Mobile uses a theme that makes the touch targets bigger...
KDE also manages to get a lot more functionality into their similarily sized windows. Take a look at this page showing comparisons:
Flat I can just about tolerate, the stupid padding round every single element is what bugs me.
There have been loads of studies showing that the number of mistakes in coding or spreadsheet-type work goes up massively when you have to scroll windows, and yet gtk3/4 is a bloated, oversized mess.
Mouse precision has increased, so why on earth do we need toolbars that can take up to 1/8th of a 768p screen? Stupid.
Another one of those things that irritates me about "modern" design --- instead of showing an empty list, which would've been obvious that it was empty by having subtle cues like gridlines (also now disappeared by this trend), it decides to treat you like an idiot and show a giant banner stating what should be obvious. The "Add Tasks" button probably disappears after you make the list non-empty, and just looking at that UI, I'm not sure where the "normal" one is. But I guess it gave a designer an excuse to create some more bland Corporate Memphis art?
Superficial opinion: GNOME 3 is the best-looking DE for Linux, but I personally prefer using KDE. It's a shame then that the new flat design in GTK 4 looks ugly. I suppose it's all subjective -- but I'm struck by how viscerally bad most flat designs look to me. I suppose it's another "aesthetic movement" that most of us philistines won't understand: https://en.wikipedia.org/wiki/Brutalist_architecture
That's precisely how I feel. Words can't really describe how taken aback I was when I typed in "sudo pacman -Syu" that fateful day and rebooted into... whatever GNOME 40 was.
Not a fan of adwaita at all, I think everything GTK is too wasteful in terms of screen space. I think Qt is much more efficient. But that's why we have Linux. Everyone can make their own choices
I sorely miss the days when it was possible to explain to someone the principles of how an OS UI worked, and for those principles to be reusable across a wide array of applications.
Today, every application seems to just do its own thing, and continually churns its own UI, chasing trends.
There was a time when an elderly relative would say “computers are too hard to use”, and I could recommend a book or manual that would help make it clearer.
To say that it is the pinnacle of design is a wide stretch given what they did to menus in all the apps. Obscure buttons and burger menus replaced the menus.
>Now one of the worst parts is that everywhere I only even hint at not completely loving the new libadwaita theme I instantly get shut down and disagreed with before I can even get the chance to give some feedback.
This is so typical of the open-source community. For a supposedly 'open' culture, it's full of ivory towers and (layers of) cool kids' clubs, meaning that if you want to build an app, or change something you don't like, and want others to be able to use your changes as well, you are met with layers of people who are at best indifferent and at worst hostile. But the thing is, you absolutely NEED their approval, unless you can make your own distro (your own club), fork their code, and try to somehow keep up with the main branch.
The last thing I want to do with my free time, is solve an issue, only to be met with draconian authority telling me how to build my app, and then have to solve my issue again in 6 months, when the platform people break everything.
I think the culture of abusing volunteers on a volunteer-driven project is exactly why desktop Linux can't break out of the niche OS category.
GNOME is pretty much the worst offender here, which really makes it all the more baffling. They constantly plead helplessness, begging for more contributors or feedback on their OS, and when people try to contribute or give feedback on their OS, they say "no, not that part!" and go redesign something they don't take personally. It's especially ironic considering how they worship "usability" on the other hand, insisting that their lack of functionality is so that they can better emulate macOS. It's remarkably frustrating.
But it's also the sort of behavior that keeps other developers going. KDE is great at soliciting change from its users, and nothing is "off the table" for scrutiny or redesign. Many other desktops share that same philosophy. The pockets of high-and-mighty developers are what end up shooting desktop Linux in the foot, and while I do happily daily-drive Linux, I can say with certainty that it will never hit the mainstream with maintainers like these driving the best-funded desktop environment.
The problem is that UI has reached its pinnacle with Win2k/xp classic. So if you are a designer, you still have to do your job, you gotta design something. But it is never going to be on that level, in efficiency, looks or usability.
It's such a nihilistic proposition, I can hardly be mad at designers of today. Go ahead, make that button flat and call it clever. Why not
Which parts are links? Which aren't? It's a little more obvious that the main text is a link but not obvious that you can click to view the comments. (It can be inferred, of course, but I don't believe that should be necessary.)
A little styling to add underlines back to links, plus a little bold to indicate the comments page, fixes this:
Some might say that this looks ugly, and that's a valid view. But to me, it's usable.
(Why are both the "time posted" link and the comments link bolded? Because they both go to the same place. I didn't even know before I made this style that that was the case! I was surprised to see that the style I made did both, but I'm not complaining.)
> This is an unfortunately very common misconception: you can still set GTK_THEME on libadwaita apps, it's only the gsettings configuration that doesn't work (due to implementation constraints, unfortunately).
> The dark theme, while not officially supported as a normal application theme, works absolutely brilliantly and is a great example of how to design a dark theme.
There are only three small screenshots of UI in the dark theme, but to me it looks very clearly like a naive "invert all the design token colors and call it a day" implementation.
> I personally don't like the flat look. I think Microsoft was stupid for doing it. I think Apple is stupid for doing it and I've been praising Adwaita for being the sane option in an insane world.
I partly agree. Flat look done poorly is terrible. Apple is the only company who's been semi-consistently doing this well.
The Linux ecosystem is having an identity crisis similar to Microsoft's and Apple's. Some are of the opinion it should be mobile-first, others desktop-first, or server/IoT/embedded-first. These are all competing concerns and will lead to further fragmentation in an already fragmented ecosystem.
The design of libadwaita is very reminiscent of the time I was at Apple. Around the time they decided to unify the codebase and lay off the entire OS X team and reorg it into a new software team. I noticed the design changes as part of dogfooding internal OS X versions and design changes were jarring to say the least.
In short, a product cannot be all things to all people. It should play to its strengths and delight its existing and new customers/users.
I dislike GTK; my opinion is that Xaw is better in many ways.
GTK has problems including bad designs of many things: the file selection dialog box; scrollbars (which are not always consistent); bitmap fonts sometimes don't work properly; it always uses Unicode, and although you can enter Unicode control characters they are not displayed while editing (which makes it difficult to use); some things are not documented well; configuration is difficult; kerning in editable text; etc.
So, I do not use GTK in my own programs.
(Some people say that if a program does not need to be documented then it is easy to use, but I disagree; a program is difficult to use because it is not documented well.)
I miss the days of GTK2 when you could set colours for things individually without having to edit CSS. You could take the buttons from this theme and the window borders from that one, and then make the active window title bar a different colour to the rest. Moving to GTK3/adwaita made that a lot harder for me - like yes I can spend a couple of hours editing and testing changes to the stylesheet but before there was a button in the settings for this.
Look, I get all the advantages of the newer way, but for me it's a feature that I can make my UI look the way I want it, even if that's not the same as some theme designer's preferences.
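For what it's worth, the stylesheet route can at least be scripted. Below is a rough sketch of the kind of thing involved, using GTK 3 from Python (PyGObject); the selectors and colours are just examples, and whether they apply cleanly depends entirely on the theme underneath:

    # Sketch: layer a couple of user CSS overrides on top of the current GTK 3 theme.
    # The selectors and colours below are illustrative, not a recipe.
    import gi
    gi.require_version("Gtk", "3.0")
    from gi.repository import Gtk, Gdk

    CSS = b"""
    window { background: #2b3a4a; }         /* recolour window backgrounds */
    button { border: 1px solid #888; }      /* put a visible edge back on buttons */
    """

    provider = Gtk.CssProvider()
    provider.load_from_data(CSS)
    Gtk.StyleContext.add_provider_for_screen(
        Gdk.Screen.get_default(), provider,
        Gtk.STYLE_PROVIDER_PRIORITY_USER)

    # A minimal window, just so there's something to look at.
    win = Gtk.Window(title="css override sketch")
    win.add(Gtk.Button(label="Themed button"))
    win.connect("destroy", Gtk.main_quit)
    win.show_all()
    Gtk.main()

In practice the same CSS can simply go in ~/.config/gtk-3.0/gtk.css and it applies desktop-wide, but the point stands: it's hand-editing selectors and hex values, not a button in the settings.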
Perhaps I'll be crucified for saying this, but I actually like flat design. Or at least, I think I do. I always thought flat design was as the name suggests: flat. As in, no gradient, no faux 3D, and I guess no borders. I'm not such a fan of rounded buttons, I think they often scale badly (pretty much anything round will unless it's a vector I guess), but I didn't think that was a requirement of flat design.
That being said, I'm not sure what it is now, because that original button looks like flat design to me. The only "non-flat" element of it is the 1px border, but surely that's not the source of all this commotion?
Going flat seems a bit late. Apple are in some places backtracking and (although slowly) going back to some form of 3D. Have a look at the evolution of the Logic UI: 10.4 was flat, 10.5 partly undoes the madness.
The thing about a lot of GTK apps that kills me is the client-side decorations. I think the window manager should be viewed as its own app with consistent controls. CSD make the title bar of each window different so I have to waste time figuring out where the safe grab area is. The title area also ends up being so fat that you barely save space over normal menu and title bars, and usability is worse because of the hamburger menus.
I recommend ignoring the blaming (by users) and the ignoring (by developers) on both sides, and reading the details involved. It looks like this was mostly intended for, and used during development on, HiDPI displays, and that they need people experienced in font-drawing matters.
Anyone else happy with Gnome? It's good to have an opinionated DE that just gets out of the way.
It launches apps, provides me with workspaces, and abstracts all the low level details away so I can focus on actually using my computer.
If people want to customise every minute detail of the themes they shouldn't be using Gnome/GTK. The devs don't have to cater for everyone's needs, they are allowed to focus on a target audience.
I'd be happy with it if the developers listened to the users. GNOME 3 had a good balance of "getting out of your way" while still giving power users what they want. I had to leave GNOME 40 within hours because of its unbelievable changes to the UX, and I say that as someone who used to be a pretty big fan of the desktop.
The other nice thing about "the nice GTK button" is that it feels consistent enough with the current tab buttons styling in Firefox: https://imgur.com/a/OEKc4So . It also just feels better for direct manipulation (touch).
Is there any way to vote for keeping it? Does 'high contrast' mode possibly bring it back?
I'm so grateful for today's software diversity which allows me to so easily abandon and unsubscribe from non-consensual interface changes such as this one completely and switch to a different environment as many times as I need to until I find one which just maintains the interface which I want.
(Which is basically Windows 95 with a few minor additions.)
I suppose the price you pay of having a consistent desktop experience is that changes to the desktop look affect the look of your app, and you don't have much control over this.
From a user perspective, I prefer this to the hodgepodge on Windows, which I find disorientating. But I agree that a bit of non-flatness can be very helpful for the eyes.
> In this dark theme the edges and divider in the theme suddenly become light. I find this really jarring and it looks like it's just an inverted theme
You’re just used to the wrong approach, the old one actually is an inverted-grayscale theme.
The new one correctly follows the principle that foreground == lighter, background == darker in dark mode.
User here: the only thing I don't like about GTK buttons is that they are large, and the toolbar they are in has a lot of empty space around them. Does everybody have multiple large monitors on their desk? What about 13 inch laptops?
I don’t like conspiracy theories, but I always wondered if the flat design trend wasn’t a genius way to nullify the advantage of higher powered graphic platforms.
I may be alone here, but maybe there are mountains of functionality never tackled in Gnome. Basic things like trackpad gestures are just now arriving for example.
Meanwhile, the visual design has been hashed over and over: I've lost count. Why is L&F consistently prioritized over basic capabilities?
> Basic things like trackpad gestures are just now arriving for example.
All they had to do was go fix the input drivers, the user-space library to communicate them, the compositors to be able to mediate gestures, the toolkits to process them, the X11 server extensions, wayland protocols, etc...
One of the biggest problems in this is that you can't just fix one input driver. You have to go fix them all so that the experience is consistent across as much hardware as possible.
Oh yeah, I get it, there's a ton of work there. And I am grateful to everyone touching Gnome over the last 2 decades. I have even supported the trackpad fellow. I'm just asking where the developer attention cycles are being spent.
> Why is L&F consistently prioritized over basic capabilities?
My lips curl ever so slightly apart, my tongue perched between both rows of my teeth. A faint whistling sound escapes between my two incisors and tongue, reminiscent of the "th" consonants that one might expect to hear in the word "thumbnails". Before I'm even capable of getting to the word "filepicker", hundreds of people are stepping on one another to come tell me what I already know. A wave of developers have risen from their standing desks, eager to beat me down for such a blatant disrespect of their favorite desktop. Forty-some people have convened to demonstrate how I can click and drag icons from Nemo onto a file selection dialog as a replacement. Others yet are deriding me for not implementing it myself, and lacking respect for the developers. Some people have taken to referencing their code of conduct, while more are furiously typing on Twitter, desperate to label me a troll and move on with their life.
They all filter off after around 25 minutes. None of them used that time to put thumbnails in the file picker, thou... oh my god, they're coming back again...
A lot of work in OSS desktop environments is driven by someone having a personal interest in building a feature.
I wonder if that plays into it (in addition to the effort hurdle mentioned by audidude). I turn off trackpad gestures every time I get a new machine - I just don't get any use out of them. I can't be the only one. And since Gnome lagged behind on gestures, we could presume many Gnome developers either have perfectly good gestureless workflows already, or haven't even had the opportunity to see what gestures can do.
Sorry to side-swipe this. The pictures in the article all have a fuzziness I've come to associate with Linux GUIs over the last couple of decades. Is there a specific reason for it? Is the font the problem? Some kind of font-hinting patent they can't infringe on?
I hate rounded corners, giant title bars, no clear borders or 1px borders for grabbing, disappearing scrollbars. I hate when artsy minimalistic designs waste my screen space. Win95 & co got these right for desktop. (Touchscreen would be a different matter, I admit.)
Flat design is pretty awful. It really does seem to be popular because it's the minimum possible amount of design.
Skeuomorphism definitely is harder to get right and different elements can clash. But that's less of an issue when you're making one unified theme for all.
>I personally feel like Adwaita has been the pinnacle of Linux design
Unity to kde
Never liked GNOME and couldn't endure it for more than an hour.
Yeah, he stated it's his personal feeling, but I still strongly disagree that there's any such thing as a "pinnacle" of design.
I think the Disney+ app has to be rock bottom of button design - the blue rounded one is selected, the white flat one is not. I always get it the wrong way around, even after several months of making the mistake.
While I'm here, what is wrong with menus? The ribbon interface needs me to learn what 100 symbols refer to, or else hover repeatedly to find what I'm after. I can read. Been doing it for a while.
GNOME is in a damned-if-you-do, damned-if-you-don't situation here; if they don't adopt the flat theme, they look "old fashioned", and if they do, they have a crappy UI.
The flat design fashion is now in full rage. Everything looks like a bad web page, and users waste their time guessing how the interface works instead of getting things done.
For everyone mad at "Don't theme my apps," consider why developers are doing it.
Getting complaints about how your app is broken because of an overzealous theme that is beyond your control sucks. And after 10 years of dealing with it, GNOME developers decided it was enough.
And... I don't wholly agree, but at the same time, themes had a decade to get their act together and stop angering GNOME developers. They didn't.
But why don't Qt/KDE developers lose their minds? Either GTK theming is broken, or the GTK app developers are not using it correctly, or GNOME devs are assholes and really, really want to force their branding and vision. The above OR is not exclusive, so it could be all three things.
I develop GTK3 apps. Theme breakage is generally a very real concern, as it usually points towards an issue with either
a. The usage/implementation of widgets in the application
or
b. The stylesheet the end user is implementing
In either instance, the solution is very simple and within arm's reach. It will always make more sense to encourage robust development practices over building fragile application stacks.
The don't theme my app people were complaining about their bug trackers being full of theme-related issues. Instead of just setting up a filter rule in their tracker so they could ignore those issues, they decided to go super draconian and remove theming for the entire desktop.
That disproportionate response to what comes down to an organizational shortcoming on their end made people upset, I'm not sure what they expected.
Note that they don't care about the themes their end users pick, though. You can theme your app just fine. They just want operating systems using their software (with their logos, trademarks and support links) to stop shipping their custom themes by default.
They just don't want Canonical or Fedora to ship a theme that makes all applications that didn't come bundled look like shit. If you like the Windows 95 Hotdog Stand theme, you can configure that and everybody probably agrees that that's great. If you inflict that pain upon yourself, that's your problem.
The current "solution" to themes breaking applications is to just not follow the system theme any longer. Everything gets packaged with a hardcoded theme in a sandboxed environment and you'll just have to live with that.
Now you know why the application developers are so loud and angry. We can’t have nice things because distributions shipped broken default configurations, and that’s messed up.
Don't say "the application developers" like we're all one person. There are multiple GTK developers in this thread that don't care what distros ship with: this is exclusively a GNOME complaint. Don't drag the rest of us into this.
So what? Why should they continue to support some minor feature if it costs them a lot of resources and they think it's not worth it because it doesn't really fit their project?
If you have a feature that is used by one customer out of thousands but it's causing problems at every update you push out. It might be better to remove the feature and fire the customer, than to keep supporting it no matter what.
Honestly, I would rather remove the ability of developers to make shitty controls and widgets. GUIs either ask you for some set of data or display it to you. We've had a basic set of controls that work great for decades now. I'm all for a platform that just requires the use of those and maximizes the ability of users to theme away.
Wow, that's definitely something. Implying that tinkering with the low level aspects of a system are acceptable, but don't you dare apply a different stylesheet, because adjusting colors is delicate work that shouldn't ever be done by anybody but the developer.
I agree with the key points though, which is that distros need to stop messing with themes unless they can validate that all applications work with custom themes.
They bear no ill will against users who download or make their own theme.
Getting rid of themes is a decision distinct from that of choosing the one true theme. Going with FLAT was a poor choice and is what the current discussion is about. If there is any irony it's that more people will want to change the theme now, since the default is worse than before.
Basically fashion. E.g. if you asked people to describe this app http://ptkdb.sourceforge.net/demo.html I'm sure a lot of them would use the word "dated", but the actual widgets are pretty similar to what we're using today, and the layout hasn't changed much either. It's just the theming fashions that keep changing so radically.
I don't think adwaita is that bad, but rather feel like they have _yet_ to polish some small details.
Like the interline spacing on things, sometimes it feels inconsistent. KDE menus, for example, have a nice spacing - but GTK ones feel cramped. And those submenus that they place on things like the top-right menu on the panel have different line heights.
Some other third-party apps, for example that mail client I tried the other day (it wasn't Evolution, but I can't remember its name), had serious layout issues. libadwaita was supposed to fix those inconsistencies and make devs' lives happier, but...
And speaking of buttons and top-right corners, I will never, ever get why they place the open/save/select dialog buttons in the top-right corner of the dialog. Where you are used to finding the 'close' button. Why?
They're probably referring to the functional regressions it made with its first official release, and how it furthers the idea of GNOME/GTK lock-in. libadwaita has made it extremely difficult to package cross-platform desktop apps, especially while appearing native on different desktops (e.g. adopting the native Breeze look on KDE while retaining the native Adwaita look on GNOME). The lack of this functionality at launch (and subsequent empty promises of a replacement) has rightfully left a bad taste in some people's mouths, particularly now that much of the GNOME leadership denies that this is a problem in the first place.
I've been using computers since ZX Spectrum days. I've seen all kinds of design-activity and design-philosophies. What's missing from today's design-thinking for me, is context-appropriate direction.
Some things should be flat, or more precisely, subdued. An Instagram photo page shouldn't be all buttons and underlined links, sure - the photo is the important part, so the rest should be subdued. But the settings page should be all skeuomorphic buttons and links and dials and switches; that's what it's for.
It looks to me like there is an unhealthy tribalism invading design-thinking. As long as I have the correct ideology, no matter the context - my design is "good". Get on the flat-design bandwagon. Forget decades of UI research. Think in screenshots, doesn't matter if it's all jank and moves things in and out from under the user's fingers. Put everything on one screen, don't contextualize.
"I'm being paid to deliver", not to think through. To ship, not to test.
My designs require a gaming PC to run a document-like page, and don't scale down. "I'm designing for the future", it will all work out. Eventually. For some.
My tools and my education are a sliver of what a designer used to think about, so I can't really do anything about it anyway, that's just the way it comes out. You don't honestly expect me to learn about the systems and devices my users are running this on, do you? That would make me a coder, that is to say not a designer.
How did that happen?
How did being knowledgeable and well-rounded at your industry become rare? Business reasons? How did using a computer become constantly surprising and never straightforward? Business reasons? Why do I have this and that humongous app just for one or two things it does, and now there's 200+ apps in there? Always waiting for something to load all that, just so I can do this one thing. Business reasons?
It's not business reasons. OSS is no better.
Maybe it's that the approach that got us to where we are, the moving fast and breaking things, keeping your nose down in your own corner isn't what will get us out of here?
I mean, my computer usage went down over the last 12 years, like 5x. I just don't want to. I'm more or less done with all of it.
It all feels like it got bolted on, and then bolted on, and then duct-taped around.
Terminals, mobile UI, desktop apps, the Web, games, appliances, work tools, control systems; ALL OF IT.
Sure, there's loads of it, and it's cheap. But there is still no choice, not really. This jank or that, it doesn't do it for me. I've checked out, more or less.
I stopped looking at new laptops, and when my phone breaks I pull a couple of old ones from the drawer and make me a working something. I'm not giving them my money, why would I? They got nothing I want. Apple, Google, Lenovo and the rest of them equally.
The last MS product I enjoyed using was MASM. That's until I found out about NASM.
There is no dumb car I can buy. Everything is finicky touch and not user-serviceable. Many things are so flimsy you can't lean on them at all.
But then, one has to understand that if the whole world around you seems to have lost the plot, chances are it's you. So, is it me, what do you think?
Because I can't handle it, I'm legit checking out.
The problem with buttons that have shadows (even subtle ones) is that you can't rotate the screen 180 degrees and have it look nice. This is a problem if you put your phone or tablet flat on the table and someone sits opposite to you and looks at the same screen.
Better to keep the design flat. It's also simpler.
The old non-flat button styling looks entirely symmetrical to me. Not to mention that designing applications to look nice when viewed upside down is just silly. Even if your primary use case was to have two people opposite each other view your app, your biggest problem would be all the text being upside down!
You can't make an omelette without breaking a few eggs. (You can't break the back of modularity-induced fragmentation and make a consistent GUI without making a few people unhappy with the UI design that the majority chose).
That is Gnome. Gnome has become a top-down project that values consistency/coherency over modularity/theming. It's an extreme, and I suspect that they went too far, but with it comes a number of benefits, such as a unified visual style across all apps, and an easy to use internationalization/localization subsystem.
Gtk+ 3 with a custom theme allowed for a unified visual style across all desktop environments, not just GNOME. I could use a MATE- or Xfce-bundled app and not have it look out of place in a GNOME desktop, or vice versa. It's sad to lose this because of a combination of pointless churn and active hostility ("don't theme my app") to outside efforts that might improve the ecosystem.
>> You can't break the back of modularity-induced fragmentation and make a consistent GUI without making a few people unhappy with the UI design that the majority chose
Not sure what that even means. The majority are not the Gnome developers, and many users will just adapt to whatever they're fed, even if it's worse.
>> That is Gnome. Gnome has become a top-down project that values consistency/coherency over modularity/theming.
No. People aren't complaining about a lack of consistency or a lack of theming (some do). They are complaining about a shitty design where the elements are consistently nonsensical and harder to understand than past versions of the same.
Top-down means a few people who are at "the top" think they know better. Not that they're great designers either - they are copying stupid trends that other "design" people came up with. This stuff is complained about on all OSes these days. It's not a vocal minority either, and you can tell because nobody complained that GTK was falling behind and needed to update to a modern "flat" design.
What you're not understanding is, just because something seems nonsensical to you doesn't mean that the majority of users don't find it perfectly reasonable. But you've already waved that possibility away by saying that most users will take whatever they're given, so I don't suppose you'll consider it.
> That is Gnome. Gnome has become a top-down project that values consistency/coherency over modularity/theming
They care about that so much that they break Gtk+ every few versions, to the point that in 2022 there are still Gtk+ 2 apps around, and there will be Gtk+ 3 apps following old design paradigms for years to come. So much for consistency.
It's 2022 and I don't see Qt 4 around anymore, or apps using it still being widely in use. Guess why? Because Qt developers actually give a damn about people using their library, and they give a damn about people using the _latest_ version of it.
What GNOME is doing today is telling everyone to go fuck themselves, and pushing people towards writing their next app in Qt, which by the way integrates nicely with GNOME and does not look like garbage everywhere else. Now with PySide being finally a first class citizen and working well with QML there's not even the "but C++ is ugly" card to play against going Qt.