Flat UI has been the downfall of the Windows UI. Firstly, it is ugly. Secondly, it is not intuitive. Thirdly, it is hard to differentiate from Web apps.
From a developer's point of view that last point is very important. If your native app is not going to look or feel any different from an app built using Web technologies, why limit your customer base to Windows users?
Sure, native apps are better at reducing resource consumption but if your app offers significant functionality users won’t mind if it uses some extra memory. This is the reason Microsoft’s own Teams and VS Code products were built using Web technology.
Another issue, from a technical point of view, is that XAML, MVVM and 2-way data binding are outdated. MVVM introduced 2-way data binding to the world, and nearly every JavaScript framework since, including EmberJS and Angular, copied it. Today we know 2-way data binding is gimmicky and makes programs hard to debug. This is the reason React uses unidirectional data flow.
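To make the contrast concrete, here is a minimal sketch in TypeScript/React (the component is invented for illustration; useState and onChange are real React APIs). With unidirectional flow, state lives in one place, flows down into the view, and changes only through an explicit handler:

    import React, { useState } from "react";

    // Unidirectional data flow: state lives in one place and flows down
    // into the view; the only way the view can change it is through an
    // explicit event handler. Nothing writes back to the model behind
    // your back, which is what makes the flow easy to trace in a debugger.
    function NameEditor() {
      const [name, setName] = useState("");
      // value flows down; changes flow back up as events through onChange
      return <input value={name} onChange={e => setName(e.target.value)} />;
    }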
Totally agree. I occasionally work on family members' Windows PCs and hate it every time. Everything is huge, I can't tell what is supposed to be an actionable element and what isn't, and it's just subjectively ugly.
Information density took a huge hit in the later Windows versions and I find it incredibly frustrating to use. The first thing I always have to do is figure out how to open the old version of control panel so I can actually accomplish something.
Glad to know I'm not alone in these thoughts; I thought maybe I was just being stubborn about adopting the new style.
I think Flat UI can look absolutely awesome; the problem for Windows is that Microsoft hasn't done it well. I don't know what it is exactly, but their new UIs look bloated, yet barren. It feels like there's a lot of noise, yet no functionality. It's certainly not helped by the serious UI layering MS has been doing for the last decade (take all the basic functionality, create a new UI for that, create a link that opens a dialog for the old functionality). Nowadays you have to go through 4 dialogs to access anything slightly advanced in Outlook.
Since then, details have improved, but the overall concept has never felt natural or appealing to me. It always feels like something I have to work against. (And I usually do appreciate reasonable white space, distinctive typography, and clear geometry.)
Can you show me some "absolutely awesome" examples of Flat UI? Our sense of beauty and aesthetics comes from the physical world, which contains gradients, shadows, dimensions and so on. Simple geometric shapes can be elegant, but I haven't seen examples of Flat UI that is awesome as well as doesn't suffer from usability issues.
The screen with the blue todos uses white text on a gradually lighter background; by the end the contrast is terrible, IMO.
Also the gradient looks tacky and pointless
For a to do list, I think that makes a lot of sense. The items at the top of your list are higher priority, so having higher contrast text and a more saturated background up there draws your attention to the stuff that's most important.
I'll grant that -- but it's also a to-do list, which is pretty much the simplest possible application. It's a common "hello world" for GUI toolkits, and it's the first use case in the org-mode tutorial.
Is there any software more sophisticated than 'a 3x5 card with a pencil' which uses this user interface paradigm? I think any paradigm, no matter how confusing or clumsy, is sufficient for that case.
I agree they can look awesome, but I still struggle to find actionable components when using them, so I find them to be counterproductive most of the time.
It is definitely possible to devise a flat UI visual language that would make anything clickable easy to identify, etc.
Two problems: (1) It takes a real and complex design effort, and (2) it's nearly guaranteed to be at odds with the "clean look" aesthetics.
I think it's the minimalist aesthetics that caused the fad of flat design and unidentifiable controls. It strives to make a page / form visually simple, thus hiding its real richness and the irreducible complexity stemming from it. In a misguided attempt to avoid cognitive overload it turns UIs into impenetrable puzzles.
There is a fundamental tension between "clean and simple" and "rich and comfortable". It's much easier to sell "clean and simple" and pretend that it's "intuitive" and does not take any learning. But once you've bought that, and want the power features, they are either not there at all, or are very well hidden ("have you tried triple-drag this to the right?").
Ideally there would be a switch between "beginner mode" and "pro mode", but there's little incentive from the market, because people are just used to putting up with the limitations.
I think the "clean look" is not aesthetic. It is like a white canvas that has just been purchased from an art supply store but has not been painted on yet.
> Today we know 2-way data binding is gimmicky and makes programs hard to debug. This is the reason React uses unidirectional data flow.
It makes sense for simple apps that are mainly rendering and outputting information/data.
Writing a React app that handles even simple forms is, however, a trip back to the early days of the web, where you gotta handle your own DOM events. I'm unclear on when repeating the same DOM handlers all over the place became "easier to debug".
That’s because React isn’t a form library, it’s a rendering library. People have built form libraries on top of it. Here’s one I’ve used and contributed to that builds two-way binding for form state in React:
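As a rough sketch of the idea (the useBinding helper below is invented for illustration, not any particular library's API): "two-way" form binding layered on React typically boils down to packaging the value/onChange pair, so the underlying data flow stays unidirectional:

    import React, { useState } from "react";

    // Hypothetical helper: bundles a piece of state into the value/onChange
    // pair a controlled input expects. "Two-way binding" built on top of
    // React is usually just this kind of sugar over one-way state.
    function useBinding(initial: string) {
      const [value, setValue] = useState(initial);
      return {
        value,
        onChange: (e: React.ChangeEvent<HTMLInputElement>) =>
          setValue(e.target.value),
      };
    }

    function LoginForm() {
      const user = useBinding("");
      const password = useBinding("");
      // spreading the binding wires up both directions in one go
      return (
        <form>
          <input {...user} />
          <input type="password" {...password} />
        </form>
      );
    }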
If you're going to use a "if React isn't going to handle X for me, why am I wasting 100KB on it?" argument, then it wouldn't be 100KB for very long, now would it?
For the record with Angular 2+ two-way binding is greatly deemphasized and not the default behavior. Everyone collectively learned that it’s not great for performance to do thousands of dirty checks a second. React definitely showed the way here. Nowadays you can implement the flux pattern with Angular using NgRx, but if you’re creating simpler things the binding concept is more intuitive. With Angular 9 you can breakpoint debug your template as well, so tooling is definitely catching up here.
>Today we know 2-way data binding is gimmicky and makes programs hard to debug. This is the reason React uses unidirectional data flow.
Uh, no we don't. I've spoken to a lot of engineers who curse Redux and the entire ecosystem around it. The reality is, 99% of apps don't need that kind of craziness. "Time travel debugging" is a meme. I'm tired of it being highlighted as some insane victory. Over what, exactly?
In Back to the Future (I can't remember which one), Doc says at some point: "Why are things so heavy in the future? Is there a problem with the Earth's gravitational pull?" If you told me he was talking about websites, I'd believe ya.
> If your native app is not going to look or feel any different than an app built using Web technologies why limit your customer base to Windows users?
BTW, it specifically mentions that WinUI supports React Native for Windows.
> Another issue from a technical pov is that XAML, MVVM and 2-way data binding are outdated.
UWP XAML emphasizes OneTime binding more than OneWay/TwoWay when you use the new static bindings (x:Bind rather than Binding); this has been a source of frustration for some people coming from WPF.
Here are the key developers behind EmberJS talking about why they moved away from two-way data binding: http://thechangelog.com/131/ (fast forward to 0:44):
"It is easy to end up in a state where you can't yourself explain how data flows through it. Even though both Angular and Ember have events and data bindings, data bindings feels so cool that people started tunneling events through data bindings. People are abusing two way data bindings to express something that is fundamentally an event. Ember 2.0 is moving away from two way data bindings as the primary method of communication to events. We added too much sugar around two-way data bindings and that led people to use two-way data bindings as an event bus."
"Silly" is a strong word. I imagine one of the points would be that a native app has direct access to your filesystem so you'd rather have users be cautious with it?
But I agree that having in native apps the UI patterns people are used to seeing in webapps is simpler for the majority of users.
I'd love to know what group or person at Microsoft decides what looks "good". I want to know why they decided to go the direction that they have, and why exactly they can't get everybody on board to go the same direction.
My guess is there was probably one design committee that decided that things needed to look different, because the "future is now", we can't remind people of older versions of windows because visually that isn't progress, and we can't be reminiscent of the "bad old days" when things crashed often.
Then after that committee makes its decree, everybody who has control over their own little GUI fiefdom decides "nah.. that looks terrible and we aren't going to be cohesive, so let's do our own thing over here."
And so, everything looks and acts a little different. Since not everybody got on board with the change, the original committee digs in their heels and says, "Well it obviously doesn't work because not everybody is on board with OUR design."
The problem with Microsoft is that they “simply have no taste.”
Windows has always had its Windowish / Microsoftish smell to its design. Much like Linux: you could spot a Linux desktop from 100 miles away. And everyone thought these were simple, that you could just copy them. Turns out there are trillions of small details that make it all coherent and beautiful.
I think I put it best: "Apple likes to make people think they have taste and that they're good at design, but they're actually really, really bad at design and their tastes are quite obviously very shallow."
Windows and all the Linux desktop environments that copied Windows have so much more functionality than the brain-dead macOS that can't even get basic features like window management right.
Imagine thinking an OS that won't let you easily maximize or snap application windows is any good?
You can do those last few things pretty easily, by the way, but the desktop philosophy of macOS is just very different. It functions a lot like a real desk. Things are sized however they are meant to be sized, and you kind of pile them up. I don't line up things on my desk edge to edge, maximizing space; it's not necessary, as some overlap and slop is fine. Apple has a lot of tools with sorta redundant functionality that make it easy to sift through the pile and see everything at once. Three fingers swiped up on the trackpad spreads everything out on your desktop, just like spreading out papers on your desk.
It might seem like an issue if you are coming from a tiling window management environment, but over time it becomes quite familiar and intuitive to use because of all the parallels with the physical world and how we interact with physical objects. I have tried the tiling window manager route, and it's more clumsy than anything imo.
Parallels to the real-world certainly make technology easier to learn, but not necessarily easier to use, and I'd argue it goes against the main benefit of computing - simplifying physical processes through abstraction.
You could design a VR environment that lets you walk around your virtual house, open your virtual front door, walk down your virtual driveway to your virtual mailbox, pull out virtual envelopes and open your virtual mail, but you'd just be replicating all the old real world cruft that doesn't matter to achieve your goal. What you really want is just to read some text in an email.
I had an older professor in college who still read and replied to all of his email via terminal, and he was multiples faster than those of us that used Gmail, but I remember thinking I'd never devote the time to learn to do things this way when the way I'm used to works just fine.
Apple has done a tremendous amount to make technology more approachable, and in doing so has brought millions into the digital age, but I do think the pendulum has swung a bit too far - a monitor isn't a desk, it's a monitor. Too much value is placed on "intuitiveness" these days, and while those real-world parallels might feel nice to interact with, there's a time to disavow ourselves of them to take full-advantage of what the technology can be.
I don't line things up on my desk because it is difficult. On Windows or Linux, however, it is the default and I get to see more. I don't think blindly copying the existing use case makes sense. In some cases it does, and in some cases it may be more intuitive, but there is no need to copy the limitations of our past when it is not beneficial.
For me depth is having emacs keybindings everywhere, support for good fonts and high resolution displays, and great touchpad support. Really basic but important stuff that seem to have been in short supply from other vendors.
> Windows and all the Linux desktop environments that copied Windows have so much more functionality than the brain-dead macOS that can't even get basic features like window management right.
The window management on macOS is light years ahead of anything on Windows. Ever try to deal with full screen windows on either platform? The macOS approach is a breeze: fullscreen windows get their own virtual desktop, and window chrome, titlebar, dock, etc. are managed by the OS consistently. Switching between fullscreen windows and regular windows is sane.
Meanwhile, the Windows approach to fullscreen windows is to just draw a borderless window on top of everything. A stupid approach that some linux desktops decided to copy for some reason.
> Imagine thinking an OS that won't let you easily maximize or snap application windows is any good?
macOS lets you maximize windows easily. Double-clicking on the titlebar works in most applications. Clicking on the maximize button makes any window a seamless fullscreen window in ways I wish a Linux desktop could.
Yeah, snapping might not be baked into the OS. But there are 3rd party apps available for this so it is not that big of an issue.
I like full screen on macOS because I usually get a 2 minute break to read the news while the junior programmer that I'm helping swipes furiously to find the full screen workspace that they lost. Also opening a popout Chrome devtools window on a Mac gets me a break because nobody knows what workspace it ended up in.
The reason macOS pushes full screen by default is that its idiotic UI design takes up way too much vertical space with the useless Dock (which isn't even half as useful as the Taskbar) and the awful, ever-present global top menu bar.
Honestly, I don't full screen anything on Windows or Linux unless it's a movie that I'm watching, because I don't need to.
> I usually get a 2 minute break to read the news while the junior programmer that I'm helping swipes furiously to find the full screen workspace that they lost.
Three-finger swipe up. Click on the desktop most to the right.
It's four fingers for me. I've not yet managed to train my fingers to click the desktop most to the right on their own; it still involves visual processing and thought. It'll probably come naturally to me in a few more years.
Well if you want to use spaces you need to use fullscreen, and since I find multiple workspaces invaluable I need to put up with the pain of using fullscreen.
Spaces works just fine without having anything in full screen mode. Just do a 3-finger swipe up to bring up the spaces bar and click the + button on the side of the screen to create a new space. Then you can drag windows to the new space.
The global menu bar is one of the things that macOS just got right. You always know how to access your basic functions, like closing the application, saving, etc., without having to look in the UI for that small button that does the same thing.
Another advantage of the menu bar is that you can always see which shortcut belongs to which action, because most items have a shortcut displayed next to them. On Linux and Windows it is trial and error, or looking them up online.
Also it makes the windows look a lot less cluttered.
I think that by "global" they mean that it's on top of the screen, rather than on top of the main window of the app to which it belongs. It's actually hella confusing for new users when they get to running apps in parallel, and switching between them. This is exacerbated by not having well-functioning maximize on windows, so a typical macOS desktop is cluttered with windows from several different apps - but the top menu is relevant to only one of them.
Shortcuts for menu items are normally shown in menus on Windows and Linux, right-aligned next to the label. For Windows, it's part of the platform UI design guidelines, and if you're using standard APIs to define your UI, any item with a shortcut assigned will render appropriately.
I used to move the dock to the side, but with dual 4k 27" monitors there was too much mouse movement to get across to it. Now it's back on the bottom and hidden, and I just use spotlight to launch everything
You can't resize it horizontally to fit the screen, such that icons don't shift position relative to the screen when new icons appear (and either enlarge the dock, or make other icons smaller, to fit).
Or that you still call your OS "intuitive" after shuffling keyboard shortcuts around so hard that not only is your OS running a completely different set of hotkeys from everything else you use, but so is every app you use that will run on it.
Or after making it a three-key shortcut to take a fucking screenshot.
Still, that's a common reaction from people who use Windows. I've used both Mac and Windows on a near-daily basis for the past ~30 years. Both have spent roughly equal amounts of time as my dominant platform. My observation is that maximising is such a default behaviour on Windows that most apps seem to be developed with that mode of operation in mind. Whereas Mac apps tend to be developed without that assumption. This in turn influences how people use apps on these respective platforms.
I have a smaller vertical monitor next to my main monitor, on which I full-screen my code editor. It looked weird at first but nothing beats having oodles of vertical space, enough to convey entire blocks of code at a time. And even though it's a verticalised widescreen monitor and a comfortably large font, I still have enough width for almost 150 columns of monospaced text—plenty for my personal coding style.
Full-screening code editors on a large horizontal widescreen monitor tends to have no tangible benefit other than to conceal clutter behind a bunch of useless whitespace.
I work with a similar setup, on my company-provided Mac. My IDE sits maximized on one vertical screen, with the other vertical screen divided up into two browsers and a bash console. The laptop screen is usually Slack, a database app, or recreational browsing.
Annoyingly, even at 1440p, the code window gets crowded, if I'm using both the file browser and builtin database client. Luckily you can double-click the tab for the file you're on and everything tucks away nicely.
Though I did grow up obsessing over Windows, so you're right about that.
And for the record, having to hold option to maximize is bullshit. I rarely use fullscreen mode.
Dude, you're missing out. I love using a four-finger swipe on the magic trackpad to quickly jump my side monitor between various spaces: my full screen code editor, a space with two web browser windows (typically showing monitoring tools), and a black screen.
> even at 1440p, the code window gets crowded
I loathe customisation, preferring everything to be as generic and standard as possible, but one thing I have done for my own sanity is assign F16–F19 to hiding and showing sidebars in all the various apps which have them. Typically F16 for left bars, F18 for bottom bars, F19 for right bars.
I've never been a big fan of virtual screens. I'm always losing track of what is where, and why do I need all that anyway when there's the taskbar (on Windows) or dock (on Mac)?
I know the shortcuts on mac and I don't even use them. alt+tab on Windows and Command-Up on mac are the habits I've been able to remember
I agree, traditional virtual screens are a horrible usability nightmare. Wouldn't ever use it on my main screen. But on a secondary monitor that is used for a narrow set of specific tasks, it's a brilliant way to jump between stuff.
A modern code editor (or better yet, an IDE) is also going to have a file browser next to the editor proper, and usually some other panes like an embedded terminal. You can also use the spare width for a minimap of the code on the margin - the larger it is, the more useful it is for quick navigation.
I run VSCode maximized on a 27" monitor, and the editor field is just about wide enough for 150 columns, with all this other stuff.
I coveted the minimap until I got a vertical side monitor. It's not a direct replacement for it, but it gives me enough vertical context to do the work I do without feeling blinkered.
(Sadly my code editor of choice doesn't offer that feature.)
Option-clicking the green button (usually) maximizes. Full screen in a separate desktop space may resemble maximizing in some ways, but it significantly changes the interaction model when switching between windows, especially when some of the windows are not maximized. Example:
1. Open two apps, at least one of which supporting fullscreen.
2. Open three windows in the app that supports fullscreen.
3. Fullscreen one window.
4. Cmd-tab to the other app.
5. Cmd-tab back to the previous app.
If your window from step 3 was maximized (filling the screen in the same desktop space), you would return to that window, but instead you are now looking at some completely random other window that you were not interacting with at all.
For people who cmd-tab (and cmd-`) frequently to switch between apps, this is super jarring and not at all what we intended to do.
In my experience, the green button would make things maximally tall, but not maximally wide. I did not have any extensions. As opposed to everywhere else, where maximize makes the selected window fill the full amount of windowed area.
And I'm reminded by other posters that they changed it to full screening (which is also not maximizing), complete with a disorienting animation, ugh. Pretty happy being Mac free again.
Yeah, but what does it really mean to "fit the content", if it doesn't fit the window at any size, but can reflow horizontally - as in e.g. most webpages? Mac apps seem to prefer increasing the vertical size until the window is as tall as it can be, but for horizontal, they usually stop before they reach full screen width - even if this would fit more content on the screen.
It's supposed to be "smart", in a sense that it gives you the largest window size that it considers usable. But that's too subjective an assessment to enable it by default, and even more so to not make it configurable at all.
No, that's for "zoom" - which doesn't work the same way for every window (after you enable under the Dock category of the system preferences for some reason). Many windows will zoom to the size of their content only, like the Finder window. Maximize is different.
> The problem with Microsoft is that they “simply have no taste.”
You can say that about Apple today too. After all, with iOS 7 they copied the Flat UI from Microsoft, including thin fonts, buttons that don't look like buttons, and so on. Although more recently they have removed thin fonts from their designs.
I feel that is what happens when you put an Industrial Designer in charge of Software Design.
I know it feels a little harsh, but Jony without Steve is a Designer without a Product Manager. Steve Jobs was great as an Editor and Product Executive.
The decisions made since Ive left have me hopeful for a better future at Apple. They're never going to be perfect, but things are going in the right direction. Their laptops are getting thicker, for crying out loud!
I just got a new 16” MBP after hanging onto a 2013 MBP forever (with a brief detour to Windows and running screaming back). It’s practically perfect in every way.
It was also the first time since the iMac days that Apple was reacting. They had been leading computing UI innovation almost everywhere since then... and then they had a me-too moment, trying to do flat UI right, the Apple way. Very odd.
It all comes from walking back the changes in Windows 8 without acting like they made a mistake. Nobody was prepared for Metro UI and it was widely disliked by users. But going directly back to Aero would make them lose face.
Why do you think it is that these sorts of things have been disappearing from software?
It used to be that we could change the colors and themes of our UIs, in this case all of the way down to the OS level. Nowadays it's a major feature for a bit of software to introduce "dark" mode.
I believe a part of this has to do with some software products wanting to control the look-and-feel of their applications as part of brand recognition, which runs counter to the idea that GUI applications should conform to the UI guidelines set in place by the operating system or desktop environment. Some people blame this on modern Web trends, but this phenomenon is not new. A lot of video games for Windows or the Mac never conformed to their platforms' UI guidelines. I also remember when Apple ported iTunes and Safari to Windows during the Windows XP era, and the controversy caused by these programs' refusal to conform to Windows' UI guidelines (interestingly enough, the Windows port of ClarisWorks conformed to Windows' UI guidelines).

Even Microsoft doesn't always adhere to Windows' guidelines. Microsoft Office 97 introduced the Tahoma font, flat toolbars, and other elements that did not adhere to Windows 95's look-and-feel guidelines, and although Office 97 could run on Windows NT 3.51, it didn't even bother conforming to the look-and-feel of that operating system, which still followed Windows 3.1-style semantics. (To be fair, though, the styling of Microsoft Office 97 was a foretaste of the styling of Windows 2000, which used Tahoma as the system font. Office 97 blends in perfectly with Windows 2000.)
This "look-and-feel-as-branding" philosophy has now entered parts of the Linux community, as evidenced in this public letter to the GNOME community titled "Please don't theme our apps" asking Linux distributions to not apply custom themes to GNOME applications (https://stopthemingmy.app).
Personally I'm a major proponent of native GUI applications conforming to their platform's UI standards, and I'm also a proponent of users being able to theme their environments. In my opinion the purpose of personal computing is to empower the user. Users should be able to control their work environments and their workflows as they see fit. Unfortunately I feel that this philosophy of user empowerment has been slowly challenged, where the user experience is being controlled.
I used to run the application skins section on deviantart way back in the day. What is now marketed as "dark mode" is what every kid would put up as their first skin. "I turned white black and black white!" It always amuses me to see people absolutely freaking out when programs add this feature today.
To be fair, the feature today is not so much "dark mode" per se, as the ability to centralize the setting so that everything can follow it - especially the websites. I remember using dark OS themes in early 00s, and it was always annoying when you opened a browser or a PDF, and it drowned out everything else with a field of white.
I think there are a lot of potential answers, on a case-by-case basis.
Often people don't want to maintain an old code path. Sometimes people get into that line of thinking without clear understanding of just how many people like what the old code path is doing for them and why. Sometimes they also over-estimate the maintenance cost of the old thing.
In the case of win8, there was a strong drive for "we're doing a tablet now", very little intuition about how their thing was going to be received in the market, and the company culture and talent pool was already markedly different from the one that produced their 1990s greatest hits, so it's not like they were automatically going to reproduce prior success.
Also notable that in Vista and Win7, turning on the classic look disables dwm.exe, the compositing engine, and some GPU-based optimizations.
Part of it is laziness, part of it is this lurking impression that users should be coddled and are too dumb to hold the reins on a machine they own outright.
It was disliked by users but designers really loved it so they forced it everywhere they had an ounce of power. Notice how UIs that didn't have to please some designer didn't bother with flat UIs.
No, not at all. Metro was tiles, sliding overlays, and everything full screen. It works on tablets but poorly on desktop PCs and most laptops. I don't see anything really like it in the "work computer" domain.
I believe Flat UI was the invention of Albert Shum (as reported by CNET at the time, he came up with Windows Phone Metro UI). You can see his articles here: https://medium.com/@alshumdesign
> "nah.. that looks terrible and we aren't going to be cohesive, so lets do our own thing over here."
I wish this were the case. Instead all Microsoft products have switched to the ugly Flat UI, including Word, PowerPoint, Excel, etc.
The thing is, flat UI worked really well on the phone, which it was designed for.
It's just that some idiot manager came up with the "unified experience" idea and shoved that shit onto desktop Windows without any time for polish.
Ironically, I think pretty much anyone's first experience with Metro was Windows 8 on a PC, which was such a disaster that people wouldn't even give Windows Phone a try, because it had the same UI that they had learned to hate. Not to say this is the reason Windows Phone failed, but it was probably one of the many factors.
Windows Phone had the best UI of the three major phone operating systems, in my opinion. The Metro UI tile system is a great home screen system, and the layered swipe left/right process made a lot of sense.
At the same time, there are other people who like the newer UI style much more. I think it looks and functions better. These are matters of personal opinion, including the end judgments of whatever team created it.
The argument that the article brings forward is that flat design is harder to figure out because it lacks some of the real-world metaphors we associate with interaction ("things that look raised can be pushed down", "things that have depth can be filled").
Has there been research done on whether these difficulties occur in all age groups? Because I can easily see this being an artifact of tradition.
People who don't use a lot of analog devices may be confused by a skeuomorphic interface, and may appreciate the simplicity of a flat design.
You are making the "digital natives don't need the crutch of physicality" argument.
Here's what tobiasandtobias had to say about that in their blog:
Even 'digital natives' live in the physical world. We start learning how it works before we ever touch a computer, and even the most dedicated nerd spends more time interacting with physical objects than with digital interfaces. It doesn't take additional learning to know that an object casting a shadow on another is in front of that other, for example. Failing to leverage that existing knowledge is tantamount to shutting down whole swathes of users' brains.
Fair warning: I'm well into my 30s but I often feel like an old ranting fart.
Every time I read of some new amazing framework like WinUI or SwiftUI or whateverUI, I always stop and wonder: will applications written with this thing today ever run unaltered 10+ years from now?
Microsoft has been doing an amazing job in these past 30+ years in keeping things working: stuff written on Windows 95 can still run without any particular issue on Windows 10.
Sure, there's a bit of "cheating" here and there to keep the compatibility working, but the bottom line is: there's backward compatibility.
As an example: if a developer wants to use Win32 to write something on Windows, it's still possible and it _IS_ still supported, with APIs that basically didn't change for like... ever?
Apple seems to be on the far side of the spectrum: they have amazing technologies but - as a developer - I always feel I have an immense amount of work to do just to keep up and allow my code to keep working as it used to.
So, the question for the community here is: how can we balance the "sugary", which makes things as accessible as possible, with the "stability" that makes them sustainable?
Sadly, most OS X developers notice way too late that the constant pressure to support what feels like 100 different OS versions makes their app financially unviable.
At least that's been my experience: Plenty of OS X apps abandoned after each major update.
Can you name at least two apps that have been abandoned? That hasn't been my experience at all with macOS.
I don't have a single app that was abandoned. BBEdit, 1Password, Little Snitch, MailMate, Tweetbot, Typinator, Rocket, and many more are all still updated when need be. Key note is "need", stable apps do not need constant yearly updates.
In addition, you can say the same thing about Windows. Metro, UWP and previous attempts didn't go anywhere either, their MS Store is pretty much a flop. (Granted, MAS as well).
Linux...it's not bad but there are issues with various UI frameworks, stores, etc. GTK/QT, Flatpak, snap, etc.
As far as I know Caffeine still works, you just need to manually enable it in Preferences.app (I remember getting it to work on Mojave, but my MacBook has been on a shelf collecting dust since I updated it to Catalina two months ago)
I don't know. My library has since switched to the Tolino app.
And since most of my pro audio apps also stopped working on OS X and the USB dongle works the same with Windows and Mac, I myself also abandoned OS X for productive work.
> will applications written with this thing today ever run unaltered 10+ years from now
I usually wonder the same, but less so when it's a fairly large/established framework from MS, exactly because of having Win32/COM/... in mind; they are known for many things, but less for suddenly dropping backwards compatibility. We have 10+ year old WPF code which still does what it used to do and builds with the latest IDE just as it did with the first one we used it on, and it doesn't look like it's going to stop functioning soon. But yes, it's not quite 30+, so only time will tell..
At my workplace (vaccine R&D), we're using software written for XP on W10. We actually just double-click the original installer that came on a CD and it works out of the box. Basically, if you purchase software and the vendor goes out of business, or they want you to pay for Version 2 (so it works with OS Version 2), or if it's an F/OSS app and the project is abandoned, etc., that shouldn't affect your ability to still run the software you purchased, decades later. I don't believe any other platform comes even close to this kind of backwards compatibility. Not saying Windows is perfect, it's just far, far ahead of the others.
That's just Microsoft's business model though, it works great for businesses like in your scenario, but not too well when they spend a decade trying to abolish a crippled browser. No one comes close because no one is trying. Also, saying that Windows is far ahead in staying behind sounds oddly poetic.
I don't see how helping users maximize their investment in software is "staying behind". But anyway, can't really argue with an opinion. If it was some trivial $50-100 piece of software, then maybe an argument can be made. Some of the stuff we buy is expensive to the tune of $10,000, and even more sometimes. There is a lot of industry specific software that is super expensive. On the side, I don't care if they want to also peddle some new UI framework or whatever other snazzy thing that the kids want these days :P. Just let me use the software I bought for as long as humanly possible. Microsoft gets this more than any other vendor, and so they get our business, and maybe a small amount of good-will.
Not even Microsoft brags about their 30 years of bloated software, and their idea of mending their past mistakes is xkcd.com/927 every time. Microsoft maintains archaic software because it still means profit for them, not because of the developer experience. Realistically speaking, no one wants to use or maintain (let alone brag about using or maintaining) Win95 software unless they have a serious case of Stockholm syndrome. PS: Do you want to install updates? XP SP2 is ready to install.
Clicking through to the repo behind it, I see this in the README:
Data/Telemetry
This project collects usage data and sends
it to Microsoft to help improve our products
and services. See the privacy statement for
more details.
For more information on telemetry implementation
see the developer guide.
This has probably been my biggest peeve with C#/.NET so far. I actually like the language(s) and the ecosystem, but why does everything from Microsoft need to contain telemetry? Like, Visual Studio, okay: it's a fairly large application and architecting it might need some intel on how people use it (even though I believe you could do without, but I'm not an expert). But the dotnet SDK? That's just a punch in the gut, especially to developers, who ought to be more enlightened about tracking/privacy concerns than average users.
And I know you can opt out (the DOTNET_CLI_TELEMETRY_OPTOUT environment variable) but I've done that and still seen dotnet make network requests on runs. Maybe it's essential for the functionality, but I'm having a hard time trusting a company that puts telemetry in their SDK.
Which raises the question: who else is doing this? I know JetBrains has some telemetry in their IDEs (although, like I said, IDEs are more justified I think, and I haven't tested their opt-out). choosenim also recently asked me if I wanted to report anonymous usage data. What about npm? The Rust stuff? How much hidden telemetry is there inside developer tooling, and what can we do against it?
It would be neat if there was some sort of database for this. Kinda like what Exodus Privacy[1] is doing for Android apps.
I'm not aware of any low-level devtools having any kind of telemetry. Rustup, cargo and rustc don't have any telemetry. Rustup used to have some ages ago, but it was removed.
Similarly, NodeJS/NPM, LLVM (clang, lldb), GNU (gcc, ld, gdb, glibc), Java (OpenJDK), etc... none of them have any telemetry as far as I know. If they do have some, google isn't being helpful at finding out about it.
Honestly, I'm really getting tired of this peeve that everyone seems to have. Every site is practically tracking everything. Emails are not private. Every damn mobile app is trying to access all it can. COVID-19 isn't doing any favors either.
And when someone creates an open source product with this clause in the README right up front along with the option to opt-out, people get peeved about it?
I can understand if the norm is to NOT collect data. But since the world doesn't seem to work that way, despite what some techies would want (since all this tracking has been created by techies anyway), I think it's time to grow up and face the real world for what it is - avoid where we can, adapt in our own ways where we cannot. And try and not get peeved.
I'm fairly certain that a lot (if not most) regular tracking can be avoided today. Sure, you can never know for certain what exactly you are leaking and what not and total anonymity is pretty much impossible. But I dare say, most tracking is probably low-effort and can be circumvented (especially if you are technically inclined). So your picture of the world may be your reality but certainly not everyone's.
But apart from that, even if the world was really on fire in terms of privacy, it would still not be an excuse to shrug everything off and "give up already". You wouldn't sign off your rights and shut up just because everyone else does, right?
I agree, harassing developers for including telemetry in their programs is absolutely disgusting and not the right way to push privacy. And usually the best response to terms you don't like is to just not agree to them and not use the app.
But in this case, I think it's a bit far-fetched: The dotnet SDK (for instance) is not overly complex (as in: the CLI; crash reports are a different topic), has a nice public GitHub repo and is only used by developers who, if they don't like something, probably know how to file an issue on said GitHub repo. So why does it need telemetry? I don't know what's going on in the dotnet team and maybe there's a good reason behind this choice but from my POV it just seems like it's unnecessarily shutting people who disagree with it out of .NET. And I think this is kind of an injustice to .NET as well (since - and I can not stress this enough - it is really good!)
Alright, </rant>. I'm not angry but "popular apps" (let's just call it that) have largely become a minefield for me, so I just avoid most of them. It's kinda sad that developer tooling is now also shifting in that direction.
The norm isn't to collect data when it comes to development tools. In fact, Microsoft is very unique here. As I said in another comment, they're the only SDK in widespread use that has telemetry that deeply ingrained.
This has been a cultural difference I've noticed between Windows and Mac since the 1990s: a list of features, but somehow forgetting to show what it looks like. A few screenshots go a long way toward getting a feel for it.
I don't know about anyone else but when I get a new program and it turns out to be using a "modern" looking API I just assume it's going to be awful. Not just the UI but everything about it. I've rarely been wrong.
This is my gut feeling too. Every time one of these awful minimal flat interfaces pops up I know I'm going to spend ages trying to find where they've hidden all the functionality just to keep the UI 'clean'.
It's weirdly nostalgic, like playing the original "Wolfenstein 3D": there were effectively no visual cues for which wall panels were secret doorways so to find all of the goodies I ran along the edges of rooms pressing each panel in turn, endlessly hearing the flat farting tone that confirmed that yet another one did nothing useful.
Note to UI designers: nostalgia doesn't necessarily mean "good". I can be nostalgic for my grandmother's cabbage soup, doesn't mean I want a daily banquet of the stuff.
In particular, Microsoft has been pushing a "modern" UI which has been a disaster. That is, UWP apps, Windows store, etc. Inevitably if there is a UWP app and a real Windows application, the real one is head and shoulders better.
You'd think Microsoft would have learned that "modern" has the stench of death.
This peaked in Win8, but it's been a lot fuzzier since then. Desktop Win32 apps can use UWP APIs, and can be distributed via the Store (complete with updates etc.), for example.
Interestingly, I think this is the opposite in the Mac ecosystem. If an app uses the latest APIs and design trends, I expect it to be well crafted and cared for and am also rarely wrong.
I assume it's going to ignore the system-wide autocorrect settings, and lack any option to disable spellcheck, which is one more reason to use the web version. With Teams I just rename "node-spellchkr.OFF" each time it updates.
Yup. I see the three-pane layout, I forget it and move on. I don't need a layout made for an iPad on a desktop device. A drunk can click a line of text with a mouse. Give me a compressed, information-dense list view without triple-spaced lines. Give me options rather than stripping features. Let me make my own decisions about how this tool should work for my purposes.
There's something strange about the new flat UI that Windows uses that doesn't convey that my actions are doing anything.
For instance, making changes in Settings feels... flat... I can't tell if something is loading or if my system is hanging or if nothing happened! I really don't like it.
Personally I've always disliked settings panels that apply the moment you click a control. I like being able to hit OK, or better yet, click Apply and see the button then get greyed out as a little visual confirmation that the settings have applied.
I can tell you that that behaviour is learnt. People unfamiliar with classic desktop paradigms find it annoying and confusing to have to confirm a choice they've already made.
The most intuitive setup, imho, is to apply the change immediately, but also materialize some sort of UNDO button to revert back.
Sometimes you are making multiple changes at once, at which point it's better to have an Apply. Take a game: change the resolution, delay. Change the textures, delay. Change the shadows, delay. Change vsync, restart. I could wait the 30 seconds between every click twiddling my thumbs, or I could just click it all at once and restart the game.
This makes sense when applying changes is expensive. If applying the changes is cheap (or invisible while the settings dialog is open) then you don't need the confirm step and you can do it immediately or when the dialog is closed.
Windows itself is an offender in that regard - e.g. the setting to select when updates are (and aren't) installed is a modal dialog that itself requires confirmation. And on top of that, it has two time pickers, each of which has separate scrolling columns for hours and minutes, but then you also have to press the "Confirm" button to actually apply changes after you scroll. And if you just click away to get back to the dialog, it 1) doesn't prompt to save changes, and 2) doesn't apply them.
I've found that having a good undo is far, far better than a commit step. I think the best UX would be:
- Apply changes immediately.
- Provide an indication that the change was applied (maybe pop a checkmark out for a second or two).
- Provide an undo that at least reverts to the state when the dialog was opened, but ideally can even revert to snapshots from before previous changes (so that if you close the dialog and then decide you don't like it, you can go back). A sketch of this follows below.
COMMIT is useful in databases because you have made a "permanent" change and for atomicity reasons. If you make undo effective and there is no concurrent access, it is unnecessary.
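A minimal sketch of that apply-immediately-plus-undo idea, in TypeScript with invented names: every change is applied at once, but the dialog snapshots the settings before each change so the user can step back, at minimum to the state from when the dialog was opened:

    // Settings apply immediately; each change pushes a snapshot so the
    // dialog can offer undo all the way back to its state when opened.
    type Settings = Record<string, unknown>;

    class SettingsDialog {
      private history: Settings[] = [];

      constructor(
        private current: Settings,
        private apply: (s: Settings) => void, // pushes settings to the app/OS
      ) {}

      set(key: string, value: unknown): void {
        this.history.push({ ...this.current }); // snapshot before the change
        this.current = { ...this.current, [key]: value };
        this.apply(this.current);               // apply immediately, no OK step
      }

      undo(): void {
        const previous = this.history.pop();
        if (previous) {
          this.current = previous;
          this.apply(this.current);             // revert is just another apply
        }
      }
    }

Closing the dialog would simply drop the history; "revert all" is just undoing until the stack is empty.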
This is especially annoying on a slower computer that takes time to activate the individual controls. I would rather set them all and then have the PC take a moment to apply them.
Yeah, MS has been very inconsistent of late. They went from Chen-style decades-long support for foundational tech, to a whirlwind of different things, promoted then chucked aside like they were a Google product.
Not really, a bunch of apps from the WinRT era are very broken under Windows 10. It’s disappointing given how much from the early 90s still works, but code from 2013 is now totally broken.
Silverlight is where things have started to go downwards wrt long-term stability. But Win32, MFC, and WinForms are still very much alive and well. I mean, WinForms even got .NET Core treatment and high/dynamic DPI support.
Yes. UWP already mostly uses WinUI 2.0, I think. 3.0 will expand the kinds of applications that can make use of the component library and provide a way to gradually incorporate newer controls into older technologies (WinForms, WPF, etc.).
Is there some WinUI 3 application code I could compare with AppKit/Qt/whatever? I looked all over, and surely must have missed something obvious, before getting lost in XAML Islands, WinRT, WinUI 2 and other random shit.
There aren't any as far as I know (if you're asking for actual real life code), WinUI 3 is still very alpha stage. The first actual release is going to be later this year with .NET 5.0 release.
Even then, we won't know for a while, as most devs aren't likely to switch on day one, since the switch will have to be justified against a currently working, stable codebase.
Apparently there’s an app you can install (if you are running Windows) through the Microsoft store that demos each component. Why wouldn’t they just throw up some screen shots? Beats me.
Honestly, this looks terrible. Even though Windows hardware has gotten much better over the years, and Windows 10 is a much nicer OS, you can tell that the developers in their UI frameworks don't sweat the small details.
All of the screenshots are lowish resolution. In the second screenshot, the spacing on the icons is funny (more bottom margin than top). Blue on dark gray is hard to read, even for someone with young eyes like me (I cannot imagine my parents enjoying looking at that). In the first screenshot, the code isn't syntax highlighted (I also think the font is not monospace, which is just appalling).
Anyway. I installed it and just at the start i notice something i really dislike: they slide a panel of text and the sliding is slow (in terms of sliding speed, not performance) and done in a subpixel manner, making all the text blurry and unreadable until the sliding finishes.
Also i noticed a bug in the "AutomationProperties" demo (second one) - if i use the wheel to change the size (no indication about being able to do that btw, unlike spinboxes in Win32) and i also have the "page" scrolled, both the page scrolls and the value changes.
In general scrolling seems to be broken - if i use the wheel to scroll, the page scrolls normally until a code block with a horizontal scrollbar enters into view at which point scrolling only scrolls the code view horizontally. Now i might be wrong but i do not think the mouse wheel should scroll a widget horizontally (this is what the horizontal wheel scrolling - that is wheel tilting - is for) and even if it does, it should only be done when the widget is not inside a scrollable container. Because as it is right now, i have to move my pointer to a "safe" area outside the code box to keep scrolling the page. Also regardless of horizontal or vertical scrolling, IMO if you reach the end of the scrolled area in a widget that is itself inside a scrolling area, then the outer widget should continue scrolling.
Finally on scrolling, the scrollbars do not seem to respect the shift+scroll behavior (normally clicking on the tray above/below the scrollbar thumb moves a page up/down, but if you shift+click it moves the thumb at the position you clicked like middle clicking in most X11 toolkits - however this doesn't happen in the demo).
Another thing (not a bug), i mentioned above that there isn't any indicator like a Win32 spinbox, but in the NumberBox demo there is a spinbox - and if you focus it, the spin buttons become enormous wasting almost half of the spinbox's editable area. Also - and this is a bug - if you click just left of the spin buttons before they become enormous, the big ones appear and then disappear immediately and the spinbox loses focus.
Funny enough, this is the "compact" configuration for the spinbox, although i guess it makes sense since the "inline" one (the alternative configuration) is even bigger. But at least it doesn't change size nor loses focus arbitrarily. Ideally it should be the "compact" one without changing size or focus unless you used touch input. Windows can differentiate between mouse and touch input, why doesn't Microsoft take advantage of this?
Another mouse wheel issue, now with the RadialGradientBrush sample (i'm testing those in succession)... the sliders do not respond to mouse wheel events, meaning i can't change them with the wheel. However i'm not sure if it is the slider not understanding the wheel events or something else, because the wheel behavior is weird in the panel there: if i click somewhere in the page outside of any panel (e.g. at the "Paints an area... blahblah" text at the top) and use the wheel over the MappingMode combo box, the values in the box change. But if instead of the MappingMode combo box i use the wheel over the SpreadMethod (that is, i click on the text at the top and then use the wheel over the SpreadMethod as the immediate next thing), then the page scrolls. However if i click on the SpreadMethod combo box and then use the wheel on it, the values in the combo box change (as expected) instead of scrolling. Note that i didn't have to click the MappingMode combo box for it to use the wheel. Also the slider does not change with the wheel regardless of me clicking it or not.
Another visual glitch, in the TabView demo: if you close tabs and the remaining tabs do not fill the TabView line, the tabs line doesn't fill the entire width but gets shrunk to seemingly random sizes. Note that if you close all but one tab and then create a new one, the tabs line does fill the entire width. Also, if you close all tabs and open a new tab, none of the tabs are active and the content area remains empty. This might be intentional, but i doubt it, since you can't "select no tabs".
Another bit of weirdness, with the TeachingTip demo: clicking the "Show Teaching Tip" button shows that teaching tip (i guess this is the new name for tooltip?) normally, but if you try to close it, it doesn't respond to mouse events (click, hover, anything) if it is at the area at the top of the window where the titlebar would be (and also that area doesn't seem to respect the system settings for its size - i have changed the setting in the registry to use a smaller area because i find the default titlebar size huge, but this demo ignores that setting). This means that i can only click the "X" button to close it from the middle and below. Even more weird, there is a small patch of pixels at the top left side of the button that accept mouse input... perhaps there is a hardcoded area at the top right of the window in case a window has all titlebar buttons visible (minimize, maximize, close and context help)?
A minor visual issue, the "three-state" AppBarToggleButton uses a background color shade to indicate its state. However the third state (neither checked nor unchecked) looks 99% the same on my PC where i use a gray "accent". Using an image editor i see the checked state uses an RGB value of 63, 63, 63 and the in-between state uses an RGB value of 57, 57, 57. On my PC they practically look the same.
At this point i lost interest, pretty much every single demo has some issues (except the AppBarButton, but if you can't even get a single rectangle to respond to clicks i'm not sure why you'd even bother working on UIs). I noticed a glitch in the menu bar demo (e.g. if i scroll the page and press space, the file menu opens... outside of the window) but i do not feel like investigating more or trying anything else.
Yeah, sure, i guess this demo did its job perfectly fine: it showed me why i'm good with sticking with Win32 :-P
I'd like Microsoft to succeed with a new vision for the Windows desktop but IMO the current style isn't it.
As for the framework itself, it seems needlessly complicated to get started. The XAML based frameworks use a whole different paradigm from web development or classical RAD tools.
And it's hard to take Microsoft UI frameworks seriously when they hardly use them outside of the core Windows apps. VS Code, Teams, even the VS installer are all Electron apps.
"And it's hard to take Microsoft UI frameworks seriously when they hardly use them outside of the core Windows apps. VS Code, Teams, even the VS installer are all Electron apps.
"
that's the problem. How can you have faith in their frameworks if they don't use them themselves?
I really tried to get into WPF a while back. I learned PRISM and MVVM and tried to go all in. What I found is that most of the time when I build a tool or a Windows app, it's usually a tool for automation or data manipulation. Using MVVM made the UI take 3x longer to build, it usually (and this is me admitting I'm not a designer) would not be as easy to use or read as standard WinForms, and honestly it was complexity for complexity's sake. I still use plain old WinForms and C# and can make really usable, skinnable UIs with my DevExpress controls. Maybe if I were building an app that had requirements for chat, multimedia or some other such use case, the XAML UI paradigm would make sense.

I don't like Flat UI, and I personally think that the UI in Teams and OneNote is a step backwards in usability. I had a PC that for a specific reason needed to have hardware acceleration disabled for the video. What that meant was that OneNote was broken: the UI would not display correctly, and the note pages were blank unless you grabbed the UI with your mouse and moved it slightly. WinForms just works on every Windows box...
You don't need to use MVVM for WPF - it's a design pattern, not a framework requirement. If you prefer the typical WinForms approach of just handling events directly and manually managing the data flow and actions in those handlers, it can totally do that.
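To make that concrete, here's a minimal sketch of that event-driven style in WPF code-behind, assuming a hypothetical window whose XAML declares a Button named SaveButton and a TextBox named NameBox:

    // MainWindow.xaml.cs - no view model, no bindings; just handle the
    // Click event and move data around manually, WinForms-style.
    using System.Windows;

    public partial class MainWindow : Window
    {
        public MainWindow()
        {
            InitializeComponent();
            SaveButton.Click += OnSaveClicked;   // wire the handler directly
        }

        private void OnSaveClicked(object sender, RoutedEventArgs e)
        {
            // Read the control, act, write back - no INotifyPropertyChanged.
            var name = NameBox.Text.Trim();
            Title = $"Saved: {name}";
        }
    }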
It's such a shame this isn't a cross-platform UI toolkit. I'm not implying that I think this is the best toolkit or anything, but we can definitely use a bit of competition in that space (although kudos to Qt, they're doing an excellent job).
Not really the same, but it is at least designed to provide UI components to cross-platform frameworks (React Native, Xamarin.Forms, etc.) to make them look more Windows-native.
It’s very bad because if you’re a C# developer you now have to worry about which COM apartment your Window is in, and you get Silverlight-era error messages when something breaks.
Linux, amazingly, has some of my favourite UIs right now. It used to be different. I used to hate the Linux UIs, and preferred Windows 7 or OS X. Now that Win 10 is dominant, and I can't justify a new MBP (though I still love my 2015 MBP), I've jumped to Linux Mint. The UI is clean, easy to use, uncluttered, has obvious affordances. It isn't quite as polished as Win 7 was, but it's better than this new crap.
The main issue with UI on Linux is that toolkits have higher churn than on Windows - this is because there isn't really "Gtk", "Qt" or whatever, but "Gtk2", "Gtk3", "Gtk4", "Qt4", "Qt5", etc. (and if I mentioned "Qt3", I'm sure someone would point out that nobody uses that anymore :-P).
Writing something in Win32 ensures it will still run 10 years from now, and chances are it'll also run 20 years from now. As a user you have a very high chance of finding applications that work regardless of when they were made. As a developer you can focus on your tasks without wasting time "upgrading" to the latest fad (Microsoft does release new toolkits, but the old stuff keeps working and Win32 is the most stable of them all). And your knowledge will still be perfectly valid; no need to waste time learning how to do the exact same stuff in a different way.
Writing something in Gtk3 (the current stable one) will ensure... well, once Gtk4 is stable, Gtk3 will be deprecated and the one to avoid. Qt couldn't remain stable in the long term even if they wanted to (though they do not seem to want to - after all, Qt is middleware targeted at application developers, not a platform component, regardless of whether KDE pretends that it is) because of C++. As a user you are limited in the applications you can find and have to keep around all sorts of different toolkits - which wouldn't be that great of an issue (hard disk space is cheap) if those toolkits were still supported and compiled. Except the only distribution I'm aware of that provides - say - Gtk1 is Slackware, and Qt4 is already on its way out in most distributions. Gtk2 will soon follow (and perhaps some distributions have already dropped it). As a developer you have to chase after the breakage that Gtk and Qt developers introduce (though TBF Qt seems a bit better here, probably because they get paid by selling Qt licenses and they can't annoy the developers using Qt too much) and waste time "keeping up" just so that your existing application keeps working.
Above X there aren't any stable UI APIs (unless you count Athena but that hasn't received any updates since the 80s - even the updates 3rd parties made during the 90s to improve its look a bit never became part of the "official" distribution). And people want to get rid of X's stability too with Wayland.
Yeah, it is kinda amusing that one of the stablest ABIs for GUI applications on Linux is the Windows one :-P.
Sadly the Wine GUI controls, while they work, have always felt off to me and have multiple minor issues. But at least they work. I think a big reason is that despite libwine being usable as a general-purpose toolkit for (ELF) Linux applications, it never saw much use as one, so these issues were never paid much attention.
Vulkan would make things more complicated and won't really fix anything (if anything, it'll make things worse, because that added complication will make it harder to fix things and will most likely break compatibility with anything not supporting Vulkan - e.g. older and/or weaker devices and other OSes like macOS), since it isn't really rendering that is the issue but behavior (there are also some drawing issues, but those are at the GDI level and you do not want to break GDI).
> Qt can't even remain stable in the long term if they wanted (though they do not seem to want to - after all Qt is middleware targeted to application developers, not a platform component regardless if KDE pretends that it is) because of C++
> So, what did we learn? It did not take that much effort to port tutorial 14 from Qt 1.0 to Qt 5.11. It probably took me longer to write this blog post.
Qt 1 was released in 1995; Qt 5 was released in 2012 (and has not broken ABI/API since then).
I think it's not unreasonable to spend a few days every 7 years to update your app to the new Qt major version... especially when the hardware evolves and you have to update to follow that anyways (touch screens, hidpi, permission requesting).
This shows exactly what I mean (and I did note that Qt is at least better than Gtk here).
An application written in 1995 against the Qt1 API will not work on Qt5 because both the API and the ABI have been broken. A user who wants to run such an application (not necessarily one released in 1995 - Qt1 was in use into the very early 2000s, even though Qt2 was already out) is not able to do so. A developer who wants to modify such an application has to become familiar with both Qt1 and Qt5.
In contrast, an application written in 1995 that uses Win32 will run out of the box on Windows 10 without any changes. A user can run such an application perfectly fine (and personally I have several applications like that that I'm using) and a developer who wants to work with the code will be able to focus on the application itself (since chances are he already learned Win32 in the past if he worked on any other Win32 program). As an example, some time ago Microsoft released the source code of File Manager, which dates back to ~1990 - I was able to navigate that code easily and added the ability to filter a window's contents using multiple wildcards.
(note that in both cases I am assuming the applications do not do anything stupid that happened to work at the time but doesn't work nowadays)
An application may not even be maintained any more. Or it may be maintained but the user(s) might dislike the changes (e.g. I like Paint Shop Pro 7, released in 2000, but I never liked the UI overhaul that PSP8 got, and nowadays, after Corel bought the software, they have severely... well... gimped it :-P - similarly, I do not like the changes introduced in Blender 2.8 so I stick with 2.79, and I know a very talented artist who still uses Photoshop 5 - not CS5, plain 5).
Those few days are also very optimistic and really depend a lot on the software in question. AFAIK Lazarus, for example, took ages to update from Qt4 to Qt5 (same story with Gtk, but much worse due to all the breakage it introduced - the Gtk3 backend still isn't really usable). And as I wrote above, most of the time they are a waste of time. Adding support for touch screens and hidpi (I'm not sure what you have in mind with permission requesting, so I'll ignore that) doesn't require breaking your API nor your ABI.
EDIT: BTW, it isn't impossible to have Windows-like backwards compatibility on Linux. Here is a screenshot[0] from a toolkit I used to work on in the past, showing the same binary working on both 32-bit Red Hat from 1997 and 64-bit Debian from 2017 - essentially 20 years of binary backwards compatibility (the colors are off because I didn't bother to implement indexed color support). It is just that the toolkit maintainers do not care, but it is perfectly possible if you care.
So to some extent this just reflects my prior ‘commitment’ (it doesn't feel like one at this point, because it follows from many, many preferences about what I like to use) to a Linux/free software life, but I just... don't use unmaintained software and the idea of doing so kinda disturbs me.
Maybe art stuff is kinda special here?
On the ‘disk space is cheap’ side of things, packaging strategies like those of Nix and Guix can go a long way toward solving the human-level problem here; I can install packages that have long been removed from NixOS on current NixOS systems just by pointing to the old copies of the package definitions. That means that if I want a Qt3 application and Qt3 has been removed, I can install it just by pointing my package manager to a copy of the package definitions from way back when that application and its toolkit dependencies were still included. App containerization stuff like Flatpak also seems like another good way to mitigate this issue without committing distros to eternal maintenance of each major version of each GUI toolkit.
What do you think? Is the future for backwards compatibility of end-user applications on Linux brighter than its past, despite the persistence of the API- and ABI-level churn problem?
Honestly, I do not think that bundling everything into a container is really a solution - more of a hacky workaround for a problem that shouldn't exist. It does solve the "old program doesn't run" issue, but doesn't solve anything else.
For example, your old program doesn't receive new features through the shared libraries it uses - a simple example I gave elsewhere was old Windows applications receiving the ability to input emoji in recent updates to Windows 10, even though emoji wasn't a thing (outside Japan, at least) when most of them were written.
It also doesn't solve the issue at all from the developer's side. A developer who comes to an application later still has to find old development libraries and make sure they work on his modern system, and/or update all the code (perhaps without even knowing how the original program worked). And developers who maintain an application still have to waste time and resources keeping up (see Xfce as an example).
FLOSS doesn't really solve this - if anything, your average (big) distro repository is full of programs using several different "versions" of toolkits and other APIs. At "best" (from the distro maintainers' perspective, not necessarily from the users' perspective) the distro drops applications. It does help with having the potential to fix things, but that is all there is to it - the actual effort (and thus time) needed largely depends on how much has broken.
As a personal example, a few years ago I decided to try one of the very first web browsers made, MidasWWW. I downloaded the code and tried to compile it. But of course it didn't compile out of the box; having the source wasn't much help here. Now, I am a bit persistent, so I decided to fix it and after a while I ended up with this:
This is an example of what I mentioned in my comments here: the browser was written in Motif, and Motif has generally been API- and ABI-stable over the years (it'd be nice if its theming support were improved - without breaking backwards compatibility, of course - as I think that would make it more popular among developers), so I barely had to do anything. But at the same time, MidasWWW was written on some old Unix workstation that had a modified version of Motif which was incompatible with the mainline Motif - the result is the panel at the top being inside the content area instead of on top of it. I didn't know about that - I actually only realized it after I took that shot :-P - and I couldn't fix it because I am not really familiar with Motif, and especially not with the modified version the workstation used - I'd need to know at least something about both to figure out what changes to make.
And see, that was just a little breaking incompatibility between versions. Having access to the source, and Motif being backwards compatible, helped to get it compiling, but didn't help with having it work properly, because there was a breaking change somewhere in between. And since the original developers aren't around, it was up to me alone to figure things out - to do it properly, I'd have to spend considerable time. Time that I didn't want to bother spending, so I didn't :-P
> In contrast, an application written in 1995 that uses Win32 will run out of the box on Windows 10 without any changes. A user can run such an application perfectly fine (and personally I have several applications like that that I'm using) and a developer who wants to work with the code will be able to focus on the application itself (since chances are he already learned Win32 in the past if he worked on any other Win32 program).
I'm sorry but your definition of "works fine" is likely very uncommon. Old applications won't even have @2x pixmaps which leads to everything being a blurry upscaled mess.
If it were that easy we wouldn't be getting regular requests to port MFC applications to Qt at the company I work at ;)
If it's not a high-DPI screen, then it will look the same.
If it's a high-DPI screen, then it will be upscaled to look the same as it did on the old low-res screen, so it's not any worse than it has been before. It's only a "blurry mess" when contrasted to an app with high-res text and graphics, making use of that new screen.
So it won't magically become a high-DPI app, of course, but it will still run perfectly fine, if your baseline is "runs the same as it always did". High-DPI support is a new feature, so it stands to reason that it requires the app to be updated.
As a sidenote, it does look worse with bilinear upscaling, sadly. This is why both Windows 10 in recent versions and new GPU drivers offer the option of nearest-neighbor integer scaling where possible.
For example, my laptop has a 1080p monitor - not exactly hidpi, but the monitor is small enough for Windows to suggest a 150% scale. But this makes anything that doesn't support scaling look blurry (and also breaks several games, but that is another matter), so I prefer to use 100% even though it makes things a bit harder to see.
I do not know why you are getting requests to do that, but keep in mind that people who are fine with your MFC applications are not going to contact you - what you are seeing is only the people who have issues, not all people (EDIT: also, I'm certain you can make MFC applications work on hidpi monitors; you do not need to port them to Qt for that).
Old applications will certainly not support "@2x pixmaps" (though many do support hidpi monitors if you use "Override high DPI scaling behavior" to let the application handle it - however the majority do not, which is why this isn't the default), but despite being blurred they'll still work (and that is assuming you are using a hidpi monitor; in practice hidpi monitors are still a very tiny minority, so chances are you won't even notice it). Though FWIW, I personally do not use hidpi monitors, and one of the reasons is exactly that (the other is that I play games a lot and I do not feel the slightly higher crispness is worth the massively reduced performance).
However, if the ABI had been broken, the application wouldn't work at all, blurry or not.
(also note that Windows 10 will not use bilinear filtering for integer scaling, so on e.g. 4K monitors running at 200% scaling you shouldn't get blurriness)
To be fair, that churn also means the system you get when you install Fedora Workstation or Ubuntu is almost entirely Gtk3 or something built with Gtk3 in mind. You don't get this weird combobulation of utilities built with different UI toolkits with different kinds of scrollbars (and different levels of support for high dpi, touch screens and scroll events) because the old stuff "works" and we lost the event viewer source code ten years ago.
I guess the other chunk of this is the development model seems to prevent that sort of fragmentation within these larger products. The broader app ecosystem is a separate story, and I realize that's most of what you're bothered by, but having that core system be consistently modern is a big thing on its own. And to get there it seems like you kind of have to sacrifice one for the other.
But that isn't very helpful nor really that useful. Computers aren't single-purpose appliances (even if some people treat them like that - and even then, they tend to have different opinions about what sort of appliances they are). Ubuntu (and I guess Fedora) out of the box isn't that useful, and the moment you install anything that wasn't made with, say, Gtk3 in mind, things start to crack. Actually, I think in Ubuntu you do not even need to install anything; just open the web browser and things stop meshing that nicely.
And IMO, if anything, the reason you get all those different behaviors is exactly because things break. If Gtk3 were backwards compatible with Gtk2 (in a fantasy world where the Gtk developers learned from the mistakes of the Gtk1-to-Gtk2 breakage and designed their Gtk2 APIs to avoid it in the future) then all Gtk2 applications would gain, if not all, then at least some of the new functionality in Gtk3. Programs that were abandoned would still get some of the new stuff, while programs under development would be able to keep using whatever they are using and adopt, at their own pace, only the minor changes needed for the new functionality.
Writing something in Gtk3 can break even in minor releases, apparently. I remember there was an update that broke a lot of themes, and apps that did custom theming for their own purposes.
I recently installed Fedora 32 and I am extremely happy with it. I was not a big fan of Gnome 3 for a long time, but now it seems that the project is mature enough. All the Gnome apps look gorgeous and very clean. The whole interface is responsive and quick.
I tried Windows 10 with WSL 2 for a few months, but the experience is far from being good. WSL is slow and the new Windows terminal needs a lot of polish.
Windows 10 itself is a subpar experience. The UI in general is a mess, everywhere from Explorer to Control Panel and Settings. You often find windows and text that look awful on high-DPI displays. And I'm not talking about third-party apps; I'm talking about Windows itself and some MS apps like Skype. Windows also is not as responsive as I would like. Hitting the Windows key to quickly open an app, which is my most repeated action, is sluggish.
The only reason I was using Windows was that my ThinkPad had issues with the microphone and speakers on Linux, but now that Fedora 32 is out with a new kernel, the hardware works like a charm, so goodbye Windows!
There are UIs on Linux (& *BSDs) that you can't get anywhere else - tiling window managers. I'd argue that a tiling WM is the greatest UI for a power user/programmer/devops. There is a learning curve, but once you memorize a few hotkeys you won't be able to go back to a floating windows mess.
Why a tiling wm instead of tmux? The only thing I want to tile is my terminal anyways -- tiled GUIs generally don't work well and alt-tabbing through them is a better UX.
You probably answered your own question there. They might want to tile things other than a terminal.
I used to use Ion3 on a tiny Fujitsu Lifebook P1510D to divide the screen into quadrants containing things like a terminal (mostly used for IRC), Pidgin, a browser, and whatever videos I might be watching.
Quadrants with the same stuff are easy to do in Windows (with Win+arrows). The tricky part is in various autolayouts &c. But that is also where usefulness diminishes for non-terminal stuff. GUI apps usually expect stacking.
As soon as you are tiling mouse-driven GUIs, you're having to do more work to traverse your screen with the mouse vs a stacking WM, where your stacked GUIs can be flipped through with keyboard shortcuts.
I guess it makes sense if you have a bunch of apps you're predominantly reading from instead of interacting with but in my experience those are predominantly terminal UIs anyways.
Why not? I use a tiling wm and I love it. I'm able to make my desktop act exactly the way I want it to act. I don't appreciate my windows being rambunctious.
If I'm on a machine without a tiling WM, I generally use everything maximized and alt-tab like you say. But that's still usually an extra click after opening an application.
I also love tmux, but I have a coworker that doesn't see the point.
So, having options is nice. I want to be able to work the way I want. I also want you and my coworker to be able to work your way.
I guess it depends on your setup but being able to attach to an existing tmux session from another machine is something a tiling wm won't be able to offer.
> Linux, amazingly, has some of my favourite UIs right now.
Which, as a desktop app developer, I find harder and more complicated to test.
I have to test whether my GUI app works properly before someone on whatever Linux distro complains that it doesn't work on any of their GNOME, KDE, Xfce, Cinnamon, LXDE or MATE setups - or, worse, the problem is found deeper in the distro stack, which could be the user's incompatible configuration or the distro itself.
Windows and macOS have a default desktop environment and window manager that I can test against, which is very predictable and stable, unlike the complexity of the Linux desktop stack and its distros.
If you have a problem that manifests itself under one DE but not under another, you are doing something very wrong - unless you are doing something very deeply integrated with specific DE.
Seriously, for a normal, user-facing app, all you should care about is at the level of the framework you used.
> If you have a problem that manifests itself under one DE but not under another, you are doing something very wrong - unless you are doing something very deeply integrated with specific DE.
I see. So you're saying it's the app developer's fault when a Linux user changes any system component of the Linux desktop stack (i.e. the display manager or desktop environment) and the app fails to start or crashes because it only runs on GNOME or KDE, or is X11- or Wayland-specific. It is actually an entire-distro problem. When someone decides to make the switch to full-blown Wayland or some other configuration, all bets and guarantees on stability are off, and they start finding alternatives for other apps because the apps they tried don't work on their setup. [0] There's a reason why they say "UNSUPPORTED".
For example, Firefox couldn't start on SwayWM, and the number of issues from people unable to run it on their distro setups was mounting [1]. Such issues are easy to test on Windows and macOS but very difficult on Linux distros, resorting to trawling through the whole desktop stack to find the issue, as demonstrated in this ticket [2]. Issues like this still exist in many Linux apps, not just Firefox.
> Seriously, for a normal, user-facing app, all you should care about is at the level of the framework you used.
Like I said, no guarantees are made if the user ends up changing system components or old frameworks and the app crashes on their setup, when it was specifically made to work on a default install of XYZ distro. The point is that the freedom to mix and match subsystem components makes it complicated to test your app or trace a bug, which could at worst be a regression in one Linux subsystem component - which is why some see it as both a blessing and a curse.
I mean, you're free to release an app and say it's only supported on GNOME or KDE on $distros. What the parent is getting at is that in most cases when developing with GTK or Qt you have to make an effort to tie yourself to a specific DE. And as long as you don't do that, a crash is a bug in the DE/WM or user error, not your app.
Basically if you don't try to be super fancy you make it other people's problem if your app crashes.
Imo, the key difference between these two worlds (which you hint at in your analysis of the ascription of fault at the very beginning) is that when most or all elements of an operating system's stack of essential software are fixed, that leaves the capacity (and responsibility) of deciding the nature and quality of users' experiences with a piece of software in the developer's hands alone. But when an operating system is flexible and its components are somewhat interchangeable with alternatives, or the target platform is actually a whole family of operating systems, developers can't and don't have such sole responsibility or capacity. Distro and package maintainers especially do crucial work in sharing that burden and making sure those end-user experiences are coherent and pleasant, and in general they do a great job of it.
When there are problems, I think it's equally useful to consider them as logistical and social problems among community members (developers, distro maintainers, package maintainers, and end users) in managing that shared control and responsibility. Some examples:
• End users who are used to navigating the alternative world of mostly-uniform platforms may not understand that not everything is up to the developer, which can misdirect their frustrations (and their bug reports).
• By the same token, advanced users and smalltime contributors form a crucial layer of these ecosystems wherever they are healthy, by not only running up against integration issues themselves so that they can be identified, but by figuring out how and where to report them and how to mediate information between the authors of applications and people who work to integrate them into their operating systems (distros). Developers who want wide support are faced with the task of seeking or building those relationships somehow (or just rolling the dice).
• Developers of new pieces of software may find themselves faced with a social bootstrapping problem: they need people using their software, or at least excited about it, in order to generate the interest in their software required to get someone working on integrating it with a particular distro.
• Users who change out major system components in a distro (like the init system or the display server) take on (or create a need for) a new role in doing so: they become integrators of the system just like that advanced layer of users I mentioned, or distro maintainers. A clearer understanding of that might ease a lot of frustration for users and developers about things ‘not working’. (This is reflected in Gentoo's self-description as a ‘meta-distribution’.)
I'd like to see more of this acknowledged explicitly, maybe by two official standards of support: original development target and explicitly co-supported. A lot of F/OSS projects sort of already do this by pointing to the program's inclusion in various distro repositories on their official pages, sometimes with a word of warning about especially experimental cases (i.e., packaged but not co-supported).
Supporting more distros will always require more work from developers, but such work doesn't always have to be intensive development work. It can just be a working relationship between a developer and a motivated user of some distro who the developer agrees is more or less competent to figure out the technical boundaries of integration problems and work with them in a reasonable way to solve them equitably. I know it _can_ happen because I see it play out all the time when it comes to packaging software for my favorite Linux distro¹, which because of the distro's weirdness can require some hints or even minor source code changes from application developers. When the motivations are communicated clearly, application developers are often willing to incorporate minor compatibility changes for the sake of some boutique distro's compatibility, with very little fuss.
Idk what really can be done to communicate this to users, or how to better manage the sort of clusters of related configurations that exist on meta-distros like Arch or Gentoo, but both seem in principle doable to me.
1. You compare a dev version of a Linux compositor to stable versions of the other OSes
If the app fails to start or crashes when it runs under one WM or another, then one of the following is happening:
- app is using something/talking to something/assumes something in a way it is not supposed to -> bug in the app
- the WM or compositor doesn't take some message or poking in a way it is supposed to take -> bug in the WM or compositor.
The example you posted is Sway, during the time it was in development. I stress again: it was in development, it was not supposed to be bug-free, and no distro shipped it. Applications crash under dev versions of macOS and Windows too; you just don't see that because the respective bug trackers are not public, and the respective dev builds of the OS are not very public either. On Linux, you have access, but then you can break something, with the implicit understanding that when that happens, you get to keep the broken pieces.
Nobody sane would consider that supporting development versions of any package on any OS is something you are supposed to do.
Related: switching to Wayland is mostly a non-issue. If your app previously used X11, it will continue to run using X11 under XWayland. If anything, the Wayland developers made it too seamless. Unlike Apple, which made XDarwin an eyesore, so the users would be aware and demand updates.
2. You compare different layers in the stacks
On Windows, you have to use an MS-provided UI toolkit (whether Win32 or UWP, doesn't matter). On macOS, you have to use Cocoa. Neither of these vendors will let you talk directly to their respective compositor; there's no way around it.
On Linux, just because you can - everything is transparent - doesn't mean you should. If you want to play at the same level as you do on Windows/macOS, work at the same level: use a toolkit that will abstract the same things for you, as the vendor-supplied frameworks on the other OSes do. Gtk, Qt, Electron, doesn't matter; they do the heavy lifting for you. Do not talk to the compositor directly - then you do not even have to care whether it is X11 or Wayland; the toolkit will arrange it for you.
By inserting yourself at a different level in the stack, you take on solving all the problems that are solved there. If you want to add to your workload, you are welcome to, but then be honest, at least with yourself, and admit that it is easily avoidable, self-inflicted pain.
---
I understand that there is no system-wide SDK with a given ABI, and that can be frustrating sometimes. I understand that the ABIs distributions provide are semi-stable. There are solutions for that:
1. Pick a distribution you will support. Blackmagic supports RHEL/CentOS, nothing else. GOG supports Ubuntu. If it runs anywhere else, cool, but you are on your own. Similarly, you can support several of them, but when someone decides to do X, where X breaks other apps, he is on his own. When the user is competent enough to break something on his own, surely he is competent enough to fix it too. You don't support users who hex-edited their system libraries on Windows either.
2. Use Flatpak. You choose the runtime; you choose when your app switches to a newer runtime. Most runtimes have defined support periods and document what can change during that support period (e.g. the freedesktop one allows for security updates and updated OpenGL drivers). This has the advantage that nobody is going to try using their self-compiled libraries as a replacement for the runtime-supplied ones.
> Pick a distribution you will support. Blackmagic supports RHEL/CentOS, nothing else. GOG supports Ubuntu. If it runs anywhere else, cool, but you are on your own.
I'll take "reasons it still isn't the year of the Linux Desktop" for $1000 Alex.
Well, for Blackmagic products you usually need to attach hardware that is well beyond most home budgets. What distributions they do or do not support is immaterial for the Linux desktop, but they were provided as an example of the approach.
For GOG, their releases run very well everywhere; they just explicitly support Ubuntu.
The point was that WMs are not even a factor in the support matrix. They either manage windows or they don't, but your app doesn't care; it just paints its window. How the user places it on the screen is his problem.
I'm a long-time Mac user who is planning a gradual transition to Linux and FreeBSD. However, I have used various desktop environments over the years in the context of installing Linux and FreeBSD on older hardware and in development VMs, and so I'm familiar with the Linux desktop ecosystem even if Macs are my daily drivers.
One of the things I appreciate about the Unix ecosystem is the wide amount of choice of window managers and the broad support for themability. This means that users are less affected in Linux by current design fads. For example, the reason we have MATE and Cinnamon is because some people didn't like the changes made in GNOME 3. Now there are at least four well-supported GTK-based desktops: GNOME, XFCE, MATE, and Cinnamon.
If Apple or Microsoft makes a UI change and I don't like it, I generally have to grin and bear it unless I switch platforms. If KDE, GNOME, Windowmaker, or another desktop environment/window manager makes a UI change that I don't like, then there are a lot of options I have to change things to my liking. That's the power of the Unix desktop ecosystem.
I too am a big fan of the UI that comes with the Cinnamon DE (the one that comes with Linux Mint). I sometimes install other UIs just to see how they've evolved over the years, but keep coming back to Cinnamon for daily use.
I’m using Gnome in dark mode and I agree for the most part, although I find LibreOffice looks absolutely dreadful in recent versions. I’m not sure whether it’s a regression or something wrong with my setup.
The webpage thinks it’s wonderful but in use it’s ugly, slow, distracting and horribly optimised for desktop use - everything is too big and bold and it creates cramped UIs where controls are needlessly hidden in menus.
> They're clearly going for a gradual changeover to the new UI.
The thing the GP highlights though, shows it's never really "gradual."
Just by being observant, you can pretty much date the last time any particular component of Windows was touched. Windows 10 has dialogs, controls, UI themes, HIG sensibilities from all of Windows 3, 95, 98, XP, Vista, 8, and 10 mixed together. It feels very haphazard.
They shift progressively more functionality into the new Settings app, and in most cases when they’re done doing so I think the end result is markedly better than what it used to be. It’s a long, drawn-out process, but I do like where it’s headed. (Until they finish each thing, though, it’s regularly jarring to go between the two. Control Panel is perhaps hidden away too well; some things like Network & Sharing Centre need to be accessed through the Control Panel, but are basically hidden so that it takes some skill to find them any more.)
I hope they never get around to changing the Event Viewer or Disk Management. I know they are far from perfect, but can you imagine it getting the same treatment as the Task Manager?
Are you joking? Event Viewer's "filter event sources" UI is unusable - the scroll causes the check boxes to vanish. It's been unusable for 20 years. Virtually every ops person needs to be told to use the keyboard and just accept that a few erroneous sources will be selected or use the spacebar to deselect them. Even more frustrating, just typing the name of the event source seems to work INTERMITTENTLY!
In the same period, Task Manager has gained nice charts, it's begun to display stats on network and disk as well as CPU usage, and it's gained a simplified view that my mother can use to close a crashing program. I've been spectacularly impressed with the Task Manager changes - simultaneously making it easy for low-skill users and adding advanced features, making it unnecessary for me to install Process Explorer on every machine I use.
Am I missing something? The Task Manager has barely changed since Windows NT 4, and if anything it gained more features and output panels and became more usable over time. Also, it doesn't use the modern UI stuff; it still seems to be Win32.
(btw, just in case... you did click "More details" at the bottom left side, right? Otherwise it looks more barebones than the Windows 3.1 task manager :-P)
I liked the Metro design, which felt hip, new, and modern when I first came across it while developing an app for a Windows phone back around 2013. But they lost the phone OS battle to iOS and Android.
I still think it was a huge misstep to push Windows down the tablet route. If they wanted that sort of device they should have extended the phone operating system to it. It really does feel like the worst of both worlds.
Windows CE had a really nice developer experience IMO. Visual Basic worked on it! Knocking together basic data entry mobile apps was within the range of anyone who can do advanced Excel.
This might be of questionable value to "real" developers but the business value of this type of power is immense and gets things done that would be unaffordable to do today.
It's only relevant because it explains why WinUI 3 is still using XAML, even though XAML is extremely painful to work with: it's verbose; it mixes content, presentation/style and behaviour in the same XAML document (they learned nothing from HTML+CSS... which makes sense, because WPF was designed around 2003, when IE6 was at the peak of its dominance); the built-in layout controls are anemic and have now fallen way behind CSS's layout functionality; and so on and so forth.
What Microsoft should do is snapshot a recent revision of CSS and make that the basis for the style and layout of XML- or JSON-based structured documents, and then extend it where appropriate. Like it or not, all layout systems are eventually going to converge (be it a desktop GUI system, web-page HTML+CSS, WPF/Jupiter/XAML, Qt, Apple's constraint-based layout system, etc.) - it feels like a waste of resources to build yet another layout system (or in this case: to try to squeeze more life out of XAML and whatever WPF's latest descendant is called). Remember that using CSS does not mean having to use a web-browser engine, or JavaScript, or even HTML - CSS was designed from the start to be agnostic to the types of documents and environments it's used in.
You can definitely move templates and styles to other files and reference them from there. You can do that with pretty much any XAML element. XAML is actually compilable-to-IL markup, unlike HTML/CSS/JSON, which need to be interpreted.
I don't see how HTML/CSS does anything any better, XAML is verbose, but the tooling takes care of most of it.
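For what it's worth, separating the styles out is only a couple of lines; a hedged sketch in WPF C#, where Styles.xaml is a hypothetical ResourceDictionary file:

    using System;
    using System.Windows;

    // Merge a separate resource file into the application's resources,
    // keeping templates and styles out of the page markup.
    var dict = new ResourceDictionary
    {
        Source = new Uri("Styles.xaml", UriKind.Relative)
    };
    Application.Current.Resources.MergedDictionaries.Add(dict);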
> I don't see how HTML/CSS does anything any better, XAML is verbose, but the tooling takes care of most of it.
CSS's modules for layout are far more advanced than WPF/XAML/WinUI's. Never mind the more brain-dead aspects of WPF/XAML/UWP like `<border>`; WinUI only has <Canvas> (absolutely positioned elements), <StackPanel> (similar to CSS's flex-box, or normal block layout), and <Grid> (which is much closer to HTML <table> than CSS `display: grid;` in terms of capabilities) - all other layouts have to be done by hand, which requires a deep understanding of WPF/XAML's layout system internals - and who wants to invest the time in that when there's still uncertainty about the supported lifespan of WinUI?
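As a rough illustration of how low-level those primitives are, here's a minimal sketch in WPF-flavoured C# (WinUI exposes the same panels under its own namespaces) building a single two-column form row by hand:

    using System.Windows;
    using System.Windows.Controls;

    // Auto-sized label column next to a star-sized input column.
    var grid = new Grid();
    grid.ColumnDefinitions.Add(new ColumnDefinition { Width = GridLength.Auto });
    grid.ColumnDefinitions.Add(new ColumnDefinition { Width = new GridLength(1, GridUnitType.Star) });
    grid.RowDefinitions.Add(new RowDefinition { Height = GridLength.Auto });

    var label = new TextBlock { Text = "Name:" };
    var input = new TextBox();
    Grid.SetColumn(input, 1);   // placement via attached properties, not style rules
    grid.Children.Add(label);
    grid.Children.Add(input);

Everything positional is spelled out cell by cell, which is roughly the <table> level of expressiveness described above.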
I've noticed that Microsoft only commits to supporting a UI framework when at least one of their major products relies on it (which explains why WinJS became irrelevant, _fast_, while MFC and WPF are clearly here to stay). I note that Microsoft has only one major product relying on WPF: Visual Studio - literally everything else uses some non-public UI platform like DirectUI (Office) or MFC.
Only a small number of programs and utilities that ship as part of Windows uses WPF. Windows has more WinForms-based GUIs in it than WPF-based GUIs.
It’s a far cry from the Longhorn demos of 2003, which showed the entire Windows UI being made with WPF, and Explorer itself hosting third-party applications’ WPF components.
Yeah, I meant WinUI/UWP XAML. Starting with 8.1 it's been the standard for any new or overhauled Windows shell UI, along with the built-in apps. As you mentioned, this was the original intention for WPF, and the failure of this plan (which the Windows team largely attributed to the dependency on .NET) was the main reason WinUI/UWP and WinRT came about in the first place (as basically a do-over of the Longhorn plans, but ditching the .NET dependency to build on COM instead). So it succeeded at getting Windows to adopt it (at least for a while - apparently in 10X they are starting to use web tech for shell pieces like the new Start menu, although I think the actual window manager itself is still XAML based), while WPF failed.
This is a subjective take. I don't know whether any of you share this concern, but I feel like almost all new UIs are unattractive (maybe except Apple's, and I'm an Android/Windows user). The new Windows UIs are flat, the colouring is off somehow (I don't have the lexicon to express what is missing), they waste a lot of space, and they look like applications made for kids.
Take the calculator application included with Windows 10. It takes time to load. Earlier it was instantaneous. It takes up way more screen real estate too. Maybe the new design helped them to make this app easier to use on touch devices, but from my perspective, in all other respects it is going backwards.
I also think the modern Windows 10 UI is ugly, but the worst is that it's very inconsistent e.g. right click here and get this popup style, but if you right click there you get another one. Same goes when you browse some computer settings (not talking about the most obscure or very technical stuff).
And yeah, it's ridiculous that we have incredible CPUs compared to what I started with, yet simple apps take way more time to load with no benefit for me (arguably even looking worse). But that's the state of things and I don't see it changing - well, maybe the next calculator will be in Electron.
I agree. I remember when I bought a refurbished ThinkPad T430 to play with back in late 2016; it came with Windows 10. The interface looked gaudy on the laptop's 1366x768 display, with all of its oversized elements. It seems to me that Microsoft has a penchant for UI designs that consume a lot of vertical screen real estate (i.e., the Microsoft Office ribbon). On the flipside, I've had the opportunity to use Windows 10 on a 4K display at work, and it looks much better on it. It seems to me that Windows 10's UI is designed for high-resolution displays, but the problem with this assumption is that there are a lot of business laptop users out there making do with 1366x768 displays. The old-style Windows Classic or Windows 7 UIs with their conservative use of screen real estate would fit perfectly with these displays, but unfortunately these themes are unavailable in Windows 10, and it still wouldn't address the problem of applications written using Windows' modern UI guidelines.
macOS still looks good on my 1440x900 2013 MacBook Air, and I like how the Cinnamon desktop looks on my ThinkPad T430; I use Linux Mint on that ThinkPad now.
Tip: you can purchase a nice IPS 1080p display for ~110 EUR / ~130 USD on AliExpress. Swapping it with the existing one takes ~30 mins if you are doing it for the first time.
(I damaged my original 1600x900 TN panel and the quoted price for the part was more than the third-party IPS, even without the labor. The third-party panel is much nicer.)
Also related: macOS was originally optimized for 72 dpi; retina at @2x is 144 dpi. Windows (and also Linux DEs) optimized for 96 dpi, and 200% is 192 dpi - which is a much bigger jump, and they look atrocious if the display is in the 140-180 dpi range. That's why macOS can look nice at the ~WQHD displays that MBPs ship with, while Windows laptops need at least 3200x1600 to look nice at 200%.
I find the flatness makes things look horribly cluttered, for example in Outlook 2016: too few borders, no distinction between buttons and plain text, no hierarchy, uselessly huge components...
(Edited to be more specific)
Microsoft didn't start these trends, and it's sad that they are following. Win7 was a gold standard of usability and speed. Humans have fast 3D contour perception for a reason: it helps us find buttons and fruits.
I dare you to find your sound device's control panel in Win10.
It's fascinating to watch how even babies pick up on what's a pressable button (and what's not) in the real world via contour perception. Before they even have an understanding of cause and effect, they love to find buttons on physical items and press them.
I wonder why UI designers stopped trying to leverage this principle.
IIRC Microsoft did start these trends with Metro. Before Metro, all UIs (from MS, Apple, Google and even Linux) used more visually rich themes (it was during the time when Apple went a bit too hard on skeuomorphism). Also AFAIK even though most associate Metro with Win8, it started on Windows Phone 7.
My Windows 10 Calculator.exe uses around 20 threads. Seems a bit excessive for a calculator app. Anyone with access to the old classic calc for comparison?
Open up Task Manager, go to the Details tab and show the Threads column (right-click on a column header and pick Select columns).
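If you'd rather script the comparison than click through Task Manager, a quick hedged sketch in C# ("Calculator" is the Store app's process name; the classic one was "calc"):

    using System;
    using System.Diagnostics;

    // Print the thread count of any running calculator processes.
    foreach (var name in new[] { "Calculator", "calc" })
        foreach (var p in Process.GetProcessesByName(name))
            Console.WriteLine($"{p.ProcessName} (pid {p.Id}): {p.Threads.Count} threads");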
As a macOS / iOS user I am with you in a way. For iOS I much preferred the older Skeuomorphic design compared to the modern flat design that is seen everywhere these days.
Seriously can you do a Visual Studio or Photoshop UI with this?
I hate the Fisher-Price mode for the control panels in Win10 (which I guess is made using WinUI). It's so wasteful of space, and half the options are missing. I wish there were a button to get rid of these entirely.
It will be cool to update our MFC app to look better. I wonder if it will still work on Wine then; it does now, with VS2019 and the old Win98 look. But the app is almost 20 years old... still maintained and worked on today. And until MS stops supporting MFC, no change is needed. Change just costs money for no significant profit or advantage to us or our customers.
The new API is only supported on the latest versions of Windows 10. If your app has users on older Windows versions, the suggestion of using this API will almost certainly be shelved for at least a few years. That’s how it works on Android - Google shows off a new API and we make plans to use it 5 years later, when users actually have it on their devices.
WinUI 3 is decoupled from the platform, which is one of its major improvements; apps built with it will run on Windows 10 builds as far back as 1803 (that’s two years old and is EOL even for enterprise customers).
However the person I was replying to has a 20 year old app. I'm going to guess they still have users on Windows 8, 7, maybe even Vista and XP. Using the WinUI API, even a bit of it, will leave those users in the lurch.
True, but anyone making commercial apps has to support Windows 8.1 (still not EOL!) for a few years at the very least. Some even support Windows 7 (only EOL'd a few months ago).
Haven't thought about MFC in many years. Your comment reminded me of a time in the 90's when I was learning how to build UIs using Win32 calls and then I learned about MFC and I thought it was the greatest thing ever.
Can anyone offer any insight on this? If memory serves, WPF is backed by DirectX. How can they hope to beat that performance? Performance improvements over its MVVM? Modern UI toolkits tend to add high-level abstractions and degrade performance (e.g. by relying on web technologies), rather than enhance it.
The framework layer of WPF is implemented in C#. WinUI is all C++, including the framework and the compositor that makes the DirectX calls.
In the framework layer, typical UI updates are faster because the WinUI data binding system uses code-generation rather than the reflection-like runtime binding system that WPF has.
WinUI also includes the rather incredible Windows.UI.Composition APIs, with things like ExpressionAnimation that enable lots of cool animations (like parallax, sticky headers, or cursor-relative ones) at a stable 60 FPS.
Disclosure: I work at Microsoft on the Windows team.
> code-generation rather than the reflection-like runtime binding system that WPF has
That's a good move. If a convenient abstraction can be implemented in a way that 'compiles out', it ideally should be implemented that way. Does this enable more compile-time checks?
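It does, as far as I know: since {x:Bind} generates real code, binding to a misspelled or wrongly typed property becomes a compile error instead of a silent runtime failure. A rough conceptual sketch of the difference (ViewModel and UserName are made-up names):

    using System;

    class ViewModel { public string UserName { get; set; } = "ada"; }

    class Demo
    {
        static void Main()
        {
            var vm = new ViewModel();

            // WPF-style {Binding}: the property is looked up by name via
            // reflection at runtime; a typo only fails when it runs.
            var prop = vm.GetType().GetProperty("UserName");
            Console.WriteLine(prop?.GetValue(vm));

            // WinUI-style {x:Bind}: the XAML compiler emits a direct,
            // strongly typed access like this, checked at compile time.
            Console.WriteLine(vm.UserName);
        }
    }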
Unmatched doesn't mean good or bad - it just means it's not the same - but the wording implies it's good.
If it were a real claim, they would have some numbers and an example of old vs new or new vs web or compare it to Android UI or something else known to be bad. And then we could pick apart the code examples and have a good time.
Desktop development is a complete mess on Windows at the moment. Core 3.1 definitely has many good improvements and is good for class libraries, but adding a GUI currently has few desirable options:
1) The WinForms designer doesn't work with complex or 3rd-party controls
2) WinUI doesn't work on Win32
3) WPF works but lengthens development and will soon be out of step with this WinUI
4) Really, everyone wants a cross-platform GUI, but not much is forthcoming except Uno or Avalonia, and these are risky for large projects
5) Blazor could be interesting, but only works as a standalone desktop app using Electron.NET, which seems a horrible dependency
6) Old WinForms on the defunct .NET Framework works but looks horrible
7) Old 3rd-party WinForms controls on the defunct .NET Framework are expensive
I don't get it either. Developing on windows used to be trivial. Winforms designer was drag and drop, click into code, write an extra module, done.
I was told: oh, you are an idiot, you need to be using XAML/WPF etc. I tried - the stuff looks worse, is much more complicated to develop and does LESS!
Even Visual Basic (remember that?) made GUI development trivial I thought compared to these new improved approaches.
The reason is that humans have two eyes, which give us the ability to distinguish depth easily (meaning low cognitive load - our brains are optimized to process this).
In a UI, whenever there are more than two dimensions, depth should be shown (simulated, technically), because with our two eyes we cannot discern depth easily on a flat surface.
Obviously you can go overboard. For instance, in HN, flat makes sense. But in a windowing system, shadows(1) are essential to show depth. Otherwise it increases the cognitive load required to distinguish between the foreground and background.
(1) - you could use blurring (or other techniques) as well, I'm just using shadow as a simple example.
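For what it's worth, restoring that depth cue is usually a one-liner; a hedged WPF/C# sketch, where button stands in for whatever control you want to lift off the background:

    using System.Windows.Media.Effects;

    // A soft drop shadow makes the control read as raised above the surface.
    button.Effect = new DropShadowEffect
    {
        BlurRadius = 8,     // softness of the shadow's edge
        ShadowDepth = 2,    // apparent height above the background
        Opacity = 0.4
    };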
I wonder if it is the same person who has worked on the implementation of e.g. Win32 "BUTTON", MFC's CButton, Windows.UI.Xaml.Controls.Button, System.Windows.Forms.Button, System.Windows.Controls.Button... :)
So many red flags in such quick succession. The setup page (https://docs.microsoft.com/en-us/windows/uwp/get-started/get...) says you have to download Visual Studio, enable your device for development and register as an app developer. The latter two indicate just how much control MS has over your machines - you have to pretty-please ask MS if it's OK to make your own programs - and the former shows that they still can't make a decent API and need an IDE to do all the work, conveniently locking you into their tool suite.
A little bit later this is confirmed when they show you hello world (https://docs.microsoft.com/en-us/windows/uwp/get-started/cre...), which instantly creates a mess of JSON, XML and C# for you so you don't realize how ridiculously complicated it all is.
It's the other way around, isn't it? They're not going to reimplement the Win32 API on top of WinUI, but WinUI uses Win32 in part for its implementation on some platforms (that platform being classic desktop Windows).
Isn't WinUI the extraction of the UI part of the UWP stack? Yes, namespaces break, but the concepts and controls are from the same lineage. UWP is more than the UI; it also has sandboxing and deployment.
So essentially, it is not a new UI. Just a new version of an old one.
(consider e.g. WinForms was also ported from .NET Framework to .NET Core. That is also not counted here ;))
> Isn't WinUI the extraction of the UI part of the UWP stack? Yes, namespaces break, but the concepts and controls are from the same lineage. UWP is more than the UI; it also has sandboxing and deployment.
The UWP has everything except a GUI.
> So essentially, it is not a UI.
Fixed it.
> (consider e.g. WinForms was also ported from .NET Framework to .NET Core. That is also not counted here ;))
So every day a new framework. How many days will pass until UWP is obsolete?
Actually, I think they made this specifically because Win32 is here to stay. UWP is officially dead and you can upload Win32 apps to the Microsoft Store now. This paves the way for some parts of the UWP dream (working on high DPI, supporting touch, fancy graphics without needing custom assets) in Win32.
Between this, MSIX, containerization, etc the fat app landscape could be pretty good. Too bad all my vendors who are still releasing fat apps will never modernize and all the new vendors are web-only.
They say that the VM and the remote desktop connection are "optimized" but also that UWP/Modern apps will be faster and better for battery life, so it doesn't sound like a great future for win32.
The value of Windows is in the Win32 API, due to the existing investment into software. If you have existing code base, you want to keep it running with minimal ongoing investments.
For a case study of interest for newer APIs, see also Windows RT.
Windows RT failed because it had no applications except for MS Office.
So Win32 was there, ISVs just were not allowed to recompile their apps for the ARM target. They were expected to port them to Modern API. Which of course, they didn't.
> UWP is the future of Windows APIs, Win32 is mostly frozen since Windows XP.
Frozen is usually a good thing.
> Microsoft is just making UWP similar to what Google is making with AndroidX, detaching the technology from the OS version.
... and from users also. Will they get a lawsuit from Google for copying their UI, as they got from Apple? I think not, because the UI is so bad that they would be ashamed to go to court with something like this.
Frozen is good because it means you can develop reliable software that will continue to work in the future without running on a constant treadmill of useless updates.
The API is frozen, not the features and bug fixes. New APIs can be (and are) introduced, and bug fixes and features are added. As a very simple example, see how you can enter emoji in any Unicode-aware Win32 input box with Win+; even though this functionality was introduced in (IIRC) the Windows 10 Fall Creators Update.
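And the flip side of the frozen API is that the old entry points never move. A hedged sketch calling a Win32 function whose signature hasn't changed in decades, from modern C# via P/Invoke:

    using System;
    using System.Runtime.InteropServices;

    class Program
    {
        // user32.dll export; the W (Unicode) variant has kept this
        // signature since the 32-bit days.
        [DllImport("user32.dll", CharSet = CharSet.Unicode)]
        static extern int MessageBoxW(IntPtr hWnd, string text, string caption, uint type);

        static void Main() => MessageBoxW(IntPtr.Zero, "Still here.", "Win32", 0);
    }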
It is because even if they come via XAML Islands or UWP/COM interop, at the end of the day it is still UWP/COM that is going forward.
Looking forward to how Windows will look now that it has been officially communicated that Windows 10X is also coming to desktops and laptops, with its sandbox model for 100% of userspace.
All that is irrelevant, the point is that having a frozen API doesn't mean that the API's implementation is also frozen. You can still get new features on a frozen API as well as new APIs alongside it.
> What else did you want them to do? Continue adding features to long deprecated APIs?
Yes. It isn't like deprecating APIs is forced on them by some otherworldly power, they decide to do that and they can also decide to not do that. They are in full control of their technology, they can do whatever they want.
It used to be possible to click on a list of items and hit an alphabetical key, and it'd jump to the first result starting with that letter. Typing more would go to that particular matching item. This isn't possible in WinUI, as far as I could test in native Windows apps.
You can't Ctrl+F in Settings - it won't highlight the search results.
And there are tons more missing features like that. Did anyone try this UI with just a keyboard?
This supports "WebView2(Chromium-based engine)" natively, which may add one reason to why Microsoft chose chrome and not firefox as a browser engine:
- they share one tech for their browser and UI
- they reap the benefit of their work (and the community's work) on Electron
- it opens the door to sharing one instance (or at least the main libs) between different apps using this engine, solving the "Electron spawns one engine per app" problem
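A minimal sketch of hosting that shared engine from C#, using the WebView2 WinForms control (from the Microsoft.Web.WebView2 NuGet package; the form itself is illustrative):

    using System.Windows.Forms;
    using Microsoft.Web.WebView2.WinForms;

    class BrowserForm : Form
    {
        public BrowserForm()
        {
            // Every WebView2 instance rides on the Edge (Chromium) runtime
            // already installed on the machine, rather than bundling its own.
            var webView = new WebView2 { Dock = DockStyle.Fill };
            Controls.Add(webView);
            Load += async (s, e) =>
            {
                await webView.EnsureCoreWebView2Async(null);
                webView.CoreWebView2.Navigate("https://example.com");
            };
        }
    }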
Also, I find the UWP monochrome UI style extremely confusing and difficult to use. I don't know why it is so different from the Office 2019 UI, which I find usable.
No one wants to use touch on Windows devices, because Windows devices are for doing actual work, and touch is a shitty, shitty input method.
I'm going to stick my fingers in my ears and keep trudging along with Win32, because it will never die. Trying to kill it would be monumentally dumb, and I don't think Microsoft is that brain-dead.
I can't tell if you're joking or not, but this seems very sensible to me, because it could offer great performance, and it's easy to create bindings from almost any language to C.
Microsoft product documentation is always bad. The first thing I expect is a code sample and a screenshot, so I know what it's actually like before I try it.
It's like Microsoft's documentation team doesn't have the developer mindset in mind.
Not at present, at least not reasonably. There are Python WinRT bindings under development (https://github.com/Microsoft/xlang/tree/master/src/package/p...) but they don't (yet?) support the "composable types" type system feature (basically a form of implementation inheritance) that the WinUI and Composition types require. So you'd basically have to write your own support for this.
And on their copy of Windows, they're running Chrome and 10 copies of Electron.
In 1995, Marc Andreessen said "Netscape will soon reduce Windows to a poorly debugged set of device drivers", but I guess that turned out to be Google instead.
LOL, "succeeded" means something very different in this context than what you are implying. Linux did succeed on the desktop, and in fact it has made great strides that make performance and desktop management better than Windows in almost every way. Just because it doesn't have majority market share, in part due to Microsoft making backroom deals to ship computers from the slave-labor factories it supports in order to sell $200 pieces of junk adware laptops at Walmart, doesn't mean it hasn't been successful.
But many companies will be on it for a lot longer, and this is version 3 of WinUI. The question is: "What tech stack can we use to target all current versions of Windows that isn't being deprecated anytime soon?" As Windows 7 dies out this may become the answer, but for the past several years, and probably the next few, Win32 is the only possibility.
Microsoft had to partner with Linux to stay relevant or risk losing its developers. It also had to build containers into the OS or risk losing the server market. While Windows may come pre-installed by most manufacturers, when it comes to tech and engineering no one uses Windows unless they are subservient.
I don't really see the problem here. An alpha release isn't supposed to be a rock solid use-it-everywhere solution, it's meant to show the path forward instead.
> Developers flocking to WSL in 6..5..4..
Maybe some, sure.
> Linux dead in 3..2..1..
What? Linux on the server won't see any change. Linux on the desktop, maybe, but I doubt most people using Linux on the desktop are going to even consider switching because Windows has a new UI framework.
The only big advantage I see is that Windows has one UI framework while Linux has multiple different ones.
It does feel like there is a general lack of taste in Microsoft. There are a few pockets of good work but overall it doesn't seem to be a priority for them.
Use Qt or Electron and move on with it. With PWAs being supported in the app store and now with Chromium Edge, might as well just build something that will work on all platforms and mobile too. While I admit that there are too few options for lightweight fast Linux UIs, especially with Vulkan acceleration, Qt performs incredibly well and Chromium and Node are only going to get faster.
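The whole cross-platform pitch fits in a few lines of Qt Widgets; a minimal sketch, assuming Qt 5+ is installed (the same source builds on Windows, macOS, and Linux):

    #include <QApplication>
    #include <QPushButton>

    int main(int argc, char** argv) {
        QApplication app(argc, argv);
        // One code path, three desktop platforms.
        QPushButton button("Hello from the same code everywhere");
        button.resize(320, 60);
        button.show();
        return app.exec();
    }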
Sorry, but it already is. No one writes layout engines or canvas rasterizers anymore; they just use Chromium. Spotify, Teams, Slack... I expect even Word and Excel to eventually transition.
It's crazy how popular Electron is these days. I want to say that it's become the de facto solution for creating hybrid native apps on Windows but maybe that's selection bias due to the apps that I use (dev, gaming)?
I would say the main reason for the popularity is because it allows web devs to create apps for desktop with minimal effort for all available platforms.
C++/CLI is a necessary evil for operating on the CLR, where some object lifetimes are managed by the CLR and others follow standard C++ memory management. Ultimately its raison d'être is to bridge native code to the CLR where P/Invoke or COM isn't sufficient (there are lots of reasons that could be the case), so we deal with it.
C++/CX, on the other hand, only exists because Windows Runtime components, while based on COM, are a superset with new features that couldn't be supported with C++98, and MSVC lacked the necessary C++11/17 features to implement them without compiler extensions.
Syntax highlighting and Intellisense being absent for MIDL 3 does indeed cause annoyance, however.
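For readers less familiar with the dialects, here is a hedged side-by-side sketch of the same WinRT call in both (the two halves belong in separate projects, since C++/CX needs the /ZW compiler switch):

    // C++/CX: hats and ref new are MSVC-only language extensions.
    using namespace Windows::Data::Json;
    JsonObject^ obj = ref new JsonObject();
    obj->SetNamedValue(L"answer", JsonValue::CreateNumberValue(42));

    // C++/WinRT: the same WinRT types projected into standard C++17,
    // no compiler extensions required (call winrt::init_apartment()
    // once at startup).
    #include <winrt/Windows.Data.Json.h>
    winrt::Windows::Data::Json::JsonObject obj2;
    obj2.SetNamedValue(L"answer",
        winrt::Windows::Data::Json::JsonValue::CreateNumberValue(42));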
I guess those explanations were for the wider audience, I am perfectly fine with Microsoft's C++ dialects.
Actually my only complaint is the downgrade in tooling experience in the name of an ISO purity that I don't care about, and that other compiler vendors happily violate with their own extensions anyway.
For the time being I don't plan to adopt C++/WinRT until the tooling experience catches up with C++/CX.