So you want to write a GUI framework (cmyr.net)
263 points by mwcampbell on Aug 11, 2021 | 173 comments



For all the things the article covers, it doesn't mention tables, which to me are one of the unsung heroes of all good GUI frameworks. They are so deceptively simple looking, and yet, once you peek under the hood, the amount of complexity is daunting. E.g., do you allow all widget types to be in table cells? What happens when there are more items than can be displayed? Now you need scroll bars. What happens when a table is resized? It goes on and on, and most solutions have negative consequences for a given use case; but to make it meet all the use cases, one ends up with such a complex setup that nobody wants to use it.


For most business apps with power users, being able to present data in huge, fast-scrolling, filterable, customizable tables with colorful styling for elements that require attention (un-acked errors, etc.) elicits more joy from users than anything else.

A close second is when you enable performing actions on multi-selected elements.

"We could never do that with [competing product]."

If at any point in your product lifetime you discover, probably accidentally, that some users are habitually exporting data from the product and importing it into Excel, well, first, you should panic, and second, your product managers need to become best friends with those users. For every one of them, there are a dozen who wish they could do the same thing but can't figure out how, or feel like it's too much work, and an order of magnitude more who would have their minds blown if you put that capability in front of them.


The Adobe Flex data grid had it all figured out. It was by far the best data grid component I've ever had the opportunity to work with.

Back in 2010 I contracted for an aircraft spares supplier in the UK. All their work was based on grids. The main ones were a supply order and a purchase order. Creating either order was done through an editable grid with many items per order, one per row. Tabbing through to the end of the row added a new editable row, focused the part number field, and allowed the user to start typing the part number. While typing, suggestions would show up. Once the part number was in, they would move through the columns and enter things like price, quantity, and unit of measure, and select the oldest batches in the case of a sales order for an item with a shelf life. In the case of a PO, it would automatically bring in items booked in against back orders.

It was awesome. It worked flawlessly regardless of the resolution, I could put any Flex component in a cell with just a few lines of ActionScript, and Flex would scale everything properly with its percentage-based sizing.

It’s fascinating that 11 years later, we still don’t have anything comparable to Flex. There are some good grid components out there, but none are as simple to program as the Flex grid.


Tables of a static size, or small dynamically generated ones, are fairly straightforward because you just pay the entire cost of layout upfront. The layout algorithms themselves are a feature farm: CSS is complex because it supports so many ways to do things across a multitude of assets. If it's something like a game GUI, the layout can be bespoke to a particular stylistic choice and specific asset types. It's still not a trivial endeavor (headaches always abound when trying to align graphical elements in a top-down composition), but more of the programming load goes to text rendering, selection, and input than layout once you drop CSS-style flexibility.

Lists and tables with both dynamic sizing and scalability requirements (think the "infinite scroll" style of presentation, plus insertion and removal of elements, plus nesting, plus accordion folding) create all sorts of challenges in trying to amortize the processing costs. That's where layout code can get really out of hand.

However... if you opt to paginate by a fixed element count, most of the challenges go away because there's only so much layout you can fill a single screen with, and that lets you revert to brute force. At that point it's a question of "OK, how much scrolling do we need?" Scrolling itself has gained a reputation for being mostly detrimental to UX, so in terms of cost-benefit for productivity it would be one of the first things to go.

In a lot of ways frameworks create the problem for themselves by trying to do everything.
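To make the brute-force option concrete, here is a minimal TypeScript sketch against the DOM (PAGE_SIZE and renderRow are illustrative assumptions, not any particular framework's API): layout work is bounded by the page size no matter how large the data set grows.

  const PAGE_SIZE = 50; // assumption: roughly one screenful of rows

  function renderPage<T>(
    rows: T[],
    page: number,
    renderRow: (row: T) => HTMLElement, // hypothetical row factory
  ): HTMLElement {
    const container = document.createElement("div");
    // Only this slice is ever laid out; the rest of the data set is untouched,
    // so layout cost is O(PAGE_SIZE) regardless of rows.length.
    for (const row of rows.slice(page * PAGE_SIZE, (page + 1) * PAGE_SIZE)) {
      container.appendChild(renderRow(row));
    }
    return container;
  }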


This article is very much focused on infrastructural stuff, and doesn’t talk at all about layout, widgets, design, theming, etc; these are all totally challenging areas that are out of scope for this post. Maybe one day…


I have seen them abused, though. Like, let's take our entire database and throw it into a table on the main window.

Sure, it's great for the devs who only work with test databases of 100 records, but that main client who has thousands of records just grinds to a halt.


Many years ago I used to work on the Cappuccino project, which was an open-source implementation of the Mac Cocoa frameworks in JavaScript (technically Objective-J, but that's not important).

In fact I was one of the main people responsible for implementing the TableView component. We modeled the rendering on the iOS version, which was aggressively optimized to be incredibly performant when rendering large data sets.

There were a handful of strategies we used, including view reuse: only rows/columns not clipped by the surrounding scroll view would be rendered, and cells themselves were reused (when possible) as the user scrolled, which saves a lot of time allocating and garbage-collecting memory (and, in our case, DOM nodes).

But beyond that, we were very careful about documenting the algorithmic performance of the whole component. Most operations could be done in constant time; some were O(n), but n in our case was the number of items currently being shown, not the whole table, most of which wasn't even in view.

If you're building a UI framework with the expectation that it scale for your users, it's not that hard to write it in such a way that it handles thousands of records. It just requires that the implementor make performance a consideration from the beginning.
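Roughly, the view-reuse strategy described above looks like this sketch (TypeScript against the DOM; the names are hypothetical, not Cappuccino's actual API):

  class RecyclingTable {
    private pool: HTMLElement[] = [];              // free list of hidden row views
    private live = new Map<number, HTMLElement>(); // row index -> mounted view

    // Assumes the viewport has position: relative, overflow: auto,
    // and a spacer sized to rowCount * rowHeight to make it scrollable.
    constructor(private rowHeight: number, private viewport: HTMLElement) {}

    onScroll(scrollTop: number, rowCount: number): void {
      const first = Math.floor(scrollTop / this.rowHeight);
      const visible = Math.ceil(this.viewport.clientHeight / this.rowHeight);
      const last = Math.min(rowCount - 1, first + visible);

      // Recycle rows that left the visible range: O(visible), not O(rowCount).
      for (const [index, view] of this.live) {
        if (index < first || index > last) {
          this.live.delete(index);
          view.style.display = "none"; // keep the DOM node, just hide it
          this.pool.push(view);
        }
      }

      // Mount (or reuse) views for rows that entered the visible range.
      for (let i = first; i <= last; i++) {
        if (this.live.has(i)) continue;
        const view = this.pool.pop() ?? document.createElement("div");
        view.style.position = "absolute";
        view.style.top = `${i * this.rowHeight}px`;
        view.style.display = "";
        view.textContent = `row ${i}`; // real code would bind cell data here
        if (!view.parentElement) this.viewport.appendChild(view);
        this.live.set(i, view);
      }
    }
  }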


HTML+CSS+tables is a massive weapon in the GUI wars, probably responsible for most of the power of the web and its current conquest/war/campaign into the desktop via Electron and associated web-as-app frameworks.

And with good reason: the ludicrous complexity necessary to specify presentation details is mostly solved in HTML+CSS, backed by an industrial-scale army of programmers and designers who know it.

At a minimum, the path forward for GUI frameworks is to leverage/support HTML+CSS, which is a huge lift to support.

That leaves the issue of HTML+CSS land's inherent ties to JavaScript versus the language you want to support, and the still-nascent integration of HTML+CSS's document/page-oriented model with the windowed style of a true GUI app.

I do not envy a new language having to produce a ground-up GUI framework. Arguably harder to do than muscling a new programming language into relevance.


Well, I would argue that any even slightly complex use of the kind of tables that I feel must be meant here (though it isn't 100% clear) quickly makes you realize that pre-formatting all of the table cells makes your app much too slow, and so you at least need to lazy-load content for cells as they scroll in, if not carefully reuse a free list of GUI object hierarchies for the cells themselves... As much as I truly do love HTML+CSS as a rendering layer, it is honestly a massive liability for this extremely core use case (one so important on mobile screen formats in particular that I would argue that how your framework supports it should be one of your primary concerns when architecting your API).


Tables are one of the weakest parts of HTML. The lazy loading in e.g. Chrome History or Yahoo Mail is an awful, jumpy experience. Even the very best implementations, like Fastmail, still break basic browser features including Find and Print.


I'd agree for naive uses of HTML tables, but HTML+CSS incorporates sizing, overflow rules, etc.

Which is what you'd need in any general arbitrary-content infinite-stream model.

What is the alternative?


Do we need scrollbars?

What about popping the cell into a tab or overlay, like we do with images, or Apple's 3D Touch?

I want to drill into specifics, not fiddle with scrollbars, which just hide the data I started with as I scroll.

Pop that cell into a one-cell view that gives me the full story.

GUI habits need a rethink, not just a new backend.


I think you are picturing something very different than I am when you think of a use case for scrollbars? If I have a table with more items than can reasonably be displayed on screen at once, nesting the information isn't going to help. Flat is what I want; scrolling is what allows me to keep it flat when there's too much data to fit.


> sooner or later, someone is going to want to display some HTML (or an actual website!) within their application. We’d really rather not bundle an entire browser engine to accomplish this

Theoretically speaking, yes, but in practice it seems most people just gave up and simply ship their apps with embedded Chromium, which is why now almost every single piece of software has an installer that's at least 500 MB large.


> which is why now almost every single piece of software has an installer that's at least 500 MB large.

I know pessimism about software is in fashion, but exaggerating the problem doesn't help. The actual size of the latest version of Electron (on Windows) is 181 MB uncompressed, 77 MB zip-compressed, and 51 MB 7z-compressed. Not great, but again, let's not make it look worse than it is.

Also, there is hope. Microsoft's WebView2 [1] means that Windows has a modern, shared, embeddable web rendering engine for the first time in several years. And Tauri [2] is making it easier for web developers to take advantage of WebView2 on Windows, and WebKit on Mac and Linux, to create lightweight native applications using web technologies.

[1]: https://docs.microsoft.com/en-us/microsoft-edge/webview2/

[2]: https://tauri.studio/en/


I have two Electron apps on my computer, one is 470MB and the other 511MB.


Which ones? I have two too (Obsidian and IRCCloud), weighing in at 88 MB and 52 MB respectively. So it does seem it's possible.


That doesn't mean it's Electron taking up the space. Native apps are often much larger than that.


At the moment, realistically using WebView2 requires shipping a WebView2 Runtime, either a fixed version or a bootstrapper. That's still over a hundred MB. I don't quite understand why that isn't being included with Edgeium, maybe it will be there in the future, but as of now, having Edge installed is no guarantee that any software trying to use WebView2 will work.

I've got too many scars from needing to ship shared Microsoft runtimes in the past to be entirely happy about this current situation.


Webview is not a replacement.

Webviews for GUIs are the worst of all worlds. You're packaging up two separate event-based runtimes, Chromium and Node, and have no ability to interface with V8, meaning you're actually losing significant performance by having to serialize/deserialize data from Node to the browser's window, where it's then deserialized on message in the browser's packaged JS engine.
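To illustrate the boundary being described: everything crossing between the backend runtime and the webview is flattened to bytes (typically JSON) and re-parsed on the other side; nothing is shared. A generic sketch, where postToWebview and the message shape are hypothetical rather than any specific framework's API:

  // Backend (Node) side: nothing here shares memory with the page.
  import { readFile } from "node:fs/promises";

  async function sendRows(postToWebview: (msg: string) => void): Promise<void> {
    const rows = JSON.parse(await readFile("rows.json", "utf8"));
    postToWebview(JSON.stringify({ type: "rows", rows })); // serialize once here...
  }

  // Webview side: ...and pay the parse cost again in the page's own JS engine.
  declare function renderTable(rows: unknown[]): void; // hypothetical
  window.addEventListener("message", (event: MessageEvent<string>) => {
    const msg = JSON.parse(event.data) as { type: string; rows: unknown[] };
    if (msg.type === "rows") renderTable(msg.rows);
  });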

It makes no sense to treat your desktop GUIs as closed sandboxes; what's the threat model?

Electron is not simply Chromium; it has a bunch of optimizations, fixes, and prebundled functionality.


This entire thread is about which webview to use as an actual web view in an otherwise native application, not about trying to use them as apps.


Shipping the WebView2 runtime "in-box" in Windows is supposed to start with Windows 10 21H2 and Windows 11. I believe that soon after both release, (Chromium) Edge will also start shipping it by default down to previous Windows 10 versions still in security support.


I've got a lot of scars from MS... but the bootstrapper is 1.7MB. It will then pull in WebView2.

WebView2 btw is awesome (I've spent a lot of time with it already, see my profile if you want to know why)


Well, bashing Electron is popular, which is why even a blatantly incorrect comment automatically becomes the top one, as long as it denigrates Electron.


Just wanna give a shout out to Tauri's mobile site; it has a "Learn More" button prominently on the first screen, ABOVE the standard "Get Started" button most projects use. I greatly appreciate the opportunity to understand what I'm starting before doing it.


Webviews can't access the V8 engine, have different implementations on every platform, and are worse in performance.

As a business, would you save 120 MB in file size, or thousands of hours in development time?

Electron isn't even unique in this. Blender is over 200MB now. Photoshop and Autodesk products have been big for a long time.


I'm still getting my head around Electron development, but only the main process in Electron is really supposed to access the V8 engine, right (if you turn sandboxing on, anyway)? With sandboxing on, you're still doing IPC from your render context to some backend, which for a webview would be whatever framework is hosting the webview, right?

Is the V8 performance benefit really that high compared to something like C# just calling system calls?
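For reference, the sandboxed round-trip being asked about looks roughly like this in Electron (these are real Electron APIs, but the channel name and payload are illustrative):

  // main.ts (main process): handle a request; the return value is
  // serialized (structured clone) before it reaches the renderer.
  import { ipcMain } from "electron";

  ipcMain.handle("read-config", async () => {
    return { theme: "dark" }; // illustrative payload
  });

  // preload.ts: expose a narrow, promise-based API to the sandboxed page.
  import { contextBridge, ipcRenderer } from "electron";

  contextBridge.exposeInMainWorld("api", {
    readConfig: (): Promise<{ theme: string }> => ipcRenderer.invoke("read-config"),
  });

  // renderer (no Node access): const cfg = await window.api.readConfig();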


I’d assume it’s more about the tooling and libraries for JS. If I want to build an HTML based UI, I have thousands of JavaScript libraries alongside thousands of developers who know how to use them. If I want to build HTML that calls into C# I’ll need to do a lot more work myself.


Blender includes art assets like HDR cubemaps.


And so, so, so much functionality: video editor, modeler, several renderers, image editor, Python text editor, sculpting software, importers, video/audio mixer, compositor, graph editor, scripting engine... monkey model. With so much functionality, comparing it with IRC clients is not fair.


I would ship a daemon/service that makes use of the user's installed browser, if a web interface there must be.


And you would then be back to debugging small intricate differences across browsers, which is what people ship Electron to avoid.


Since I don't want to be part of Chrome's marketing force, that is a very good thing to do.

Either go native or Web standards.


That's nice technically, but awful in terms of usability.


Only if the developer doesn't know any better.

It can replicate Electron UI/UX without the bloat or Chrome marketing.


Why?


Would WASM solve the performance issue? One can imagine writing a webview GUI in a compiled language in order to get the speed and size benefits.


No, that would actually be worse. Imagine a circumstance where you need to use a library to fetch some network data in a way you can't do in a browser. Now you have to import the whole data stream via text serialization/deserialization into the browser's sandbox.

Wasm is also slower and bulkier, because you'd have to compile your entire runtime into Wasm and then ship it in the browser.


> sooner or later, someone is going to want to display some HTML (or an actual website!) within their application. We’d really rather not bundle an entire browser engine to accomplish this

I read that and cannot help thinking: yeah, well, sooner or later someone is going to want to play an MP3 file. Should all GUI toolkits now include multimedia playback support?

Someone is going to want to display and edit a PSD file, does it mean the GUI toolkit should be able to handle the Photoshop file format?

I get where it's going (I've been a GUI developer myself, working with Qt which does include a browser engine), but to be honest I think that already crosses the line from GUI framework to do-it-all framework.


Qt has a reduced version of HTML and CSS that it uses for Widgets styling, which is why you can make words bold, or underlined or whatever, in labels and pretty much everywhere in the GUI. This is very useful to have. Many more basic toolkits include no such functionality, or have a divide between "plain text" widgets, which can do no formatting at all, and a real browser-ish HTML widget. (There's also capital-R-T Rich Text used in some parts of Windows but you really don't want to go there)


I mean yes? Because either the toolkit does it or the platform does it and the toolkit wraps the functionality.

I think it’s easier to flip the script. If you didn’t include the ability for your toolkit to display HTML with a browser widget how do you imagine your user proceeding? I don’t think most end-programmers would be expected to know how to take WebKit and maybe a JS engine and make it into a custom widget in your framework.


You forgot the most important: to send email. (jwz)


If I watch my non-techie family members use their computers, they hardly use anything that isn't either in the browser or using electron.

I used to see a few things like Acrobat Reader (now killed off by pdf.js), Word (the college seems to now launch the web version by default), Skype (now Electron MS Teams) etc.


To be fair, the only non-Electron (or browser) thing I can think of that I use for work on a regular basis is the terminal. VSCode is Electron, Joplin is Electron, Teams is Electron.

To be honest, I've done a complete 180 in my views on Electron. As long as the application is well made and optimised (which, to be fair, is somewhat rare) I often prefer it to native. People talk about vendor lock-in when it comes to their data, but very few people seem to consider OS lock-in as another, just as important factor. If I'm using a native app that only runs on Windows, that'll discourage me from moving over to Linux should the want ever arise (and it does). Obviously there are exceptions like Sublime, which are both native and cross-platform, but like the well-made Electron apps, they're an exception to the rule.


Do you find Teams well made and optimized? Everyone I know hates it.


God, no; I was referring more to VSCode (and Obsidian, which I use personally).

Don't get me wrong, it's better: you can always instantly tell how much work has been put into an Electron app, especially on Windows (and when you have a dark theme on), because the poorly made ones will use a default white titlebar, whereas those with some effort will have the titlebar match the rest of the application. So Teams has that going for it.

But optimisation-wise, yeah, it's not pretty. I used to use Linux at work, and it was solely responsible for my twice-weekly crashes (I knew because sometimes, if I were quick enough, I could stop the crash before my laptop slowed to a halt by force-closing Teams). On Windows, it's better, but that's not saying much.


They are moving it to Edge Webview and off of Electron. It's supposed to use 1/2 the memory.

https://tomtalks.blog/2021/06/microsoft-teams-2-0-will-use-h...


It's trash, but mostly because it's got that standard Microsoft schmear on it, not because of Electron.


The fact that this is relevant to the discussion means we aren't trying to advance technology anymore, just make more sales.


Chromium is not 500 MB, and at this point the discussion is a dead horse. Every time it's brought up, someone makes up new numbers because they've never worked with it and don't understand why it's used. If you don't know, don't comment; it's that simple.

The actual bundled size is about 120 MB, and it can be lowered. Actual memory usage is about the same, but is relative to how much memory your system has overall.


My solution to this (it's been done before) is to use the existing installed browser engine (not the system webview). So far I only target Chrome, but since the way I connect to it is over the Chrome DevTools Protocol, which is somewhat compatible with the Remote Debugging Protocol[0] that Firefox is doing, this is a reasonable approach.

So far my "tool" to do this is simply a template repository with some conveniences, providing in essence a skeleton for these types of apps. I hope to flesh this out a little more, and expose a much richer API, as well as convert some of my existing popular apps (like 22120[1]) to the "framework".

The benefit of this is that GraderJS has a built-in "app builder" that can create a cross-platform binary (excluding or ignoring the necessity on macOS, and near-necessity on Windows, to sign your executable somehow) that lets you display your UI in JS/HTML/CSS using the already-installed browser engine, as well as run code in NodeJS and use the rich APIs[2] of the browser engine itself. I'm really happy with this project and think that, even though it's small now, it will in time become my most popular and powerful one: even bigger than my remote browser and popular web archiver.

Just give it time! :)

[0]: https://firefox-source-docs.mozilla.org/remote/index.html

[1]: https://github.com/i5ik/22120

[2]: https://chromedevtools.github.io/devtools-protocol/tot/Brows...

The GraderJS: https://github.com/i5ik/graderjs
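For readers curious what driving an already-installed Chrome over the DevTools protocol looks like, a rough sketch using the chrome-remote-interface npm package (the debugging port and local UI address are assumptions, and this is not GraderJS's actual API):

  import CDP from "chrome-remote-interface";

  // Assumes Chrome was started with --remote-debugging-port=9222,
  // e.g. by the app's own bootstrap code.
  async function openAppUi(): Promise<void> {
    const client = await CDP({ port: 9222 });
    try {
      const { Page } = client;
      await Page.enable();
      // Point the user's own browser at the app's local UI (hypothetical address).
      await Page.navigate({ url: "http://127.0.0.1:8080/" });
      await Page.loadEventFired();
    } finally {
      await client.close();
    }
  }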


The article also talks about MS Edge not being cross-platform. Well... that's about to change [1]. There's a beta for Linux, macOS, Android, and iOS. What surprised me most about all this is that MS even provides Edge for retired versions of their own OS (Windows 7 & Server 2008). I'm very impressed by that.

For Windows you don't have to include the WebView2 installer; your installer can basically include a "stub", the evergreen bootstrapper [2], that then goes out to the internet and grabs the latest version.

edit: The great thing about WebView2 is that Microsoft keeps their control up to date. (At least on Windows, not sure what that is going to be on other platforms, time will tell)

[1] https://www.microsoftedgeinsider.com/en-us/download/?platfor...

[2] https://developer.microsoft.com/en-us/microsoft-edge/webview...


Just to add, the Evergreen option also doesn't require multiple installs if multiple programs are using it. The client machine just needs a single install that all programs will share [1]. Further, if the client also has Microsoft Edge installed and up-to-date, Edge and WebView2 will be hard-linked allowing them to share the same libraries and processes, further reducing the performance, memory, and storage footprints.

[1]: https://docs.microsoft.com/en-us/microsoft-edge/webview2/con...


I'm hoping that it improves performance ;)

Thanks for expanding on that part. It's really nice how Microsoft deals with these updates.


>I'm very impressed by that.

Why? Chrome supports those platforms. There's no reason for Edge to not.


MS does not normally provide support for retired versions of their platforms. They have no direct financial incentives to do so.


Edge may become cross-platform, but it's still just a skin on top of Chromium.


We didn't give up.... we use one of these three when someone needs to be shown a web page:

  // Windows:
  ShellExecuteW(NULL, L"open", (LPCWSTR)wuri, NULL, NULL, SW_SHOWNORMAL);

  // macOS:
  [[NSWorkspace sharedWorkspace] openURL:nsurl];

  // Linux (any desktop with xdg-utils):
  ::execlp ("xdg-open", "xdg-open", s.c_str(), (char*)NULL);


> now almost every single piece of software has an installer that's at least 500 MB large.

First, that's 500MB initially. It goes up from there.

Second, that works only if a small number of apps are bundling Chrome, and RAM can keep up; once too many Electron apps proliferate, people will start installing fewer apps.

If your, I dunno, weather widget starts consuming 1 GB to show me the current weather, I'm going to trash it if it conflicts with my IDE.


Yeah. But what are the options, really, for poor multi-billion-dollar companies making GUI-based applications? They have all "carefully evaluated strategic options" and come to the same conclusion: Electron is the way to go.


The article deals with this in a later section and has an interesting discussion about the tradeoffs. Note that you don't have to ship Chromium with every app, you can also go use web rendering engines supplied by the platforms.


And why Chrome's market share keeps increasing.


Luckily, Google, our beneficent overlord, would never use it for anything nefarious; after all, their motto is "Don't be evil". /sarc


was*


I think that was included in the "/sarc".

[EDIT:] I thought it was for quite a while after it wasn't any more. [/EDIT]


Well, you can use the browser already installed on the OS if it's just to display content.


A) But then you need to install a web server component with your app. (And make sure that starts with it, keep track of that process staying alive, etc.; a minimal sketch of this follows below.)

B) As was mentioned, I think both in the article itself and in several comments here, you then get to contend with the same problem web devs do: browser differences.
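For what it's worth, (A) can be quite small. A hedged Node sketch of "serve the UI locally and open the default browser" (asset serving and process supervision omitted):

  import { createServer } from "node:http";
  import { exec } from "node:child_process";

  const server = createServer((req, res) => {
    res.setHeader("Content-Type", "text/html");
    res.end("<h1>App UI</h1>"); // a real app would serve its bundled assets
  });

  server.listen(0, "127.0.0.1", () => {
    const { port } = server.address() as { port: number };
    const url = `http://127.0.0.1:${port}/`;
    // Same idea as the ShellExecuteW/openURL/xdg-open snippets elsewhere in the thread.
    const opener =
      process.platform === "darwin" ? "open" :
      process.platform === "win32" ? 'start ""' : "xdg-open";
    exec(`${opener} ${url}`);
  });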


Great list of the complexity encompassing a proper "GUI" framework. I feel like text / rich text / unicode alone is a huge project (if you include editing, especially of large documents).

However, one thing not mentioned in this list (which I feel is only becoming more important) is that a modern GUI framework also might want to support iPad / iOS / Android.

This complexifies the undertaking by a huge amount, as the differences between macOS/Windows/Linux are much smaller than the distance between, say, iOS and macOS. Apple has been going in this direction with SwiftUI, where the same code can run on different systems and behaves differently depending on the platform. However, by doing so, this "common denominator" of all platforms automatically reduces the flexibility of the framework, as every "component" needs to be cut wide enough to intelligently behave correctly per platform.


Getting rich text right is a huge undertaking. I spent a lot of time on getting text working really well on a few GUI projects I worked on.

This is the last big GUI/application framework I coauthored. It's still being maintained 10 years later. It's primarily been used for creative applications, but could just as well build cross-platform, media-rich applications. https://github.com/Downstream/ds_cinder


And then there is the question of whether your GUI framework is even allowed on iOS. For example, Firefox can't use its internal render engine on iOS, but is forced by Apple to use Safari's library.


They can use their internal render engine. What they can't use is their JavaScript engine, because a modern JIT requires taking compiled JavaScript code in memory and marking it as executable (i.e., taking a memory page of data and telling the system that it is code now, thus allowing it to be executed). This goes against the iOS security mechanism/sandbox, because a bug here would allow executing arbitrary remote code in the context of the current process.

You can even ship a custom JavaScript engine on iOS if you disable JIT; it will just be slow. Or you do it like iSH (https://ish.app) and implement custom execution via hard-to-debug, genius assembly.

As an example, Ejecta is a custom JavaScript library for iOS, just without the JIT to make it fast. https://impactjs.com/ejecta


> You can even ship a custom JavaScript engine on iOS

Yes, but you can't run code downloaded from the internet or supplied by the user. So you are prohibited from building your own browser, even without a JIT.

Unless you are Roblox for some reason, then you can run all the downloaded code you want! Sucks to be a Roblox competitor, sorry no running downloaded code for you.


Apple has long been known to "bend" certain rules for clients of the app store of a certain size.

https://www.theverge.com/2020/6/17/21293813/apple-app-store-...


Pro tip for Roblox competitors: Make a product that children really, really like, and make sure some of those children are those of executive leadership at the company you'd like to influence.

You'll get exceptions too, and you don't even have to be a unicorn (Roblox got an exception well before that, after initially getting banned from the App Store).


I am interested to hear more about the history of this exception. Roblox is now a public company worth 40 billion dollars and I believe a large part of that valuation can be attributed to their privileged and protected position in the iOS app store. Did they disclose that in their S-1?


Fully agree here, these special exceptions for big corp are just terrible. It has such a terrible bribery / corruption feel to it.


Where do they draw the line for what is "code"? Does HTML/CSS not count? Browsers will take complicated actions based on that downloaded data. Even PDF is up there if you ask me...


> They can use their internal render engine. What they can't use is their JavaScript engine, because a modern JIT requires taking compiled JavaScript code in memory and marking it as executable

> You can even ship a custom JavaScript engine on iOS if you disable JIT; it will just be slow.

Wouldn't this mean that Mozilla could theoretically ship "real" Firefox (Gecko/XUL/SpiderMonkey) on iOS after the SpiderMonkey wasm port[0][1] lands?

> Instead of working directly on the machine’s memory, the JS engine puts everything—from bytecode to the GCed objects that the bytecode is manipulating—in the Wasm module’s linear memory.[2]

[0]: https://spidermonkey.dev/blog/2021/07/19/newsletter-firefox-...

[1]: https://bugzilla.mozilla.org/show_bug.cgi?id=1701197

[2]: https://bytecodealliance.org/articles/making-javascript-run-...


Outside of sideloading, probably not. The rule Apple has isn't "only we use JIT", it's "only we load code".

The lack of third-party JIT entitlements (outside of a debugger entitlement in iOS 14.0 through 14.2) is downstream of the prohibition on loading code. For the record, "loading code" implies that the code is coming from somewhere other than the current app bundle. Embedded JavaScript, Python, Mono, Flash, or any other interpreters running code that ships with the app are fine. (At one point, Apple actually threatened to ban these. It was concomitant with "Thoughts on Flash".) However, running code by any means from any other location, whether a third-party server or even user-selected files, is very much considered malicious behavior by Apple.

The contours of this rule are particularly strange, though. It's not just a pure ban on all third-party apps that interact with third-party code. You're allowed to have third-party browsers. But they have to use Apple's WebView control to get access to a browsing engine. Yes, you can compile a no-JIT build of Gecko, but Apple will never approve it. In fact, Apple goes so far as to even prohibit projecting new web APIs through the WebView interface. (I have no idea how they're going to apply this rule to Safari extensions in iOS 15.)

Of course, Apple also wants to sell iPads as computers, which means that eventually someone is going to want to program on their iPad. This has resulted in quite possibly the strangest rule exception I've ever seen on the App Store. You're allowed to load code if it's code that the user can see and modify. The explanation being that these aren't "code loaders", now they're "learn to code apps". So, e.g. Swift Playgrounds and third-party JavaScript or Python REPLs are perfectly fine. But, of course, all of that has to run at interpreter speed.

Side note: Apple is hilariously inconsistent with this rule, when it comes to emulation. Riley Testut proposed a ROM whitelist on GBA4iOS which App Review said they were OK with, but then changed their mind on by the time it was submitted to the App Store. iSH - an app that emulates x86 Linux binaries - was approved, pulled, appealed, and then re-approved under the "learn to code" exception. iDOS 2 - an app that does the same thing but with DOS apps - was approved, and then pulled with a rejection letter that strongly implicates iSH.


It's really hard to beat an embedded browser if you're doing a desktop GUI app. Yeah, I know, I know, there are downsides, but the upsides are compelling: very mature technology, can be very performant for many types of apps, extremely customizable, great tooling is available, huge resource pool of proficient developers, and the UI itself is more or less automatically consistent across OS versions and platforms.

Building a desktop app with an embedded browser always feels dirty, but honestly the alternative has always ended up being worse for stuff I've worked on. Buggy frameworks, missing features, cross platform support that is only 90% there, dealing with requirements for custom widgets, cost of bringing devs up to speed on it, etc.

Interestingly, a compelling reason for a non-web approach used to be that having a native look and feel was a big deal, but over the past decade it seems like many apps have eschewed this anyway. Most programs built in GTK seem to look dated, and on Windows things have morphed so far away from the Win32 look and feel that nearly everything modern looks like it might be a web page anyway.


> It's really hard to beat an embedded browser if you're doing a desktop GUI app.

You could also just make it a website and stop there unless there is some native hook you absolutely cannot manage via HTML5.

We started our product as a mobile app because of concerns over being able to access native device capabilities, but we have since reached a point where native doesn't make much sense compared to the amount of pain and suffering it causes us. Just the codesign process for iOS alone is not really worth the price of admission. This isn't even scratching the surface of xcode and the nightmare that is that ecosystem.

The only "native" thing we really need is the camera, and we have discovered a few reasonable shims for this particular concern (i.e. upload photo as a file).


Is the idea in a case like this to go full website, or to use something like React/Vue to build a site that's just a JS+HTML front end plus API calls, and then ship a shim of an app which has a webview and the JS+HTML locally, and makes the same network API calls? Then it seems like the build process for native could be just about completely automated, or at least reduced to its basest components, and it's just dealing with app stores.

Is this a normal thing that's already done? As is probably obvious, I'm not a front end or app developer.


Our long term goal is to go all-in on server-side rendered UI.

For reference, we use server-side Blazor today and feel this is still not extreme enough for us.

What I would like to see is something where all browser events are piped to our server and we send basic WebGL draw commands back to the client. This would give us pixel-perfect control over how the application presents on any device without worrying about a particular vendor's quirks. Everyone's implementation of a web browser is a little different, but when you get down to the primitive behavior of basic WebGL/canvas usages, the consistency is very strong across all platforms.

In order to accomplish this, the server would need to know about each client's viewport dimensions, etc., so it's not like it's a free lunch on system resources. This is more about total control and maximum security.
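A client-side sketch of that architecture, with events going up and draw commands coming down (the wire format is invented for illustration, and a 2D canvas stands in for WebGL):

  // Invented wire format: the server sends primitive draw ops as JSON.
  type DrawOp =
    | { op: "clear" }
    | { op: "rect"; x: number; y: number; w: number; h: number; fill: string }
    | { op: "text"; x: number; y: number; value: string };

  const canvas = document.querySelector("canvas")!;
  const ctx = canvas.getContext("2d")!;
  const socket = new WebSocket("wss://app.example/ui"); // hypothetical endpoint

  // The server needs each client's viewport dimensions to lay out for it.
  socket.onopen = () =>
    socket.send(JSON.stringify({ event: "viewport", w: canvas.width, h: canvas.height }));

  // Events up: the server owns all UI state, layout, and hit-testing.
  canvas.addEventListener("pointerdown", (e) =>
    socket.send(JSON.stringify({ event: "pointerdown", x: e.offsetX, y: e.offsetY })));

  // Commands down: the client is a dumb rasterizer, identical on every browser.
  socket.onmessage = (msg: MessageEvent<string>) => {
    for (const cmd of JSON.parse(msg.data) as DrawOp[]) {
      if (cmd.op === "clear") ctx.clearRect(0, 0, canvas.width, canvas.height);
      else if (cmd.op === "rect") { ctx.fillStyle = cmd.fill; ctx.fillRect(cmd.x, cmd.y, cmd.w, cmd.h); }
      else ctx.fillText(cmd.value, cmd.x, cmd.y);
    }
  };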


> What I would like to see is something where all browser events are piped to our server and we send basic WebGL draw commands back to the client.

Do you have a plan for also handling accessibility if you go in this direction?


> Do you have a plan for also handling accessibility if you go in this direction?

It is something I would certainly consider after meeting the most urgent objectives. There is no reason I couldn't consider offering alternative views for certain users which contain things like high-contrast or screen-reader enhancements.

The initial scope of this work would be for a privately-used B2B application. Accessibility in this context would be driven by our business customers' specific requirements.

That said, providing accessibility is more of a business/application problem than a UI framework problem in my opinion.


So from where I sit, and mind you I'm not claiming to be perfect about accessibility but I spend a good bit of time thinking about it, this answer gives me shivers. 'Cause like:

> It is something I would certainly consider after meeting the most urgent objectives.

In practice, this is pronounced "no."

> That said, providing accessibility is more of a business/application problem than a UI framework problem in my opinion.

In practice, this is pronounced "I don't care".

An application cannot describe "a button with the caption of 'OK'" without a UI framework that can and does tell it that there is a button, that it has a caption, and what that caption is.

To be honest, accessibility is a team effort from everyone and things are already bad enough without actively making them worse.


> In practice, this is pronounced "I don't care".

I appreciate that you're taking a stand for accessibility, but IMO, that kind of statement just adds to the shrillness of so much of our online discourse. We can discuss the pros and cons of various approaches to UI, and what it takes to make accessibility practical, without venturing into the topic of whether a developer cares about accessibility, which is likely to just make them defensive.


Ooh. Shrill.

I'm pretty okay with this as a stance, to be honest, because folks have been asking those of us who are fully abled, very nicely for a very long time, to stop making things harder for them. Blithe dismissal deserves to be received a certain way.


> folks have been asking those of us who are fully abled, very nicely for a very long time, to stop making things harder for them. Blithe dismissal deserves to be received a certain way.

I'm one of those folks (see my profile). I still ask nicely, and try to be nice even when responding to what seems to be a blithe dismissal, because I want to focus on solving the problem, without making developers feel unnecessarily uncomfortable about honestly discussing a sensitive topic. I still struggle with this, and might not have gotten it right in my own response to bob1029 yesterday, since that response got downvoted.

Also I'm not sure bob1029's original response is a blithe dismissal; he may just be honestly stating the decision-making process that makes sense in a business world where disabled people are a tiny minority. That sucks for us, but maybe I just have to have the serenity to accept what we can't change, and focus on what is feasible, such as reconstructing UI content and semantics using OCR and other machine learning techniques (like Apple is doing as an option on iOS), rather than trying to convince every UI framework and application developer to put extra work into accessibility. I've been resisting that conclusion, but I'm getting closer to accepting it.


I know you are, and I follow your stuff. :) My post was off-the-cuff a little sharper (to you, not to him) than I wanted it, because "shrill" is coded usually on this forum in a way you didn't mean it. Sorry for that.

I respect where you're coming from, and at the same time I don't think pointing out that putting something as the fourth-most-important priority means it will stay the fourth-most-important priority forever is "shrill". Because if you shove something baseline like a11y down the ladder like that, it's never coming up, and if characterizing that as "I don't care" makes somebody upset then it's probably a "look inward" moment, yeah? I think it's reasonable to demand that we as a profession not act to be noninclusive by default. Not, at this point, ask. Demand is OK. I think more must be expected of us as people, not just as technologists. The business world might indeed say "oh, small minority, screw them"--we must be ethically loaded as humane craftspeople to fight back against that.

It's kind of a money-where-my-mouth-is thing for me, and I'm very happy that my employer cares a lot about this. It's a video company, so there definitely are limits to what we are able to do for folks with sight impairment, but right now we're in the middle of an initiative for the conference we run to better enable hearing-impaired folks to enjoy and participate. It's not why I came to work here, but I'm glad we're doing it.


> things are already bad enough without actively making them worse.

So should I stop trying to improve things if I can't perfectly satisfy all potential stakeholders on day 1? Is incremental progress not good enough?


"Just add accessibility later" is the UI world's equivalent of "just add multiplayer later." Design for it, or it'll never get done and what does get done will be done poorly.

So my bet's on never satisfying them at all.


> It is something I would certainly consider after meeting the most urgent objectives.

What urgent objectives are met by replacing HTML with a server-side UI implemented on top of WebGL?

> The initial scope of this work would be for a privately-used B2B application.

That doesn't make accessibility any less important. Consider: if one of your business customers is trying to fill a position that involves using your application, and one of the candidates is blind (or otherwise disabled), a reactive approach to accessibility means that they would likely have to pass over that candidate, even if that candidate is otherwise well qualified for the job. Or an existing employee may have an accident or illness that makes them disabled. So I don't think it's a good idea to ignore accessibility, let alone do something that is sure to make it much worse, until a customer reactively asks you to pay attention to it.

> That said, providing accessibility is more of a business/application problem than a UI framework problem in my opinion.

I entirely agree with eropple's response to that. Also, our best hope of applications being more or less accessible is for the UI frameworks to automatically implement accessibility as much as they can.


I generally like using Qt apps, be it Linux, Windows or macOS. Development with Qt is also quite nice, with very well documented APIs. The only issue is that sometimes there is something really weird going on with fonts (on Linux at least). E.g. in Quassel sometimes text looks as if it went through a cheese grater. And even though I have all the necessary fonts, I don't see some Unicode characters properly (emojis mostly). In Gtk apps they are displayed perfectly. I've been told it has something to do with Qt limiting the number of fonts that it goes through, although I'm not really sure.

As for Electron, it is such a poor runtime. Chromium's C++ (not V8) garbage collector is so unsophisticated that it sometimes blocks for several seconds on lower-end devices. At $dayjob, I got reports about UI unresponsiveness[1], and after looking at the logs, sure enough: 2.5 s pause times several times, and otherwise mostly revolving around 15 ms, 50 ms, 70 ms, 110 ms, etc. - wtf. That's just bad press for garbage collectors. Add to that that each release consumes more memory than the one before. After that you look at all the different garbage collectors for the JVM and wonder who even makes the decision to go for Chromium and not the JVM, not to mention the quality of the languages that target those two platforms.

[1]: Not our app, but for us to debug. :|


> consistent across OS versions and platforms.

This is an upside for the developer only. And a downside for the user as the GUI will probably not be consistent with its OS.

> over the past decade it seems like many apps have eschewed this anyway

And I regret it.


> And a downside for the user as the GUI will probably not be consistent with its OS

Depends on what kind of "user". Many professionals that spend their day in software like the Autodesk suite of applications, Music production, Animation or other creative endeavors expect consistent GUI across OSes when using the same tool, not consistent GUIs within the specific OS for all tools.


And interestingly, none are written with web technologies; sufficiently complex GUIs are no good fit for that.


How many of those users ever use more than one OS?


How many of them might be following directions from someone or giving directions to someone on a different OS?

How often we use something and what type of things we're doing on it (work, play, etc) probably greatly affect how we think about this. I expect my tools to work the same no matter where I use them. For things I don't rely on to get essential work done, I care less, on a sliding scale.


I expect my tools to work well too, that's why they're almost never web-based.


Sure, but there's a difference between web-based and built using web technologies. Whatever other faults Electron apps have, they're fairly consistent across operating systems. For a tool (like a chat client) that's very appealing. It's all the other things that seem to be endemic to Electron apps (high resource usage, large size) that are the problem. Theoretically, if you make a webapp that works fine on Safari, Edge, and Chrome, you could use the OS-provided webviews on iOS, macOS, Android, and Windows (and Electron on other platforms, I guess) to ship local equivalents that are less problematic, at the expense of more engineering work (or maybe just a leaner, meaner Electron competitor).


>> consistent across OS versions and platforms.

> This is an upside for the developer only.

This is definitely not the case. For some apps/branding, a particular style is more desirable than making sure a checkbox looks 100% native on each platform. And even the things that benefit the developer are not upsides just for them (think lower initial design & dev costs, lower testing costs, lower costs of new features - all of which mean resources can be allocated elsewhere).

> the GUI will probably not be consistent with its OS.

Like I mentioned, there seems to be a trend away from this as a goal anyway (even within the set of OS-provided apps). Probably it's a combination of it not mattering as much as we originally thought it did, and because end-users now have a familiarity with a core set of UI concepts. There's a short list of things you do to make a button be "button-y", and users know to click on it. They just don't care if it looks different than the buttons in some unrelated app.


> There's a short list of things you do to make a button be "button-y", and users know to click on it.

Could someone please inform Microsoft of this, so maybe they could try that in "Skype for Business"?

I have to use the fucking thing at work, and it's even more afflicted by the fucking Flat Where Everything Looks The Same look than the rest of their apps and OSes. For instance, whenever I'm in a chat with someone and initiate an audio call by clicking on the round blue handset button, it pops up a helpful little tooltip/label in the form of a (squared-off, because Modern) speech bubble! to tell me that what I've just initiated is a "Skype Call". Then I sit there and wait for them to pick up for a while, before I remember that the fucking speech bubble IS A BUTTON, and that I need to click again to start a call.

Holy fucking goddamn shit! The absolute worst company at following Microsoft's UI guidelines is... Microsoft.

Another example: just a few hours ago I was switching back and forth between Excel and my text editor, copying stuff from one to the other. Half the time when I thought Excel had focus, it didn't -- because the fucking thing can't even de-highlight its bloody title bar! Their interfaces absolutely suck nowadays.


There’s more to being a native app than the look of a checkbox.


> > consistent across OS versions and platforms.

> This is an upside for the developer only.

Strongly disagree. How is a consistent user experience enabling developers to support more platforms at a higher level of quality going to lead to worse end-user experience?

To me: it's not, this is just anti-browser sentiment dressing itself up as enlightenment


> ... higher level of quality going to lead to worse end-user experience

This is just management speak, where degradation in quality is called "next generation". It is worse because it is a resource hog and slow even on high-end machines.

As a technical user, I at least understand what is happening. On the other hand, non-technical folks are unable to articulate their frustrations, which slick developers seem to take as users being satisfied.


> It is worse because it is resource hog and slow even on high end machines

And what of other factors? If it is worse in the dimensions you list but superior in:

* accessibility

* cross-platform capability

* debuggability

* library availability

* user familiarity

and more, then what does this matter? Each choice made in such a domain as this is picking a point in a high-dimensional tradespace, and to blatantly disregard this obvious perspective citing "management speak" and "slick developers" doesn't really inspire confidence you're arguing in good faith


Because it is not a consistent user experience, it is a consistent developer experience.

For the actual user, the experience is inconsistent with the rest of their system.


That's just one factor, and while it may be true, do you really think it is more important than dramatically higher overall developer velocity and a superior developer workflow, both of which contribute to more feature refinement, fewer bugs, faster updates, better support, etc., etc.? Definitely not, IMO. The fact that you are ignoring those other factors in your comments makes me think you are also ignoring them in your assessment of the options.


> For the actual user, the experience is inconsistent with the rest of their system.

This point is moot. If your argument is that each system has its own GUI semantics, and the developer fails the user by not developing within those semantics for that platform, then you are proposing the developer must do 1 unit of work per platform where previously it was 1 unit of work for all platforms. Therefore exactly 1 platform will be supported, and users on the unlucky ones will have exactly no experiences with this tool.


Sure, if you're strapped for resources and are content with making sub-par tools, that's what happens.

If you actually care about what you make and want to make it good, there are no shortcuts. You need to put in the work.


Or you can use cross-platform solutions which are themselves good tools, and disregard platform-specific semantics in favor of globally good UI/UX and let your tool stand on its own with no need for unnecessary, repetitive, error-prone platform porting.


> Or you can use cross-platform solutions which are themselves good tools

No such thing exists.

> and disregard platform-specific semantics

Again, this is a bad thing for users.


> > and disregard platform-specific semantics

> Again, this is a bad thing for users.

Can you make a supporting argument for this? You state it as fact. I do not see why globally strong UI/UX should be a bad thing for users just because it disregards some standards of unknown quality on an unknown platform. Most methods of interacting with applications today are standardized and have no platform reliance; it is unclear to me why a developer whose tool delivers value in and of itself would suddenly not be delivering value to users just because their UI implementation did not perfectly match said user's preferred OS' UI standards. Actually, it's perfectly clear to me that the opposite is true.


> I do not see why globally strong UI/UX should be a bad thing for users

Except "globally strong UI/UX" is usually just design wankery that totally ignores decades of usability studies.

> just because it disregards some standards of unknown quality on an unknown platform.

Yeah, because the bleeding Windows UI guidelines that were in force for decades and known by hundreds of millions -- or are we in billions territory? Probably -- of users are "some standards of unknown quality on an unknown platform" nowadays.

Pull the other one, it's got bells on it.


Maybe you have not actually used a good platform with good standards?


Maybe you shouldn't revert to facile claims which don't even support your point, such as implying that a single good platform with good standards justifies never building anything cross-platform ever in any context?


> For the actual user, the experience is inconsistent with the rest of their system

Hmm... is that actually true though? If so, in what way?

For example, right now I have open, among other things, Spotify, Visual Studio, Vim, Chrome, and the Windows 10 Settings app. It's a fairly big mishmash of UIs in terms of styling, but in terms of how I interact with them, there is very strong consistency.


Well, it depends. Some tools I want to be OS-consistent and unobtrusive. I don't care for the newish trend of everything having its own "dark mode" (none of which match each other), or the way this is offered as some great innovation when it's only a few pages of code to implement. I would much prefer to have better theming options at the system/WM level.

On the other hand, I have many specialized tools where the UI is superior that of the OS and if anything I would like everything else to look more like those. Ableton Live was an early pioneer of the kind of flat minimalism that people associate most often with Material Design, for example. At the time it was launched most music software manufacturers were still obsessed with skeuomorphism, albeit for entirely justifiable reasons because musicians often fetishize and want to emulate the sound of particular pieces of studio equipment.


Eh, some apps have been around longer than mobile GUI paradigms, to the point they've seen a few. As a user that's been around for a few of those as well, at this point I couldn't care less. If you're around long enough, everything changes anyway, so whether it's learning a specific app's navigation and conventions on install, or re-learning many small changes in all your apps, you're stuck with stuff feeling confusing and inconsistent at some point no matter what.

Personally, at least for popular apps, I prefer being able to explain how to do something to someone else and it just working the same way, regardless of what phone they are on, or if I'm lucky, to some degree even if it's between mobile and web.


Users trade a consistent GUI for programs that a) wouldn’t exist on their platform or b) would have a smaller feature set. So both developers and users benefit


> This is an upside for the developer only. And a downside for the user as the GUI will probably not be consistent with its OS.

Depends on how you look at it. I'm building an app and if it was not for Electron and web technologies, no user would get to use that app because it would not be built in the first place.


This is all quite true because an app has changed from what we knew about "desktop applications" in the past.

Nowadays: 1) it has to look good, and the web is a platform most UI/UX folks understand; 2) it has to provide a dead-simple UI that conveniently solves a problem; 3) it doesn't have to support all of the features of the underlying desktop platform; 4) it probably must have network connectivity, and therefore all of the quirks of network security.

The tens of millions of lines of c++ code called a browser are basically that infrastructure.


One of my favorite offline "classic" programs of all time is... Microsoft WordPad.

It's by no means great, but it's so lightweight and gets out of your way that it puts the modern online Microsoft Word and Google Docs to shame with its mere 16 MB memory usage for simple documents.

I wish more apps did that, even if they looked like a Windows app inside macOS or vice-versa

I wonder if releasing a new framework that doesn't try to look native, but still tries to optimize the flow for each OS (e.g., allowing a choice between the native file picker and its own), would be a reasonable alternative to Electron. Perhaps even with JS support, so the gazillions of libraries out there aren't totally lost.


Oh, I love it. I use LibreOffice for Calc, which is good enough and still open source, but Writer is a horrible word-processing experience. For text alone it's just aesthetically dissatisfying, but as soon as you need to include pictures it's an active barrier to productivity. More than once I've spent hours fuming over a Writer document with a looming deadline, then thrown it away, started from scratch in WordPad, and finished with time to spare.


I somewhat suspect that WordPad is the only MS application that actually follows all the MS recommendations about usage of Win32 API and Windows HIG/UX.


It's written in MFC, and the sourcecode is on github (https://github.com/microsoft/VCSamples/tree/master/VC2010Sam...)


Going a bit on the side track over here, due to MFC being mentioned.

Given the current state of C++/WinRT tooling, even the aging MFC is more productive for Windows applications.

It doesn't matter that it isn't modern C++ and is full of CWhatever classes; it is still more than good enough for those use cases where .NET isn't an option for whatever reason and C++ must be used.


"The WordPad sample demonstrates how to implement an application that imitates the functionality of WordPad, including the user interface elements and some of the capabilities." per the readme


Ah! Thanks for clarifying.


Strongly disagree. I am frequently using lower power hardware and Electron apps are always slow and consume way too much memory for what they are doing. Even something simple like burning an ISO file to a USB stick can drag a system down to its knees because someone just had to make it an Electron app.


I’ve consistently heard this. Is electron actually at fault or is it poor coding?


It’s only better when you don’t have enough time and resources to do a great job.

Using an embedded browser is inherently a compromise, but few companies can realistically support every platform natively without compromise. Cross-platform UIs are simply a different kind of compromise, and that's where using an embedded browser shines.


Hear, hear, but if we follow this route, can't we just agree that a browser is already present in the system and reuse it? "WebView apps", that is... If there is none, we can just show a prompt to install that single shared browser instance, akin to what Flash/Shockwave/Applets/Unity Web and similar tech did.
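
(A minimal sketch of that idea, using the zserge/webview single-header library; the window title, size, and URL here are placeholders, not from this thread:)

    #include "webview.h"

    int main() {
        // One window hosting the system's web engine as the whole "app".
        webview::webview w(/*debug=*/false, /*window=*/nullptr);
        w.set_title("My WebView App");
        w.set_size(1024, 768, WEBVIEW_HINT_NONE);
        w.navigate("https://example.com/app");  // placeholder URL
        w.run();  // blocks, pumping the native event loop
        return 0;
    }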


It's not that hard - all you need is for your app to support multiple monitors (of different sizes, DPIs, orientations). Then the whole browser-as-platform idea collapses. Maybe you can tweak it by spawning an additional browser? Maybe... You need docking. Start with this as your requirement. Proper docking, not the Qt built-in one.


Other than electron what options are there?

I’ve heard electron has poor performance but it’s all anecdotal, I’m curious about opinions to the contrary


node-webkit (NW.js) and CEF are some of the main alternatives. And really, Electron is "just" something that does a lot of housekeeping for you; at the end of the day you could do it yourself - ship the Chromium (or WebKit or ...) libraries, create a native container window, hand its drawing context off to the browser instance, shuttle events back and forth, etc.

CEF is kinda clunky, but also has bindings for just about any language.
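
(For flavor, the embedding step looks roughly like this with CEF - a sketch only; exact signatures vary between CEF versions, and CefInitialize plus a full CefClient with its handler interfaces are assumed to happen elsewhere:)

    #include "include/cef_browser.h"
    #include "include/cef_client.h"

    // Minimal placeholder client; a real one implements display,
    // lifespan, load handlers, and so on.
    class MinimalClient : public CefClient {
     private:
      IMPLEMENT_REFCOUNTING(MinimalClient);
    };

    // Hand a region of an existing native window over to Chromium.
    void EmbedBrowser(CefWindowHandle parent, const CefRect& bounds) {
      CefWindowInfo window_info;
      window_info.SetAsChild(parent, bounds);
      CefBrowserSettings settings;
      CefBrowserHost::CreateBrowser(window_info, new MinimalClient(),
                                    "https://example.com/app", settings,
                                    nullptr, nullptr);
    }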


IMO: Tcl/Tk definitely beats it; no one talks about it for some reason, though.


As far as I can tell, Tk is entirely inaccessible with a screen reader. And I wouldn't be surprised if it's missing some other things that the OP author covered.


Tk is very limited. It's nice for very simple apps, but basically unusable for anything else.


Cairo/Skia are not the only alternatives for Linux.

For example, one pretty nice cross-platform GPU-enabled stack:

https://github.com/bkaradzic/bgfx

https://github.com/memononen/nanovg

https://github.com/wjakob/nanogui


BGFX is a general-purpose 3D graphics engine, not a GUI or vector graphics framework.

Nanovg is an awesome vector graphics library, but it has limitations. (1) No ClearType, which I fixed in my fork: https://github.com/Const-me/nanovg (2) The only way to get AA is hardware MSAA, and unfortunately many popular platforms like the Raspberry Pi don’t have good enough hardware to do it fast enough. Nanogui is built on top of Nanovg and shares the same limitations.
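
(For context, the Nanovg drawing model is roughly the following - using its GL3 backend, with an OpenGL context assumed current and the window dimensions/colors arbitrary:)

    // Create once:
    NVGcontext* vg = nvgCreateGL3(NVG_ANTIALIAS | NVG_STENCIL_STROKES);

    // Then, each frame:
    nvgBeginFrame(vg, winWidth, winHeight, devicePixelRatio);
    nvgBeginPath(vg);
    nvgRoundedRect(vg, 20, 20, 200, 100, 6);      // build a vector path
    nvgFillColor(vg, nvgRGBA(64, 128, 255, 255));
    nvgFill(vg);                                  // tessellate + submit
    nvgEndFrame(vg);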

I agree with the OP that Cairo and Skia are the only viable ones for Linux.

It’s sad, because Windows has had Direct2D for over a decade now (introduced back in the Vista era), and unlike in 2006, in 2021 Linux actually has all the lower-level pieces to implement a comparable equivalent. Here’s a proof of concept: https://github.com/Const-me/Vrmac#vector-graphics-engine


Interesting that the "7 GUIs" test hasn't been mentioned in the article or comments. IMO it's not a bad starting point for thinking about the bare minimum that a GUI framework might need to support, and for evaluating how easy one is to use.

https://eugenkiss.github.io/7guis/tasks/


I like 7guis, and we use it as a way to think about what we're doing. However, I think the focus of 7guis tends to be much more on the higher levels of the GUI stack, particularly how to express UI logic concisely, while this blog post is much more about the lower levels - how to actually interface with the large diversity of platform capabilities that's needed for "real" GUI.


This is underestimating the UI problems of video games. Open up something like World of Warcraft, or Civ 6. There's a fairly complete windowing system, IME, accessibility features, and even the means for users to build custom UIs.

It's still common to see games list some UI framework in the opening credits.


> accessibility features

Including screen reader support? I'd love to know about any mainstream game that has that.

BTW, I didn't write the OP, just came across it and thought it was worth sharing.


Usually it's something like hotkeys for everything and reading the chat: https://worldofwarcraft.com/en-us/news/23691035/enhance-your...


To expand just a little on the short paragraph about embedded and games: for those interested in these spaces, looking into immediate-mode GUIs would make for good reading. Rendering the current state of the UI tends to be simpler, with a lower standard deviation in processing time than event-driven GUIs. The main tradeoff is that the average processing cost goes up, since you're redrawing every update. But with embedded, it's your worst case that matters most if you have deadlines you can't miss. And in games, a lower standard deviation in frame time is often a very good tradeoff against a lower average frame rate in terms of perceived smoothness.
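
(For a taste of the model - a minimal Dear ImGui-style sketch, where the UI is re-declared every frame and widget state is just ordinary program state; the "Mixer" window and its fields are made up:)

    #include "imgui.h"

    struct AppState { float gain = 0.5f; bool muted = false; };

    // Called once per frame; emits the whole UI from current state.
    void DrawUI(AppState& s) {
        ImGui::Begin("Mixer");
        ImGui::SliderFloat("Gain", &s.gain, 0.0f, 1.0f);
        ImGui::Checkbox("Mute", &s.muted);
        if (ImGui::Button("Reset")) s.gain = 0.5f;  // events are return values
        ImGui::End();
    }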


IM GUIs are the way to go, in my opinion. I've been playing around with some prototypes, and they are dead simple compared to event-driven GUIs with multi-stage rendering pipelines and the like.

Just the reduction in complexity is a massive feature. Managing view state is almost fun with this approach. The performance can be incredible too if you do a little bit of hacking to cache certain expensive resources between frames (e.g. texture atlas computation).

One trick I have employed is to process user events in "micro" batches that are 10-100µs wide. This allows me to consolidate redraws across multiple logical user events. I have found this incredibly helpful in cases where inputs are sampled more frequently, or otherwise occur at higher rates, than output frames are produced. Driving redraws off individual events is probably a mistake if you want any level of concurrency. You need to batch up the events and process them in bulk, then redraw on the edge of each batch.
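
(Roughly, the batching idea looks like this - a sketch where InputEvent, ApplyEvent, and Redraw are placeholders for whatever the application actually uses:)

    #include <chrono>
    #include <queue>

    void RunLoop(std::queue<InputEvent>& events, AppState& state) {
        using clock = std::chrono::steady_clock;
        for (;;) {
            // Drain everything that arrives within a short window...
            const auto deadline = clock::now() + std::chrono::microseconds(100);
            bool dirty = false;
            while (!events.empty() && clock::now() < deadline) {
                ApplyEvent(state, events.front());  // mutate state only
                events.pop();
                dirty = true;
            }
            // ...then redraw once per batch, not once per event.
            // (A real loop would block waiting for the first event.)
            if (dirty) Redraw(state);
        }
    }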


There are some big challenges here. One is IME, which requires storing intermediate state and information about the editing session, and communicating that with the platform; another is accessibility, which requires keeping a tree of elements that the user can currently navigate to, and which may include elements that are not currently on screen.

So yes, maybe okay for games, but I’m unconvinced about imgui for GUI applications of the sort described in this post.


Here are my "go to" questions for whether a "new" GUI framework doesn't suck:

Can your GUI system build a Digital Audio Workstation? It doesn't matter how good your text handling is if everything is too unresponsive.

Can your GUI system scale arbitrarily when I put it on a humungous monitor? It doesn't matter how good your text rendering is if I can't read it.

Every single GUI system currently fails both of those.

GUIs are currently stuck in a path-dependent local minimum that needs to be broken out of. It's not clear what the path forward is, but this article simply reinforces the broken status quo.

No, you don't have to render HTML. No, you don't have to support application tabs. No, you don't have to support modal dialog boxes. And, no, you don't have to behave like native. You don't even have to support an "editor".

About the only part I agree with is "If you support interactive text, it's a nightmare." For most people, this needs to be outsourced to a library because you will never get shaping, rendering, and input methods even close to correct.

In fact, I might even go so far as to say that "99% of GUI misery is supporting interactive text--don't do that." These are the "embedded" GUI systems that the author so blithely dismisses in the first couple of sentences.


I’m not dismissive of embedded; it’s just a totally different problem set and not the focus of this article. This article is about the pain points of integrating with the current major desktop platforms, period.

I agree we’re in a funny local minimum. I agree there’s room for experimentation. But also… for a general-purpose GUI toolkit, text editing is not optional. Modal windows (alerts, file open/save, as well as just drop-down/selection lists) are really not optional.

I’m also not arguing that everyone needs to do all these things from scratch. Very few people should be trying to reimplement IME. But you do need to support IME, or you are only making a toy.


Great overview. I am thankful that accessibility was mentioned as well.


Accessibility is the special focus of mwcampbell :)


To be clear though, I didn't write the post; I just spotted it and submitted it.


Fair enough, thanks for the clarification!


Why are you thankful? Do you need accessibility, and if so, what do you use?

Inevitably someone complains about lack of accessibility of a custom UI toolkit, which is frustrating, because something that helps 99% of people is still an improvement and still worth doing.


The argument would be that a new GUI toolkit will capture applications that would otherwise be made with an older, accessible toolkit. In that sense, it's a regression, not just a clear improvement.

I do agree about the idea though. We shouldn't shut down experiments just because they don't have a plan for it yet. I do see some of that on Twitter etc. But once you're shipping software that users can't switch away from, because of network effects or because it's at work, it's not OK.

Also, yeah, I (not OP) depend on it.


I'll add UI virtualization to the list of concerns, especially for list views with a lot of data. Creating a UI element for each item in the list consumes a lot of memory and CPU, so it makes sense to do it only for the items in view, plus a buffer before/after, and to recycle the UI elements as the user scrolls. This becomes difficult with custom non-linear and non-fixed-size layouts. It also complicates UI automation, accessibility, and keyboard navigation.
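
(In its simplest fixed-row-height form, the visible-range computation is just the following sketch; the names are illustrative:)

    #include <algorithm>
    #include <utility>

    struct Viewport { double scrollY; double height; };

    // Returns the [first, last) row indices that need live UI elements;
    // rows leaving this range return their widgets to a recycle pool,
    // rows entering it grab one from the pool and rebind their data.
    std::pair<int, int> VisibleRange(const Viewport& vp, double rowHeight,
                                     int rowCount, int buffer = 5) {
        int first = static_cast<int>(vp.scrollY / rowHeight) - buffer;
        int last  = static_cast<int>((vp.scrollY + vp.height) / rowHeight)
                    + 1 + buffer;
        return { std::max(0, first), std::min(rowCount, last) };
    }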


>One approach, then, is to present a common API abstraction over top of these platform APIs

For those interested in a similar topic, but with an opinion/taste cursor much more tilted towards pathological simplicity than towards feature completeness, I've been working on an API to abstract away the most basic features of GUI libraries, so as not to actually depend on any of them while still being able to make basic GUIs: https://github.com/jeffhain/jolikit/blob/master/README-BWD.m...

It's not in version 1.0 yet, but should still remain quite stable (or even frozen, or... dead ;) in the future. Before doing so I'm waiting on finishing the toolkit I'm building on top of it (which might involve also finishing the apps I'm building on top of the toolkit ;).


I wrote a simple GUI in C++ a while ago, just for my map editor. It's primitive and only has the absolutely necessary widgets (buttons, a textbox, a one-layer menu which is essentially a button, and maybe a couple of others). Then I realized how difficult it is to get one right. I think Qt is easy to use, and C# is another option on Windows.


C# is opening up a lot of doors for UI development these days. Blazor Desktop is something I am eagerly looking forward to.


> it is genuinely hard to write an abstraction that provides adequate control of advanced GPU features (such as the compute capabilities) across subtly different low-level APIs.

That’s a solved problem in C++; see this library: http://diligentgraphics.com/diligent-engine/

> The rasterization techniques used in 3D are poorly suited to 2D tasks like clipping to vector paths or antialiasing

That’s subjective; I think these techniques are an awesome fit for 2D. See this library: https://github.com/Const-me/Vrmac#2d-graphics BTW, I have recently documented my antialiasing algorithm: https://github.com/Const-me/Vrmac/blob/master/Vrmac/Draw/VAA...

> these traditional techniques can start to perform very badly in 2D once there are lots of blend groups or clip regions involved, since each needs its own temporary buffer and draw call.

One doesn’t necessarily need temporary buffers or draw calls for that; it can also be done in the shaders, merging into larger draw calls.

> What this comes down to is instructing the operating system to embed a video or 3D view in some region of our window, and this means interacting with the compositor.

That indeed works in practice, but I don’t believe the approach is good. Modern GUI frameworks use 3D GPUs exclusively. It doesn’t take much effort for them to integrate 3D-rendered content.

As for video, one only needs a platform API to decode and deliver frames in GPU textures. Microsoft has such an API in the OS: https://docs.microsoft.com/en-us/windows/win32/api/mfmediaen... Once upon a time I wanted to do the same on embedded Linux; it wasn’t easy, but still doable on top of V4L2 kernel calls: https://github.com/Const-me/Vrmac/tree/master/VrmacVideo


> [hardware-accelerated graphics abstraction is] a solved problem in C++

I think that would strike most people who work with computer graphics as a highly questionable statement. There's many, many different graphics abstractions out there for C and C++ alone, and none of them is The solution that is ideal for everything.

> > The rasterization techniques used in 3D are poorly suited to 2D tasks like […] antialiasing

> That’s subjective, I think these techniques are awesome fit for 2D. […] I have recently documented my antialiasing algorithm

Your comment actually reinforces the original author's point. You're using a completely different kind of anti-aliasing technique for 2D than would be used for 3D!

> Modern GUI frameworks are using 3D GPUs exclusively.

That doesn't make it easy. You don't control what the system UI framework is doing with the GPU.


> There's many, many different graphics abstractions out there for C and C++ alone, and none of them is The solution.

There are downsides, of course (that thing is C++-only because of exceptions, requires recent enough hardware, e.g. the minimum GLES version is 3.0, and a few others), but in general that one is actually good. At a very high level, its developers have re-implemented the D3D12 API on top of the rest of them.

Things like ANGLE and BGFX expose APIs which are very far from how GPUs actually work, and close to legacy stuff like D3D11 or OpenGL. For complex enough scenes, it’s pretty much impossible to get good performance on modern hardware with these APIs.

Game engines are designed in a way which does allow them to leverage the hardware, but they are huge, implementing many high-level things like assets pipelines, and are more like frameworks than libraries.

> Your comment actually reinforces the original author's point

Rendering an indexed triangle mesh with a vertex + pixel shader pair is the traditional rasterization technique. Compute shaders like piet-gpu, or custom vendor APIs like NV_path_rendering, are examples of non-traditional ones.
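
(At its core, that traditional path is just the following; OpenGL shown, with the shader program and the VAO's vertex/index buffers assumed to be set up already:)

    // Bind a vertex + fragment shader pair and draw an indexed triangle mesh.
    glUseProgram(program);
    glBindVertexArray(vao);  // references the vertex and index buffers
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr);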

> You don't control what the system UI framework is doing with the GPU

Indeed, but good ones support explicit interop: WPF has D3DImage, UWP and WinUI have SwapChainPanel.


Great piece! I'm saddened to think I might never have to do this sort of thing on the job, though I have in fact been lucky to have had good occasion to shave many other sorts of yaks.

One thing I find interesting about Linux vs. the others is where the defaults come from. MS and Apple can clearly enforce things, but conventions are a bit more subject to negotiation. Especially with Wayland, it seems like the ability/motivation to do more stuff client-side is allowing less coordination to happen, period.


Can anyone recommend resources on understanding the inner workings and abstractions of reactive/declarative application models, such as Flutter and Vue?

It seems to be the latest rage (see Google adopting it for native Android development with Jetpack Compose), and I've been thinking about this lately, but all the resources I can find explain how to use it, not the building blocks of the system underneath.


Those explain React, but the concepts are similar:

https://pomb.us/build-your-own-react/

https://medium.com/@sweetpalma/gooact-react-in-160-lines-of-...

The main difference between React and other frameworks is how they detect state changes. React uses a function (like setState), Vue uses proxies, Angular uses Rx observables, etc.


When I used to do React work I remember reading this article which might help you out.

https://overreacted.io/react-as-a-ui-runtime/

This SwiftUI book is quite short but takes you through a fair bit of how it works under the hood.

https://www.objc.io/books/thinking-in-swiftui/


I had to build a fairly robust wrapper over a feature-poor GUI framework once, and it effectively sank the project. I'd been cloning a different project which shipped in months because it used a native GUI framework. Now I'm much more careful when choosing technology.


I always liked the actor approach: https://github.com/libgdx/libgdx/wiki/Scene2d


I'm your first/last user and I want to issue a complaint:

1) It lacks features!

2) It's full of bloat!

3) It's too slow!

4) It's full of optimization-induced edge cases!

5) It's not customizable enough!

6) The third-party customizations are incompatible!

7) I should write a better GUI framework


Wondering about docking, especially handling multi-monitor setups, preserving layout, etc. And support for mixed-DPI monitors.



