Show HN: Brisk – Cross-Platform C++ GUI Framework: Declarative, Reactive, Fast (github.com/brisklib)
84 points by danlcaza 8 days ago | 79 comments
Brisk is an open-source C++ GUI framework with a declarative approach, offering powerful data bindings, GPU-accelerated graphics, and dynamic widget management. It supports macOS, Linux, Windows, and simplifies UI creation with modern paradigms and CSS-like layouts. Initially developed for a graphics-intensive project with a complex and dynamic GUI, the framework is currently under active development.





All those naked 'new's in the example make me nervous. I see that the library uses exceptions; if one of the widget constructors throws while composing the overall widget hierarchy, it will leak memory.

You should be able to make it work with a value-based interface and allocating behind the scenes (this would also enable a few optimization opportunities).


Thank you for the feedback. The framework was originally built with exceptions always disabled, and it is currently being reworked to support both modes: exceptions enabled and disabled. Some approaches definitely need to be reconsidered.

An alternative approach is to use the rcnew macro, which wraps a pointer in a shared_ptr under the hood. Details on the implementation of rcnew can be found at: https://www.kfr.dev/blog/three-cpp-tricks/
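For readers curious how such a macro could work, here is a rough sketch of the general trick (hypothetical code, not the actual rcnew implementation; see the linked post for that): an operator with lower precedence than new captures the freshly allocated pointer and wraps it in a shared_ptr within the same expression.

  #include <memory>

  // Hypothetical sketch of an rcnew-like macro: the freshly allocated
  // pointer is captured by operator% and wrapped in a shared_ptr within
  // the same expression, so a throw elsewhere in the surrounding widget
  // expression cannot leak this allocation.
  namespace sketch {
  struct RcNewTag {};

  template <typename T>
  std::shared_ptr<T> operator%(RcNewTag, T* ptr) {
      return std::shared_ptr<T>(ptr);
  }
  } // namespace sketch

  #define rcnew ::sketch::RcNewTag{} % new

  // Usage: auto button = rcnew Button("OK");  // yields std::shared_ptr<Button>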


Why not std::make_shared, given that you require C++20 anyway?

Just a quick note that the basic form of std::make_shared (which is the one that is relevant here) has been in the language since C++11.

Why is heap allocation and shared_ptr required? Can't you have the user store the widgets in whatever manner they want, as values?

You absolutely can. Heap allocation for every component is already unnecessary, but loose pointers on top of that are a huge red flag.

This seems like someone who isn't up to date with the fastest and most elegant ways to write C++. Charging money on top of that is egregious, not to mention that there are lots of great GUI libraries already: FLTK, JUCE, and Qt, to name a few.


Heap allocation is necessary in real-world scenarios because it allows a tree of potentially derived widget types to be manipulated easily. This is precisely how any robust GUI library is implemented.
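A minimal sketch of why the tree forces indirection (hypothetical types, not Brisk's API): children can be any widget subtype, so storing them by value would slice them; the container has to hold pointers of some kind.

  #include <memory>
  #include <string>
  #include <vector>

  // Hypothetical widget hierarchy: the children vector must hold pointers
  // (here shared_ptr) because each child may be a different derived type.
  struct Widget {
      virtual ~Widget() = default;
      std::vector<std::shared_ptr<Widget>> children;
  };

  struct Button : Widget {
      std::string label;
      explicit Button(std::string l) : label(std::move(l)) {}
  };

  int main() {
      auto root = std::make_shared<Widget>();
      root->children.push_back(std::make_shared<Button>("OK"));  // heap-allocated child
  }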

Since it's cited as cross-platform, including non-Windows screenshots would be helpful https://github.com/brisklib/brisk#screenshots

Also, I wasn't able to readily find any information about keybindings for the widgets, e.g. the way some frameworks use "&Monday" to make alt-m act on that checkbox https://github.com/brisklib/brisk/blob/v0.9.3/examples/showc...

I did see your "unhandled" event handler doing its own keybinding dispatch <https://github.com/brisklib/brisk/blob/v0.9.3/examples/calc/...> but I hope that's not the normal way to do keybinding


Good point about the non-Windows screenshots, but the Brisk library apps look exactly the same (except for window decorations and DPI scaling) on any supported OS due to its custom pixel-perfect graphics engine and its own font engine (based on FreeType and HarfBuzz, of course). Key bindings are not supported at the moment. The calc example shows a possible workaround for this.

Oh, wowzers, then you should really _for sure_ put up macOS and Linux screenshots to warn off people who don't want <canvas>-driven UIs on those platforms. I now better understand why you have listed WebGPU as a requirement.

While I welcome new cross-platform GUI Frameworks, I wonder why not use a declarative UI similar to QML? Even Slint[1] (which is built in Rust) uses such syntax for its UI.

[1] https://slint.dev


It isn't really that important to have a declarative UI; making the UI is rarely the time-consuming or difficult part of making a program. An extra markup language for a UI adds bloat and ambiguity. Now you're learning a different ad hoc language and trying to work around its quirks just to feed in data that could have been done directly with functions.

Gosh, I feel like making the UI is always time-consuming, but that might just be because I find it repetitive and boring, and thus perceive the work as taking longer.

I could die a happy man never imperatively constructing nested QBoxLayouts again.


Is that because it's actually difficult or because iterations take a long time from obnoxious compilation times?

Eh Qt isn’t really difficult, just not my idea of fun. I use the python bindings so compilation time isn’t a thing.

Eh gosh if you're using PyQt already, why would a different markup language to create the GUI help? Are you even using QML? That is the context of this thread.

I was only responding to the general statement that making the UI is not the time consuming part of app development.

I second that. I used Dash[1] for a while. Although it's nice to have everything wired up for free, the code quickly becomes a mess as soon as you try more complex things (either you put everything in one function and get a big, deeply nested structure, or you build a lot of small components and quickly get lost in a sea of small functions).

I'd rather build the HTML myself.

[1]: https://dash.plotly.com/


This looks really good. It's nice to see some options like this (and Slint) appearing for cross-platform desktop GUI. I'm pretty skeptical of "modern" C++, but this looks like a good example of using it where it makes sense.

The data binding looks particularly clever. That's usually the Achilles' heel of GUI toolkits (for me at least), and this looks like a novel solution if it works.


Thank you for the feedback. C++ isn't a language that makes data binding implementation easy, but C++20 introduces some useful features that could be leveraged for this purpose.
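As an illustration of the general idea (a minimal sketch, not Brisk's actual binding API): a bindable value notifies listeners when it changes, and widgets subscribe to it.

  #include <functional>
  #include <iostream>
  #include <string>
  #include <utility>
  #include <vector>

  // Sketch of a reactive bindable value: setting it notifies every listener,
  // which is the core mechanism behind data binding in a GUI.
  template <typename T>
  class Bindable {
  public:
      void set(T v) {
          if (v == value_) return;              // no-op if the value is unchanged
          value_ = std::move(v);
          for (auto& fn : listeners_) fn(value_);
      }
      const T& get() const { return value_; }
      void onChange(std::function<void(const T&)> fn) { listeners_.push_back(std::move(fn)); }
  private:
      T value_{};
      std::vector<std::function<void(const T&)>> listeners_;
  };

  int main() {
      Bindable<std::string> label;
      label.onChange([](const std::string& s) { std::cout << "label -> " << s << "\n"; });
      label.set("Hello");   // prints: label -> Hello
  }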

How well integrated with the underlying platform is it? For example, on Windows do you take advantage of accessibility APIs for things like screen readers?

I don't think open source GUI toolkit developers should be expected to handle accessibility until OS vendors spend some of their billions of dollars to develop a reasonable cross-platform API.

If you are creating tooling for people to create GUIs, it should always be possible to make those GUIs accessible. I’m not saying this one doesn’t—I don’t know if it does—but in the general case it absolutely should. Not everyone is going to bother, but not even giving people the ability would be deeply irresponsible.

Why is this the responsibility of GUI toolkit developers and not app developers? There's nothing stopping app developers from making a separate version for their app designed specifically for accessibility. This results in a better UX for the users and the toolkit developers.

https://www.accessibility.works/blog/alternate-separate-acce...:

“Why Alternate “Accessible” Website Versions Fail ADA And May Increase Legal Risk.

[…]

1. DOJ: All public accommodation’s websites must comply.

[…]

2. ADA = "Full & Equal Enjoyment"

[…]

3. Is a "Separate But Equal" approach discriminatory?”

I’m not a lawyer, but I don’t think one can draw any conclusion from that other than that having a separate app designed for accessibility is a big no-no.

Also, as a developer, you choose a GUI toolkit to make your life easier. Having good accessibility support then is a point in favor of a toolkit, just as, say, having lots of good-looking controls is one.

Possibly even more so, as implementing accessibility support is a huge undertaking. You need to be able to enable high-contrast mode, to set keyboard shortcuts, to use larger fonts, tone down animations, have an on-screen keyboard, etc.


* Your reference is talking about websites. The web is much easier to implement accessibility for than native OSes.

* It's talking about intentionally low quality, minimal-effort ADA compliance. Do you believe most blind people (simple example, there are many types of disabilities) would prefer a version of a tool that has basic (but decent) screen reader support built into a GUI toolkit, to a hand-tuned app built specifically to address their needs?

* Trying to legislate something as subjective as "Full & Equal Enjoyment" is one of the most absurd things I've ever heard. I didn't realize the ADA regulations were that bad. By that definition, anyone with a disability can claim anything about a website that they don't like is preventing them from experiencing "Full & Equal Enjoyment".

* Them comparing websites using cheap ADA solutions to the civil rights movement is chef's kiss

> Also, as a developer, you choose a GUI toolkit to make your life easier. Having good accessibility support then is a point in favor of a toolkit, just as, say, having lots of good-looking controls is one.

I fully agree here. Accessibility is important and it's smart for toolkits to compete on such features. That's not what I'm pushing back against. I'm pushing back against the idea that responsibility for accessibility lies predominantly on GUI toolkit developers. There is nothing wrong with making a GUI toolkit that doesn't support accessibility. Accessibility support can be implemented/improved at multiple layers of the stack, but HN loves bringing this up because it's a meme with high virtue signalling value.


Why should OS vendors spend money devaluing their product? It's up to the cross platform software developers to make abstractions that work across platforms, not on platform developers to make things that work on other platforms.

Just seems absurd.


See response to sibling. Why is this the responsibility of GUI toolkit developers and not app developers?

EDIT: Also, how is this different from any other cross-platform API like POSIX, sockets, TCP, etc?


App developers won't do it if the toolkit doesn't support it and make it extremely easy to use. It's also not that different, because platforms often offer richer/better APIs than POSIX and application frameworks use those with POSIX as a fallback for lower tier targets.

I agree they won't do it, but why does that mean GUI toolkit devs carry the responsibility, when they also aren't provided with a good API?

You're not talking about "good" APIs but "the same API." And if you're going to argue that operating system vendors should provide the same APIs for things that are deeply integrated into the value they provide and make money by selling, then you're going to have an uphill battle.

POSIX is a good example of a lowest common denominator that is "good enough" for many kinds of programs, but not consumer software applications with GUIs. That common denominator is the kind of cross-platform GUI toolkit/framework that we're talking about. a11y is a feature of that, not of something like POSIX. And no OS vendor is going to try and create such a standard, or if one is created they won't try and follow it - because anything that makes applications better on other platforms makes their platform worth less.

AccessKit is a really laudable effort to decouple the a11y API from the UI. It would be great if many GUI toolkits adopted it, but it does affect their design in non-trivial ways.

Summed up: what you're asking is "why don't Apple and Microsoft make it easier for developers to make software developed on Apple platforms better on Microsoft or vice versa" and the answer should be self evident. These APIs exist to create value for the platform, not for other platforms. Standards don't exist for portability, they exist to create lock-in and entrench existing players.


> what you're asking is "why don't Apple and Microsoft make it easier for developers to make software developed on Apple platforms better on Microsoft or vice versa"

No, what I'm asking is why the responsibility is always put on GUI toolkit developers. If a good cross platform a11y API is possible, the moral obligation is on the OS vendors to create it and support it. If it's not possible, then why do we treat GUI toolkit devs not only as if it is, but it's somehow their job to re-implement it for every GUI library?


The point I'm making is that it's the OS vendors' job to develop one platform, not software that works on multiple. It seems like you misunderstand that. They don't have a moral obligation to develop APIs that work for other platforms; that's absurd. That's not even what POSIX is, and like I said, POSIX doesn't even cut it for these applications.

The obligation is in the developers making cross platform layers, like GUI toolkits. There's no obligation for MS and Apple to agree on an API, but there is an obligation for the software claiming to target both to support both APIs. That's the very nature of making something cross platform!


All of the modern operating systems have good accessibility APIs. You ask why developers should take advantage of them? Because it’s the right thing to do and it’s usually not particularly onerous and accessibility accommodations usually make the software better for everyone.

Would you also question why builders have to make bathrooms wheelchair accessible in public buildings? It costs more and can be uglier and restricts the potential designs. Why shouldn’t the wheelchair builders have to make fancier wheelchairs that can navigate the spaces you want to build, right?


Why should devs have to implement a separate API for 4+ different OSes? There should be one single API. The closest we have is AccessKit[0]. It's an awesome project, but this work should be funded by Google/Microsoft/Apple.

I'm not arguing GUI toolkit devs shouldn't implement accessibility support. It's a laudable thing to do. I'm pushing back against the meme that the responsibility lies primarily with them.

> Because it’s the right thing to do and it’s usually not particularly onerous

Have you implemented cross-platform accessibility support for a GUI toolkit?

> Accessibility accommodations usually make the software better for everyone.

Accessibility support bloats software. The most trivial example is websites. If you add full accessibility support to a website, it will be bigger, and therefore objectively worse for any user not taking advantage of the accessibility features. The extra bytes also add up over trillions of requests to have an environmental impact.

I think we should be exploring alternative approaches like hand-tuned accessibility-focused apps.

> Would you also question why builders have to make bathrooms wheelchair accessible in public buildings? It costs more and can be uglier and restricts the potential designs.

Strawman. Bathrooms exist in a physical space. Only one implementation can occupy that space, and disabled people have little choice but to interact with the implementations near them. Software distributed over the internet has no such constraints, and different products can compete on accessibility features.

Also, ramps are beneficial to almost everyone at various times. Most people will never use a screen reader.

> Why shouldn’t the wheelchair builders have to make fancier wheelchairs that can navigate the spaces you want to build, right?

Physical accessibility laws were created before modern electric wheelchairs. It might indeed make more sense to regulate (or subsidize) this at the wheelchair level. Do you think most wheelchair users would prefer ramps everywhere, or a wheelchair capable of going up stairs[1]?

As mentioned[2] by your sibling, accessibility advocates and ADA regulations push for "Full & Equal Enjoyment". That doesn't sound like being forced to use ramps to me.

[0]: https://github.com/AccessKit/accesskit

[1]: https://youtu.be/hxf-fIubkMs?si=Cya66U7KQBpvLU-R

[2]: https://www.accessibility.works/blog/alternate-separate-acce...


> Why should devs have to implement a separate API for 4+ different OSes?

Because they don’t want to release software that targets a lowest common denominator. If you don’t care about this stuff, just use Electron or some other library that lets you move fast and make mediocre software that feels foreign on every OS.

Each operating system is different in lots of different ways and if you want to make the best app on each platform, you embrace that. For example, many apps have some number of settings. On Windows you will probably put those in a Preferences or Settings dialog in the app. On iOS, some of those belong in the settings panel. Great apps aren’t going to pave over these differences.

Same goes for accessibility APIs. There’s all kinds of assistive devices (both hardware and software) already in widespread use. They have different expectations on every platform. Follow the conventions laid out by each platform maker and your software should just work. It’s the same as handling keyboard and mouse support. Each OS has different APIs.

Does that mean people who write cross-platform libraries have to do more work? Well yes it does, but then creating an abstraction over each platform is what they set out to do in the first place. It’s an important part of the job.


You're making my point for me, maybe better than I did. The type of OS integration you're talking about isn't implemented by libraries like Brisk anyway. They use their own custom visual style. These libraries fill a specific niche.

Those developers have the option to make use of said OS APIs.

So do app developers. The reason everyone assumes this is the responsibility of GUI toolkit devs is that it's a meme. See my response to the sibling comment.

Apps need an OS to run on.

I agree. Apps need OSes almost as much as OSes need apps.

Are you speaking for the Brisk project?

Do you have some reason to think I am?

Accessibility features are planned and will be included in one of the upcoming major releases.

"However, for those who wish to use Brisk in proprietary or closed-source applications, a commercial license is also available."

Unfortunately there is no public information on the pricing.


The pricing information will be made public at the time of the next beta release. We are currently refining Linux support, with plans to add mobile platforms as well.

As someone who lives and breathes Qt at the moment: what would you say the main differences are? What does Brisk offer that Qt doesn't, and vice versa?

Though unfortunately, since it's not LGPL or MIT, it doesn't look like I could use it anyway.


> Though unfortunately, since it's not LGPL or MIT, it doesn't look like I could use it anyway.

Why not? They also intend to have a commercial license. Don't even want to consider paying?


It's difficult to convince others and leadership when a free, well-understood and time tested alternative exists. At least, not without a big reason.

I spent a very long time giving SolveSpace a native Haiku UI. I'm going to keep doing this kind of thing because there's nothing I personally dislike more than apps that don't use the platform's native UI.

I don't care that my approach is harder for the developer, because the thing I care about is consistency and convenience for the user.

I know the thing you built is neat (I've spent quite a few years working on almost the same thing), but I guess this is why I gave up on pushing my own solution.


“Nothing I personally dislike more than apps that don't use the platform's native UI”

I’m not sure if this is universally applicable dogma. Games generally apply their own UI regardless of platform.

Web apps generally do as well.

I do realize there is space for apps with least surprise per platform, but it’s not obvious to me whether an app benefits from a platform-standard UI in any quantifiable way.


They said “apps,” not games nor websites

App usability and performance typically benefit greatly from using the native platform they’re running on. Plus all the egress savings of not shipping chromium with every download


"App usability and performance typically benefit greatly from using the native platform they’re running on"

I know this has always been the design dogma but is there any research to back this up? It's a _plausible_ dogma of course!

To be honest, I don't see the distinction between apps and games. I am usually irritated if the software I'm using has a different UI on different platforms. I realize it's possible most users don't use three or four operating systems daily.

"Plus all the egress savings of not shipping chromium with every download"

I'm not sure what this refers to. Creating a custom UI does not require embedding a browser runtime; that's the silliest thing to do, IMO.


There are many professional applications (not games) that use custom-drawn UIs. Examples include video editing software, 3D modeling tools, and professional audio plug-ins. These applications may rely on a significant amount of platform-specific APIs for better OS integration, yet they maintain a consistent appearance across all supported platforms.

If you are going for declarative UIs, please consider using a separate file for storing the UI, plus a "code-behind" C/C++ source file for that screen. The external file can be written in a form that IDEs can read and write, holding the form's properties and all of its components in the hierarchy in which they were placed during the design of the form.

The powerful approach Delphi used for this is something I have never seen any other language use: you could create your own components, and those components could persist their properties into the external file that stored the details about the form. When the IDE loaded the design from the external file, it would call each component to read its properties from the file and repopulate itself. This allowed very powerful and deep components to be developed in Delphi.
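A rough sketch of that persistence idea in C++ (hypothetical types, not tied to any particular framework): each component reads its own properties back from the externally stored form description, so the loader stays generic even for custom components.

  #include <map>
  #include <string>

  // Sketch: a reader over key/value pairs parsed from a .dfm-like form file.
  struct PropertyReader {
      std::map<std::string, std::string> values;
      std::string get(const std::string& key, const std::string& fallback = "") const {
          auto it = values.find(key);
          return it != values.end() ? it->second : fallback;
      }
  };

  // Each component knows how to repopulate itself from the stored properties.
  struct Component {
      virtual ~Component() = default;
      virtual void loadProperties(const PropertyReader& reader) = 0;
  };

  struct MyButton : Component {
      std::string caption;
      int width = 0;
      void loadProperties(const PropertyReader& reader) override {
          caption = reader.get("Caption", "Button");
          width = std::stoi(reader.get("Width", "80"));
      }
  };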


I don't want to be a party pooper here, but I am, so that's how it comes out. In-code declaration of user interfaces was old hat in Qt and wxWidgets. Why no interpreter? Recompiling just to futz with the UI is a bore.

Specifying things declaratively within the language itself goes far beyond simply constructing widgets by calling library functions.

This technique is used in modern frameworks like Jetpack Compose, Flutter, and SwiftUI, and it unlocks several powerful features, such as flexible data binding and the ability to rebuild the widget tree on demand; these would be quite difficult to implement in other libraries.
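To make the distinction concrete, here is a minimal sketch of that idea (placeholder types, not Brisk's API): the widget tree is produced by a function of the application state, so it can simply be rebuilt whenever the state changes.

  #include <memory>
  #include <string>
  #include <vector>

  // Placeholder widget types for the sketch.
  struct Widget {
      virtual ~Widget() = default;
      std::vector<std::shared_ptr<Widget>> children;
  };
  struct Text : Widget { std::string value; explicit Text(std::string v) : value(std::move(v)) {} };
  struct Button : Widget { std::string label; explicit Button(std::string l) : label(std::move(l)) {} };

  struct AppState { int counter = 0; };

  // The UI is a function of state: call it again to rebuild the tree on demand.
  std::shared_ptr<Widget> buildUi(const AppState& state) {
      auto root = std::make_shared<Widget>();
      root->children.push_back(std::make_shared<Text>("Count: " + std::to_string(state.counter)));
      root->children.push_back(std::make_shared<Button>("Increment"));
      return root;
  }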


Wouldn’t a pure data-based definition scheme enable all of the above in any case?

If the underlying model is a tree, then any graph-based configuration data could be used to build it.

I mean, underneath it could be implemented using the API as is, but as a C++ dev I don’t see any compelling reason to be interested in this.

You can do the most astounding things nowadays at 100 fps. I’m not sure manually handcrafting object initialization trees for a UI framework is something I want to do as a developer in 2024.

This is not intended as a put-down! This must have been enormous amount of work.

I would love to hear the rationale for exposing the user API as a class structure like this; I’m sure there are good reasons and would love to hear them.


Nice to see someone taking a swing at a C++ GUI framework. Implementing a real one is not for the weak. If it really works, it'll be expensive to license.

I've only dabbled a bit in 3D graphics (OpenGL, THREE.js, swift SceneKit). When these 2D GUIs say "GPU-accelerated", does that mean they are doing the same thing you would do for 3D - build triangular meshes, materials, etc - and then just essentially point an orthographic camera at this "scene"? Or what kind of low-level GPU APIs (eg WebGPU) are used?

> When these 2D GUIs say "GPU-accelerated", does that mean they are doing the same thing you would do for 3D

Yes. Nowadays all modern desktop interfaces—Aero/Metro/WinUI2/3 on Windows, Aqua/Cocoa on macOS, KDE, GNOME, XFCE, LXDE, and even some window managers on Linux—are 'GPU-accelerated'.

Every window is a quad of two triangles. There's no real vertex shading since it's all orthographic, as you mentioned. The framebuffer for each window is exactly the x:y resolution of that window (macOS does some interesting 1:2 resizing here sometimes). The 'fragment shader' stage is where the GUI toolkit comes in: it writes to these buffers and does any decorating where needed.

The final framebuffer is exactly the resolution of the entire monitor (again, macOS may do some weird 1:2 resizing).

The framebuffers of all windows on-screen are composed into this one. This is where things like transparency effects, the window and scroll controls, drop shadows, any 'rounding off' masks (used to great extent in macOS), and funky 'frosted glass'/'reflection' effects come in. This gives the effect of windows behind/in front of other windows. This is also when partially/fully off-screen windows are clipped/culled against the viewport frustum (not really a frustum but more a cuboid since it's not a perspective).

Once all this is done, you have a frame that's ready to be piped down the display cable into the display.

There are some other facets muddying the water like HDCP DRM protection for the entire framebuffer or some window framebuffers, variable-rate refresh, and so on. The former is how PrintScreen on Windows returns a black screen for some windows—that's HDCP in action.
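A CPU-side sketch of the compositing step described above (conceptual only; real compositors do this on the GPU with one textured quad per window, and the types here are hypothetical): each window's framebuffer is blended back-to-front into the screen-sized framebuffer, clipping anything off-screen.

  #include <vector>

  struct Pixel { float r, g, b, a; };

  struct Framebuffer {
      int width = 0, height = 0;
      std::vector<Pixel> pixels;   // width * height, row-major
  };

  struct Window {
      int x = 0, y = 0;            // position on screen
      Framebuffer fb;              // the window's own framebuffer
  };

  // Composite all windows (given back-to-front) into the screen framebuffer.
  void composite(Framebuffer& screen, const std::vector<Window>& windows) {
      for (const Window& w : windows) {
          for (int row = 0; row < w.fb.height; ++row) {
              for (int col = 0; col < w.fb.width; ++col) {
                  int sx = w.x + col, sy = w.y + row;
                  if (sx < 0 || sy < 0 || sx >= screen.width || sy >= screen.height)
                      continue;    // clip/cull off-screen pixels
                  const Pixel& src = w.fb.pixels[row * w.fb.width + col];
                  Pixel& dst = screen.pixels[sy * screen.width + sx];
                  dst.r = src.r * src.a + dst.r * (1.0f - src.a);  // "over" blend
                  dst.g = src.g * src.a + dst.g * (1.0f - src.a);
                  dst.b = src.b * src.a + dst.b * (1.0f - src.a);
              }
          }
      }
  }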


Sorry, I just have to pop in and say that DPI scaling makes the concept of "resolution of the window" much more complicated, since you have a logical resolution and an actual rendering resolution, and the two may not necessarily map 1:1 when DPI scaling is in effect.

Most of the drawing work of a GUI is drawing text, shape paths, and images. A GPU-accelerated UI layer draws these using GPU commands. It's not exactly the same as making an ortho 2D scene, but conceptually that's pretty much it. Often it means using crude geometry (such as a quad per glyph in a text renderer, or a "ribbon" of quads to cover a curve) and using a shader to draw the glyph or curve itself.

For a simple example, think of a rounded rect. Typically this would draw as a quad (a pair of triangles), and a shader would calculate the rounded corners.

There's also a lot of compositing and clipping that happens in a UI (e.g. a large widget inside a scrollbox), which is challenging to do on the GPU as these get nested.
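For the rounded-rect example, the per-pixel math such a shader would run looks roughly like this (a sketch of the standard signed-distance approach, written as plain C++ rather than shader code; the function names are placeholders):

  #include <algorithm>
  #include <cmath>

  // Signed distance to a rounded rectangle centered at the origin.
  // Negative = inside, positive = outside. In a real toolkit this runs in
  // the fragment shader over a quad that covers the rectangle.
  float roundedRectSdf(float px, float py,        // pixel position relative to rect center
                       float halfW, float halfH,  // half extents of the rectangle
                       float radius) {
      float qx = std::abs(px) - (halfW - radius);
      float qy = std::abs(py) - (halfH - radius);
      float ax = std::max(qx, 0.0f);
      float ay = std::max(qy, 0.0f);
      float outside = std::sqrt(ax * ax + ay * ay);
      float inside = std::min(std::max(qx, qy), 0.0f);
      return outside + inside - radius;
  }

  // Map the distance to [0,1] coverage over roughly a one-pixel band,
  // which gives cheap anti-aliased edges.
  float coverage(float sdf) {
      return std::clamp(0.5f - sdf, 0.0f, 1.0f);
  }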


Modern 3D rendering APIs (like Metal, D3D12, Vulkan, and, to some extent, D3D11) offer far more flexibility than the traditional triangles-and-materials approach. Rendering in Brisk leverages complex shaders that directly process drawing commands on the GPU, avoiding the intermediate step of converting them into a list of triangles.

For text rendering, Brisk uses FreeType to render glyphs on the CPU and caches these glyphs in a GPU-accessible texture, which is reused for improved performance. This approach is common among GUI toolkits for handling graphics and fonts.

In addition to this, Brisk employs SDF (signed distance field) graphics wherever possible, which are entirely computed in shaders.
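A simplified sketch of the glyph-cache pattern described above (hypothetical types, not Brisk's code): rasterize a glyph once on the CPU, stash it in a texture atlas, and reuse its region on later frames.

  #include <cstdint>
  #include <unordered_map>

  // Where a glyph lives inside the shared atlas texture.
  struct AtlasRegion { int x, y, w, h; };

  class GlyphCache {
  public:
      // Returns the atlas region for a glyph, rasterizing and uploading it
      // only on the first request; subsequent frames reuse the cached region.
      AtlasRegion get(uint32_t glyphIndex) {
          auto it = cache_.find(glyphIndex);
          if (it != cache_.end()) return it->second;
          AtlasRegion region = rasterizeAndUpload(glyphIndex);
          cache_.emplace(glyphIndex, region);
          return region;
      }
  private:
      AtlasRegion rasterizeAndUpload(uint32_t /*glyphIndex*/) {
          // Placeholder: render the glyph bitmap on the CPU (e.g. via FreeType),
          // copy it into the next free slot of the GPU texture, return that slot.
          AtlasRegion region{penX_, 0, 32, 32};
          penX_ += 32;
          return region;
      }
      std::unordered_map<uint32_t, AtlasRegion> cache_;
      int penX_ = 0;
  };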


I think it just means the texture images for the widgets are loaded into GPU memory space instead of a traditional framebuffer that gets copied to the GPU.

I dabbled in 3D for a while too and was astonished how much 2D stuff there is for it.


My friend and I used to build a cross-platform UI library in OCaml called… brisk. We didn't make it production-ready, so I am sure the naming is a coincidence: https://github.com/briskml/brisk.

Unfortunately, I didn't come across your library when checking for name collisions. You're right, this is purely a coincidence.

The one thing I have to wonder is:

  How many people really want to spend time programming their UIs?
I use Qt myself and one of the best things about the framework and toolkit is the UI tooling that allows me to drag and drop and visually create my UIs in the UI Designer app.

I find that for any non-trivial application, this type of boilerplate is best done with good tooling that just works and lets the UI be knocked together quickly and efficiently.

I also wrote a UI system for my game engine, but it's completely drag & drop in the editor. Styling (skinning) is also done via the UI builder.

Source:

https://github.com/ensisoft/detonator/tree/master/uikit

Live demo:

https://ensisoft.com/demos/ui/game.html

Question: how do you handle arbitrary clipping masks? In my solution, clipping masks require evaluating all of a widget's parents' clipping rects and writing them to the stencil buffer before rendering the current widget. This is unfortunately quite slow...
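One common alternative when every clip region is an axis-aligned rectangle (a sketch under that assumption, not a claim about how Brisk handles it): intersect the parents' clip rects into a single scissor rect per widget and skip the stencil buffer entirely; rotated or non-rectangular clips still need stencil or shader-based masking.

  #include <algorithm>
  #include <optional>
  #include <vector>

  struct Rect { float x, y, w, h; };

  // Intersection of two axis-aligned rects; nullopt if they don't overlap.
  std::optional<Rect> intersect(const Rect& a, const Rect& b) {
      float x1 = std::max(a.x, b.x);
      float y1 = std::max(a.y, b.y);
      float x2 = std::min(a.x + a.w, b.x + b.w);
      float y2 = std::min(a.y + a.h, b.y + b.h);
      if (x2 <= x1 || y2 <= y1) return std::nullopt;   // widget is fully clipped away
      return Rect{x1, y1, x2 - x1, y2 - y1};
  }

  // Accumulate the effective scissor rect while walking down the widget tree.
  std::optional<Rect> effectiveClip(const std::vector<Rect>& parentClips) {
      if (parentClips.empty()) return std::nullopt;
      std::optional<Rect> clip = parentClips.front();
      for (size_t i = 1; i < parentClips.size() && clip; ++i)
          clip = intersect(*clip, parentClips[i]);
      return clip;
  }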


> I use Qt myself and one of the best things about the framework and toolkit is the UI tooling that allows me to drag and drop and visually create my UIs in the UI Designer app.

But then it's not trivial to write responsive/adaptive applications. In contrast, QML makes it extremely easy to build such apps.

I used to build UIs in the designer as well[1] but after studying QML there's no going back. Here's a new project I program solely in QML (and C++ for the logic)[2].

[1] https://github.com/nuttyartist/notes

[2] https://www.get-vox.com/


Huh, can you elaborate on what you mean by this:

"But then it's not trivial to write responsive/adaptive applications."

Personally I prefer the widgets over QML mostly because QML is just too poorly typed and checked + you normally need to do a bunch of integration work between the QML and the C++ code. I do see the appeal though.


I mean that it's much easier to write apps that change their layout based on window size. Especially if you want to target both a desktop app and a mobile app using the same codebase. QML is great for that.

QML is definitely getting better in regard to type checking. For example, you can annotate a list with a type:

property list<int> myNumbers: [1, 2, 3]

You can annotate a signal with the expected types:

signal onThisChange(x: int, str: string)

Etc.

You can also use ENABLE_TYPE_COMPILER[1][2] to convert QML files to C++, which requires you to add type annotations to your code for it to work, but I don't really have experience with that.

I'm sure there are even more examples I'm missing. There was a discussion regarding TypeScript support in QML[3] but I guess they decided to do it their own way[4].

[1] https://doc.qt.io/qt-6/qtqml-qml-type-compiler.html

[2] https://www.qt.io/blog/compiling-qml-to-c-qtquick-controls-a...

[3] https://bugreports.qt.io/browse/QTBUG-63600

[4] https://bugreports.qt.io/browse/QTBUG-68791


I'll wait for the new gen of developers to re-discover WYSIWYG cross-platform GUI frameworks such as C++ Builder / Lazarus / Delphi, like we had in the '90s.

Nice project! Do you think you'd ever support a "live preview" or "hot reloading" of the UI or is that beyond the scope of the project?

This is definitely a great feature for future versions and is already in our backlog, though it will require significant effort to implement. Thank you.

To make it really declarative and exception-safe, you need to wrap widget creation in a function and return a smart pointer.
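Something like this minimal sketch (a hypothetical helper, not part of Brisk): a factory template that allocates behind the scenes, so callers never see a raw new and a throwing constructor cannot leak.

  #include <memory>
  #include <utility>

  // Hypothetical factory: forwards constructor arguments into make_shared,
  // keeping allocation and ownership inside one call.
  template <typename W, typename... Args>
  std::shared_ptr<W> makeWidget(Args&&... args) {
      return std::make_shared<W>(std::forward<Args>(args)...);
  }

  // Usage: auto button = makeWidget<Button>("OK");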

Great job, looks like a huge amount of work.

Very nice. It would be great if you could make it compatible with a Python 3 wrapper.

Incredible to see people working on this, so kudos.

Why would someone use C++ and not Rust in 2024? Familiarity and experience?


I see that WebGPU is used, so it's a good candidate for trying in https://exaequos.com, the OS I am creating that runs fully in the web browser.


