Integrating Zig and SwiftUI (mitchellh.com)
165 points by ingve on May 27, 2023 | hide | past | favorite | 55 comments



SwiftUI shines because you get to write Swift code and embrace the DSL

You'll never be able to write that kind of code with Zig, or C or C++; it's impossible: https://github.com/amosgyamfi/open-swiftui-animations

So the main advantage here would be being able to consume your Zig code/libraries from your Swift application, and that does look interesting: you could write your cross-platform app logic in Zig and only use Swift for the UI

That's the advantage that should be advertised. Kotlin tried this with Kotlin/Native; it seems like a good strategy


The downside of this kind of strategy is that for an engineer to work on it, they need to be proficient in iOS, Android, and zig/c/c++. It’s somewhat uncommon to have engineers known just iOS and Android, let alone a 3rd language / stack.

Another issue is maintenance and debugging - even something trivial like not being able to set breakpoints in the shared code can be a significant slowdown to an engineer. Not to mention the extra wrapping layers your native code needs to smooth out the interactions between the shared and native layers.

For sure there are settings where it makes sense to take this approach, especially if code is shared across more than two platforms. But just for iOS and Android I wouldn’t necessarily say it’s a more efficient solution.

Working on such shared code is also pretty stressful. In a pure native codebase I can simply make the change and test it, but with shared code I have to worry about consequences across many platforms - did I just fix something on one platform only to introduce an issue on another?


> even something trivial like not being able to set breakpoints in the shared code

I think your overall point still stands, but you can set up breakpoints in Zig and have the normal tooling work correctly with it.


> Working on such shared code is also pretty stressful. In a pure native codebase I can simply make the change and test it, but with shared code I have to worry about consequences across many platforms - did I just fix something on one platform only to introduce an issue on another?

This is one of the biggest downsides.

The other big one is that the write → test → deploy cycle is extremely slow, to the point of extreme frustration, with these kinds of tools (at least for Kotlin Multiplatform, anyway)


> It’s somewhat uncommon to have engineers known just iOS

This is very common in the iOS world. Not sure about other areas.


Why did you break off mid-sentence? If you continue reading the rest of the sentence, you will see that they are saying the opposite of what you think they are:

> It’s somewhat uncommon to have engineers known just iOS and Android, let alone a 3rd language / stack.


> If you continue reading the rest of the sentence

who stops reading a sentence in the middle?

I did not include Android or other stacks because I don't know about them (as I said, "not sure about other areas"). But for iOS it is not uncommon for developers to solely specialize in it, yet the comment claims it is uncommon. I wonder if you misread my comment or the one I replied to? Or perhaps the original comment is grammatically ambiguous.


My wording could have been better - I meant that it’s uncommon for engineers to know both iOS and Android at a deep level. Sharing code can let you share a lot of your logic, but you still need to understand the native host environment well, in fact maybe even deeper than normal since you are using unconventional approaches.

So you need engineers that have this deep iOS experience, deep Android experience, and deep C++/Rust experience. If you are a solo dev with this skillset then maybe it's a good choice. But in a team / big company setting, such an unconventional stack will always be a drag chute on your velocity at every stage of the project.


They were saying that it's uncommon for a single developer to specialize in both iOS and Android


Second paragraph in OP:

> One approach to building a native GUI for a cross-platform application is to write all of the business logic in a cross-platform language (C, Rust, Zig, etc.) and then write the platform-specific GUI code. This is the approach I take with my terminal emulator and it works really well. As of the current date writing this post, 93% of my repository is business logic in Zig and C, and 4% is macOS-specific GUI code in Swift.


React Native and all these similar DSLs always seemed like a bad idea to me. I can forgive it on the web, where you have no choice but to build your sand castles on top of the three broken pillars: HTML, CSS, and JS.

I wonder, though: was the development of SwiftUI influenced by the abundance of React developers, or is this what peak UI development looks like? In the history of GUI applications, does React represent the best we have managed to come up with? I've used it, and to me it kinda sucks, but I have never worked on the UI side of anything except for small web frontends.

I always just assumed that in the 40+ years there were some better UI libraries, but apparently not?


The main benefit of declarative UI frameworks is that they remove the need to manually reconcile changes in state with changes in the representation of that state (with the caveat that it often becomes difficult to do so if the DSL doesn’t cover the behaviour you want).

UIs inherently have a hell of a lot of state that is constantly in flux, so it can become very difficult to manage those transitions correctly and efficiently. By providing a description of how an interface should look given some program state, the developer only need be concerned with the flow of data, and the framework can automatically create and apply the delta to update the UI in the most efficient way. For example, there’s no possibility to forget to re-enable a button in that rarely-used code path, because the “enabled” state is derived from the rest of your app state and any changes to that automatically cascade downwards.

I’m not saying these things are a silver bullet (and I don’t even really do much UI stuff anymore), but when my job involved frontend web stuff, moving from jQuery to Vue was both an unbelievable jump in productivity and a massive decrease in UI bugs (and LoC!).


DSL stands for Domain Specific Language, not declarative. It is possible to be declarative without a hybrid DSL like JSX. Angular is declarative, but it doesn't mix html structures and js in the same way.


I wonder if you responded to the wrong comment.


It's the one I intended to respond to. React isn't declarative in a new way. Backbone.js popularized declarative event bindings long before it. It is a bit of a unique DSL though.


It was odd because the comment you were responding to does not make the point that you seem to be debating against.


I suspect it would be influenced more by QML than React. Why jump to the conclusion that it's React?


> In the history of GUI applications does react represent the best we have managed to come up with

tl;dr, yes.

Ignore the DSL syntax of React (or SwiftUI or whatever), and understand that React's model is that UI is a function of state. In my experience, being able to declare how your application should look given its current state (and just let the engine figure out how to get there) is significantly more productive.

This works very well with React on the web because the primitives there are pretty basic, and there's a well-understood way to break back down to the imperative DOM APIs. Others (SwiftUI, WinUI XAML, from my experience) don't quite seem to have gotten the mix right yet, so you have a harder time once you need to do more complicated things.

SwiftUI (like React Web) has some really neat benefits from this - views can be serialized down and rendered without your application running. This is how lock+home screen widgets, and watch complications work - the app is intermittently asked to provide snapshots at certain points in time, which the OS serializes and shows later. This is easily possible with SwiftUI because it fundamentally is data.


Are you suggesting that Zig never lets you just think about the problem? That's some terrible preoccupation with an idealized concept of a DSL.

The swift code examples look kinda slick but it isn't a different paradigm any more than JSX is for building HTML.

It seems like a form of The Bipolar Lisp Programmer. https://www.marktarver.com/bipolar.html

> Lisp is like wielding an air gun with power and precision.

Those Swift examples you show are very exciting, but it's best not to get carried away. It's the same as people who say that vim is terrible because it's modal, and who don't believe vim programmers can think about the problem while typing because their brain is distracted by switching between modes.


>"So the main advantage here would be to be able to consume your Zig code/libraries with your Swift application"

I've heard that BS from MS reps when they visited our company sometime in the 90s. They were telling us how we should have monkeys drawing forms in VB and gurus writing the main code in C/C++. Then I showed them Delphi. You should have seen how sour their faces turned.

>"You'll never be able to write that kind of code with Zig, or C or C++"

Your statement looks totally wrong. The code looks like a simple "fluent" notation that is quite possible in the languages you mentioned.


"you could write your crossplatform app logic in Zig, and only use Swift the UI"

The UI glue code would be nothing special, and not improved by Swift vs Java. The UI would be designed in a design tool. I like open source, so perhaps Penpot.


Rust also works quite well from Swift, in the same way

The key to a nice developer experience is making the build from source flow as easy/integrated as possible

I use a build script in Xcode to build a Rust static library automatically when building an app. This is the latest version in a project I am helping with: https://github.com/hackclub/burrow/blob/main/Apple/NetworkEx...

It would be cool to write a similar adapter but to run build.zig!


the key for the 90% programmer, though: Zig seems to be 100 times easier to master.

to me: use Zig where you'd use C, use Rust where you'd use C++


this is also how the Element X rewrite of the Element Matrix client works - although with Rust rather than Zig. We had to add async support to uniffi for a good experience from Swift (and Kotlin) tho! https://youtu.be/eUPJ9zFV5IE?t=1280 has some deets.


IMHO that’s one of the main benefits of following the Model-View-ViewModel architecture pattern for GUIs. You can have all the models and viewmodels defined in a platform agnostic library, and your views are the only thing to implement in a platform specific way. The core library can be implemented in any language, you just need a way to call methods and setters when the user interacts, and handle update events to update the UI.


This seems like a really good idea! But at second glance, won’t differences between the UI designs on different platforms require subtly different viewmodels for each platform?

Not if you make the exact same UI on each platform, I bet, but if you do that, you might as well use something like Flutter or React Native and only code the whole thing once, no?


Yes, there are differences between platforms, which complicates the picture a bit. There are trade-offs/judgement calls when deciding what needs to be in the core library and what to keep in the platform-specific code. You can always wrap a shared viewmodel in a more specific one if you have to account for a few differences, or just decide to keep a bit of logic in the view. If >80% of your viewmodels can be shared between platforms, MVVM can be really useful (just my own rule of thumb). If you have more differences between platforms, I guess you will want to try something else.

Regarding Flutter: yes, I agree with you, their approach is different. It's better suited if you want to design a single UI that will be rendered the same on every platform, without relying on native controls.


Noob question,

What makes languages like Zig, Rust, C and C++ the best fit for cross platform applications over many garbage collected languages? Why is bringing the language runtime a problem?

What does it mean to compile to a C-compatible library?

EDIT:

I decided to ChatGPT my question instead.

https://chat.openai.com/share/9a5f9f7a-0f5d-4cf6-95fc-4e0ec9...


ChatGPT's responses start accurate then quickly go off the rails. The section from this point onwards is completely incorrect:

> Say I called a bunch of goroutines when I was in the Add function of the example you gave, would this be a problem?

The Go runtime is initialised only once in c-shared mode, for the lifetime of the application - it would make no sense to do it on every function invocation, and it would be incredibly slow. So the answers to this section and the next one are largely bogus.

i.e. this response

> However, once you call a function via a C or Swift bridge, it becomes a synchronous operation and will block the calling thread until all goroutines have completed execution. Therefore, you would need to effectively manage the synchronization of these goroutines to avoid unnecessary blocking of the calling thread.

And the response to this question:

> You said in 4 the Go runtime may not keep running, does this mean that every invocation of the Add function has to spin up the whole Go runtime every time? Why cant it just stay alive inside the Swift process?

Are completely incorrect.


Pretty much every modern language (Zig, Rust, C, and C++ included) depends on a runtime. The C runtime is privileged because it is already present on all 3 desktop OSes.

It is also a lot smaller than most other runtimes, which makes bundling the C runtime with the program more palatable.

A "C-compatible library" is a library (i.e. a collection of functions) that is callable in the same way that functions written in C are called. Nearly all non-C languages provide a way to call C functions (because, again on all modern desktop OSes, the operating-system interface is written in C).

If everyone wrote OS interfaces in perl, then you would want to compile to a perl-compatible library. If the Lisp machines had won, then you would be compiling to a Common Lisp compatible library.


That is somewhat conflating the calling convention and the runtime. C really doesn't have much that you can call a runtime outside the calling convention, although some libraries (e.g. pthreads) have a little runtime which matters in practice for integrations, but these are libraries, not parts of the language itself.


malloc and free are part of what I would consider a runtime. As are the file objects opened for standard I/O.

It's a very small runtime, but all the code that runs before main is properly runtime initialization.


> The C runtime is privileged because it is already present on all 3 desktop OSes.

Yes, Ubuntu, Debian and Fedora.

On Windows, you don't get a C run time; you ship a MS Visual Studio run-time DLL if you're using Microsoft's tools (and not static linking), or something else with someone else's tools; maybe a CYGWIN1.DLL or whatever.

You get platform libraries like kernel32.dll and user32.dll; but those are not a C run-time. They are easy to call from C, but other than that, they are the OS run time.

Recently Microsoft has made an effort to create a "Universal C Run Time" for Windows; but I think you still have to download and ship that, and there may be reasons for someone to choose a different run-time. (E.g. needing a pretty detailed POSIX implementation.)


I thought msvcrt shipped with windows as early as XP; was I wrong?


Even earlier; but that's a private system library you're not supposed to use.

Raymond Chen explains it: https://devblogs.microsoft.com/oldnewthing/20140411-00/?p=12...

Microsoft's new Universal C Run Time addresses the problem of there not being a C library on Windows that is for public use (every compiler vendor providing their own).


Curious to hear from experts in the above whether ChatGPT's response is valid. As someone who knows nothing about either, looking at a nuanced response from ChatGPT puts me in awe - esp. the response to the question: "Say I called a bunch of goroutines when I was in the Add function of the example you gave, would this be a problem?"


As I responded in a sibling comment, this is the point where ChatGPT goes completely off the rails and starts fabricating responses. Temper your awe :)


It's usually about the libraries: UI libs are all made in C/C++, and those "native" languages have better integration with them (FFI).

For example, calling C from Go is doable, but it's not encouraged and sometimes a bit slow.

On the other hand, Go and Java are truly multi-platform; it's well supported and usually easier than with other native languages.


Nice post!

I'm not sure if the same result is applicable to most software projects. A terminal emulator doesn't need several types of GUI objects, while most other projects need to display and handle input for several different types of data.

In most projects I've worked on, the proportion of GUI code to business code was different: I saw approx. 70% GUI code to 30% business code.

So a cross platform GUI layer saves a lot more development time.


> In most projects I've worked on, the proportion of GUI code to business code was different: I saw approx. 70% GUI code to 30% business code.

Is it just me, or is this a sad indictment of GUI libraries/frameworks in general? I'm not saying that the majority of situations I've seen are different, but looking at these percentages it's actually sad that they seem likely, especially when you consider how little control most modern GUI solutions afford you and how constantly you have to work around their bad design. We're paying a lot for very little in the modern GUI landscape.


Is 70% of the code actually GUI code, or is it business-and-platform-logic-coupled-to-GUI code?


New kinds of widgets, composition of classic widgets, view models, event handling (selection state, drag and drop, user inputs), visuals, shortcuts and accessibility.


How does the api and "bridge" look in this type of setup? I'm trying to wrap my head around which parts you'd keep in the cross-platform layer (and whether that layer might be very thin in many applications I've built).

If the application is heavy in UI state, does this sort of thing still make sense? I can't imagine what hell bidirectional data binding would be in this setup.


OP here. UI state lives in the GUI code. For a GUI toolkit like SwiftUI, that's quite important ergonomically.

I'm unsure how to really describe how the "bridge" code looks. All GUI interaction code happens in Swift, and when buttons are pressed and so on, I call functions back into Zig, and so on.

There are certain scenarios where my Zig code calls back into Swift (via function pointers provided). For example, requesting to quit the application. These are handled using NSNotificationCenter accordingly.

As I stated in the post, my cross-platform layer is >90% of my lines of code, and this isn't a trivial program, so for my use case this is working great.


The post is great, hope I didn't seem to be critiquing it! I liked the idea so much I wondered if there's a magic reactive layer with data bindings so I could use it in UI-heavy cross-platform apps.


Interesting.

I am thinking of writing such a bridge for my use case using Wasm (Wazero) with Go.

Hoping the perf impact of crossing the bridge will not be too high.


Good job. Please add a few words about debugging.


This approach gets me really interested. What are the improvements I should expect in Zig compared to C? And how does the Zig stdlib compare to something like the Go stdlib (which to me remains the gold standard)?


>What are the improvements i should expect in zig compared to C ?

https://ziglang.org/learn/overview/

>And how does zig stdlib compare to something like the go stdlib

the stdlib is still a work in progress and not the current focus of development. Right now things get added to it organically (i.e. things needed by the compiler itself, the toolchain, or as a proof of concept for other important functionality, like an event loop). As we move forward we will have a phase where we make a final decision on what should stay in, what should be improved, and what should be cut.

In the meantime you can quickly get a feel for what's in the stdlib by browsing the online docs:

https://ziglang.org/documentation/master/std/


I just read the docs and saw a page saying the stdlib only targets x86 at the moment? So no ARM?


According to https://ziglang.org/documentation/master/#Targets the only fully supported targets are x86-64. This does not mean that the entire stdlib will only work on x86-64, but that specific functions that require information about the inner workings of an architecture are only guaranteed to work on x86-64.

If you're interested in a hash function like CRC32, you can see in the source code that there is no dependency on arch-specific functionality, which makes it likely that it will work fine on other (LE?) architectures. This is on a case-by-case basis at the moment, though. The advice from the Zig community is to check the source code in case of doubt. I have always found the source to be accessible, so this hasn't been an issue for me.


Nah, that's absolutely not the case; we support a bunch of architectures (with varying degrees of support). Can you link to the page that said that?

This is the most up-to-date support table for Zig:

https://ziglang.org/download/0.10.0/release-notes.html#Suppo...


Sorry, I can't find the link again. Maybe it wasn't on the official site but on an old website.

Another question I have: is there any kind of mechanism to ensure at least some memory safety with regards to memory deallocation?

I'm honestly halfway to trying to prototype a cross-platform lib for my mobile app using Zig, but this part is really the one that bugs me. I'm fine with having no GC or ARC like Go or Swift (and I'm OK with no ownership mechanism like Rust's), but seeing absolutely nothing new since C on that side is both a surprise (since Zig has improved other aspects, like slices and fat pointers) and a worry.


The GeneralPurposeAllocator will tell you about memory leaks and double frees in debug mode. Take a look at the example on the front page of ziglang.org.

That said, the main features that Zig offers for avoiding those mistakes are defer and errdefer; those really go a long way toward avoiding mistakes, while also keeping things simple and explicit, which also helps avoid bugs.

ArenaAllocator is another thing that helps with memory safety, depending on what you're doing.

Finally, having to pass an allocator explicitly to all the things that need to allocate also helps with that: if you pass an allocator to a struct when initing it, then you know you will most probably have to call its deinit method once you don't need it anymore.


> Another question i have : are there any kind of mechanism to ensure at least some kind of memory safety with regards to memory deallocation ?

In general you probably need to learn about custom allocators and how different kinds can help you manage your memory. This applies to pretty much any lower-level language, but Odin, Zig, C++ and a few others have explicit support for this in their standard libraries.



