How async/await works internally in Swift (swiftrocks.com)
148 points by ingve on Sept 28, 2023 | 47 comments



I've been playing around with Swift and SwiftUI for the past few months, and the thing that baffles me is why certain features are only available with a minimum iOS version.

For example, async/await required iOS 15.0. Why is this tied to the OS? Why can't newer runtimes be downloadable like Node / Java / .NET, etc.?

Other examples are from SwiftUI. For example, NavigationStack appears much more useful than the older NavigationView, but it requires iOS 16, which means you can't support anything older than an iPhone X.


Here’s a blog post where they describe the rationale behind switching from app-specific Swift versions to bundling the runtime with the OS.

https://www.swift.org/blog/abi-stability-and-apple/


I think it comes down to Apple trying to avoid as much fragmentation as possible. Fragmentation makes debugging, troubleshooting, and development a nightmare. It’s not an objective good, but everything has its trade-offs.


FWIW, as the application developer, it increases fragmentation, since more of the code in your app is determined by which OS version the user is running. If Apple were developing and debugging everyone's apps, that argument would make sense (but, of course, they are not). If you truly want to minimize development and debugging costs by minimizing fragmentation, you want to provide as uniform and stable an interface as possible for the developer, and let the app operate as identically as possible across every device it will ever run on, not just today but into the future.


Most Apple users always run the latest iOS version. And the vast majority of apps no longer see major new OS features each iOS release that they simply must adopt to stay competitive; they can focus on their business these days.

So in that kind of market, this approach does reduce support costs: you just pick a target iOS version to support, test with devices running it, and know that newer devices will also just work.


> Most Apple users always run the latest iOS version.

I assume you're talking about the latest iOS version for their device, which may or may not be the latest iOS version released by Apple. I was running around with an iPhone 5 until 2019ish when apps stopped working altogether.


New iOS version adoption figures speak for themselves. Honestly it doesn’t sound like you are “most people”.


I mean, looking at https://www.statista.com/statistics/565270/apple-devices-ios... -- "earlier" versions stick around 5-10%, which was enough to keep people supporting IE for over a decade beyond its EOL. So, I don't think you have a valid argument.


5–10% of people using earlier versions absolutely backs up the statement “Most Apple users always run the latest iOS version.”

It’s not reasonable to compare native apps to browser support. Dropping support for an older version of iOS doesn’t cut users off; they can continue to use the most recent version of your app that supports their device.

It’s especially unreasonable to compare it to people supporting Internet Explorer for a decade. Microsoft halted all development for five years and people carried on using it much longer. Apple comes out with a new major version of iOS every year and nobody is using decade-old versions of iOS.


I’m currently on an iPhone 7, so I’m also stuck on an old version.

It works fine, so I don’t see a need to get a new phone, except that fewer and fewer apps are going to continue working over time.


I don't see how this would reduce support costs compared to shipping the runtime with the app. Couldn't you just pick a target iOS/Swift runtime version and support that even if the runtime wasn't tied to the iOS version?


The reduced support cost is that your testing complexity (i.e., cost) is hugely driven by how many OSes you are supporting. By bundling the runtime with the app, you’re pretending that the Swift runtime is the only thing you have to test, when you actually have to test all the OS integration bits too. So tying the runtime to the OS says “these are the same,” and you only have a single compatibility flag to select. They do this with the C++ runtime as well, btw. And from what I read, I believe the language and the runtime are decoupled, although I haven’t paid attention to Swift for a long time (i.e., you can enable new language features without using a new runtime).

The other support cost it reduces is Apple’s, because their testing matrix for making runtime releases is drastically reduced, which means the Swift team runs more efficiently.

It’s annoying, but for end users it’s even more valuable, because apps are smaller (which also means more longevity for storage) and less memory is used.

TLDR: Apple has always run the languages this way and it works nicely for their ecosystem.


Joke's on them because debugging is already a nightmare.

I've been using Swift since its inception, and it's been years since I've been able to pause execution at a breakpoint, do "po object", and get something back that is not an error.


Well, for starters, if you were actually playing around with async/await on iOS you’d quickly find out it was backported to iOS 13.

It is frustrating, though in my case 40% of my DAU are already on iOS 17 (>3m devices), so it’s not the end of the world.


My understanding of the reasoning behind this is that they don't want apps to have to ship the runtime, to keep downloads and app sizes from getting large.

That could be fixed by just shipping an updated shared library to all phones, similar to how Google Play Services works on Android, but I guess they figure that if you're updating anyway, you might as well just update the whole OS.


Async/await has been backported to iOS 13.


Yeah, that's a bad design that leaves many functional old devices in the dust


I’ve been doing multithreaded embedded systems for over 20 years, and the latest system is a custom actor C++ framework (so yes, I’ve professionally used actors in a shipping system).

Having said that, I find Apple’s implementation exceptionally complex. Aiming for safety and catering to inexperienced devs, they’ve created something way too complex for those very devs to understand. In my personal opinion, they should have set aside actor safety and gone with a simpler, easily understandable model, relying on experienced devs to understand what is happening under the covers and program accordingly.

To use a cooking analogy, we live in a bizarro world where chefs are being asked to abandon cooking knives (they’re dangerous) and to use dozens of kitchen gadgets for safety. And the limitations this produces…


I wonder: do you happen to know how Swift's new concurrency model compares to C# and Java 21?


Yet to read the article :)

C# and Rust async models are built on top of state machines and cooperative yielding back to the executor. The difference between C# and Rust is a trade-off: either you get the ability to just not think and use async/await naturally with really nice defaults, paying for it with heap allocations of the state captured by continuations, or you deal with the memory model explicitly, which requires more effort but gives you fully configurable and/or deterministic behavior that can (and usually does) achieve far lower overhead.

Java uses green (virtual) threads, where the runtime can preempt (pause/suspend) them (keeping the virtual thread's stack in memory) and schedule the execution of a different green thread on top of the current physical one. This is not dissimilar to C#/Rust, which achieve the same explicitly: the next work item is executed by the thread pool once the current one yields.
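
For reference, Swift's model is also cooperative and continuation-based, closer to the C# side of that trade-off: at each `await` the function suspends and frees its thread for other jobs, resuming later, possibly on a different thread from the pool. A minimal sketch with simulated I/O:

    // Simulated I/O: sleeping suspends the task without blocking a thread.
    func download() async -> [Int] {
        try? await Task.sleep(nanoseconds: 100_000_000)
        return [1, 2, 3]
    }

    func fetchAndProcess() async -> Int {
        let data = await download()  // suspension point: the thread is released here
        return data.reduce(0, +)     // the continuation may resume on another thread
    }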


My main concurrency pattern is scheduling a computation made of a series of asynchronous steps. I want the computation to be cancellable, and I want to schedule computations so they run in order.

Basically an NSOperation scheduled on a non-concurrent operation queue, where the operation itself is made of async calls.

What would be the recommended approach for doing that with Swift concurrency?


I have found myself, many times, in exactly the same situation. I think a concurrency-compatible queue is basically essential for many kinds of pretty boring situations. I made one here, and it looks a lot like an OperationQueue. The repo also points to another implementation I found recently that is certainly different, but also might be interesting.

https://github.com/mattmassicotte/Queue
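
For a rough idea of the task-chaining pattern such queues are often built on (a minimal sketch, not the linked library's API): each enqueued operation awaits the previous one, giving FIFO order, and cancelling the returned Task cancels just that operation cooperatively.

    @MainActor
    final class SerialAsyncQueue {
        private var lastTask: Task<Void, Never>?

        @discardableResult
        func enqueue(_ operation: @escaping @Sendable () async -> Void) -> Task<Void, Never> {
            let previous = lastTask
            let task = Task {
                await previous?.value                    // wait for the prior operation (FIFO)
                guard !Task.isCancelled else { return }  // skip work if cancelled while queued
                await operation()
            }
            lastTask = task
            return task
        }
    }

Confining the class to the main actor is just one way to serialize the `enqueue` calls themselves; an actor works too if you're careful about call ordering.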


Thanks, yes, I saw that link in the video linked in the OP. I am absolutely dumbfounded that Swift concurrency and actors don't offer a trivial solution to that problem.


Probably write your own queue


Sure, how?

One actor per queue, then one actor per operation ?


Depends on the exact semantics you want. What are you looking for?


Swift async/await is such a footgun that it induces deadlock after deadlock, and what's even worse, it deadlocks the entire app, so you cannot notice and report the deadlock state from within the app itself, which leaves you blind. I almost never had this problem with Dispatch; you could fairly reliably reason about some backup queues being able to detect these states and report the deadlock.

IMO, I would recommend interacting with async/await as little as possible and sticking to dispatch queues, which you can reason about far more easily.


Do you happen to have some links to information about the issues you mentioned?

Swift async/await has worked excellently for me so far. The biggest issue is that most libraries aren't updated to use it (and sometimes couldn't be, because they require Custom Actor Executors, which weren't available until Swift 5.9).

I found the following video helpful to better understand Swift async/await: "Swift concurrency: Behind the scenes" https://developer.apple.com/videos/play/wwdc2021/10254


I'm not sure what you're doing to get into deadlocks, but when used as prescribed, I personally haven't run into these issues.

Swift concurrency is still in a transitory period, and with that come some warnings about how you can mix it with legacy concurrency primitives, e.g. not holding a lock across Task boundaries.

However, it's fairly well documented. There's a talk, 'Swift concurrency: Behind the scenes' [1], that goes into detail on this. View from around the 25-minute mark.

[1] https://developer.apple.com/wwdc21/10254?time=1614
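
To make the lock warning above concrete, this is the shape of misuse the talk cautions against (a hypothetical sketch, not taken from the talk):

    import Foundation

    let lock = NSLock()

    func risky() async {
        lock.lock()
        await Task.yield()  // suspension point: the function may resume on a
        lock.unlock()       // different thread, and unlocking an NSLock from a
                            // thread that didn't lock it is undefined behavior
    }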


I've not really played around with async/await, because I immediately found issues trying to replace my GCD code; Swift still doesn't have the fine-grained control GCD offers.

I replaced some of my other async code with Combine, which I do really like now; it's proving itself to be pretty solid.


I’m curious what sort of data structures you’re using where you’re hitting deadlocks? Are you by chance doing shared mutation of lists?

I’ve written several async Swift apps and not hit deadlocks, but I also tend to structure my data in ways that avoid shared read/write access where possible.


A lot of people are asking you how it's possible you're getting deadlocks, so I'll reply to all of them here: It's definitely possible without doing anything wrong in your own code, if you're calling poorly written code in other frameworks. See: https://forums.swift.org/t/deadlock-when-using-dispatchqueue...

The important quote:

> both Swift concurrency and Dispatch’s queues are serviced by same underlying pool of threads — Swift concurrency’s jobs just promise not to block on future work

What this means is that if you're in a Swift concurrency context and you dispatch_async work to a concurrent queue, then use a semaphore (or similar) to block on that work completing, the thread pool implementation will not backfill the blocked thread. Crucially, this is true even if the code doing the semaphore hack is old ObjC code that used to work fine.

So if some older code you happen to be calling is doing something like:

    func badIdea() {
        // Bug pattern: block the calling thread until the async work completes.
        let sem = DispatchSemaphore(value: 0)
        someConcurrentQueue.async {
            doLongRunningThing(completion: { sem.signal() })
        }
        sem.wait()  // blocks this thread; the cooperative pool won't backfill it
    }
and you happen to call `badIdea()` from all cores simultaneously, you'll deadlock.

Now, under normal pre-Swift-Concurrency circumstances, GCD would spawn a new thread to handle the queue.async block, which would free the semaphore (this leads to thread explosion, but at least not deadlocks.) But if the call to `badIdea()` happens to be done by a Swift Concurrency Task, then the thread pool gets a hint saying "don't worry, this thread will never block on future work", so it doesn't spawn a new thread to handle the dispatch_async, and you're hosed.

How exposed you are to this issue depends on what kind of code you're calling (third party, even code written by Apple) that may be doing this semaphore hack. You don't have to do this semaphore hack yourself, for this to be a problem. You just have to call into poorly written framework code which may be doing this.

Now, the answer to this problem is that "nobody should write code that does this", which is absolutely true, but it also is the case that there's a lot of code which does it anyway. A lot of people run into this function-coloring issue (which existed before `async/await` was a thing; completion-based functions have the exact same problem) and find themselves painted into a corner where they need to be synchronous but need to call asynchronous code, and using a semaphore works, so they just do it and ship it. Swift Concurrency rather silently changes the contract here, so that stuff that used to be "merely" a bad idea is now a deadlock.
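
For completeness, the usual fix is to wrap the completion-based API in an async function so callers suspend instead of blocking a thread. A sketch, reusing the hypothetical `doLongRunningThing` from the example above:

    func betterIdea() async {
        await withCheckedContinuation { (continuation: CheckedContinuation<Void, Never>) in
            doLongRunningThing(completion: { continuation.resume() })
        }
    }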


This is exactly it: a lot of Apple framework code under the hood is not safe, and you don't notice it most of the time, but you get users reporting it and can maybe reproduce it intermittently once a week, if that.

Most apps are not as intense as ours; they are web-app equivalents that just do a few HTTP calls and display form data. Our app is pretty intense, with GPU background jobs, image queries, AI models running, local DB modifications, and network uploads occurring, which makes it way more of a concurrency stress test than most. It acts like a local-only desktop app with some optional internet features.

It’s the loss of observability that is the worst part. If we could observe deadlock states, then it wouldn't be as bad.


Async/await is deadlock-free if things are working correctly (and they mostly are) and you don't do anything that hinders forward progress. What are you doing with it?


> Apple was one of the companies at the time that recognised the need for a safer, modern alternative to these languages. While no amount of compiler features can prevent you from introducing logic errors, they believed programming languages should be able to prevent undefined behavior, and this vision eventually led to the birth of Swift: a language that prioritized memory safety.

I'm pretty sure that's not the reason for the birth of Swift. Trying to access the 20th element of a 2-element array would cause a crash in almost all languages, including Objective-C, which Swift replaced.


Trying to access the 20th element of a 2-element array in C, or Objective-C for that matter, may, instead of crashing, print out a security key or give someone access to your credit card information. Security bugs are an order of magnitude more dangerous than availability bugs. The industry is finally realizing that as we mature software engineering into a proper discipline. Human beings will write bugs; that's just a fact of life, so we need to make sure the impact of those bugs is as small as possible. And the best tools we have for that are programming languages where entire classes of bugs don't exist anymore.


> Trying to access the 20th element from an array of 2 elements would cause a crash in almost all languages including Objective-C

It depends on which part of Objective-C you’re referring to: the C part or the Objective part. ObjC has NSArray, which has safe, bounds-checked accessors. But ObjC is a strict superset of C, and C has very unsafe C-style arrays. With the latter, you definitely don’t always get a simple crash for accessing out of bounds… you get UB and buffer-overflow exploits, same as in plain C.


Yeah, people that shit on memory safety don't understand that memory bugs don't just lead to your program crashing. Crashing has never been the real problem. The problem is that they lead to you having to roll out fixes in a hurry in a race against hackers, with the company and the safety of customer data on the line. A program crash is just an availability issue; a security bug can be an existential risk for a company.


Right. To elaborate on my original point, because ObjC includes C, saying "ObjC has safe arrays because it crashes on OOB" is both true and irrelevant. It has safe arrays, but that's not all it has. It also has very, very unsafe arrays. Swift doesn't have this issue, there is no unsafe array type in Swift (absent doing very unsafe byte-level address casting using functions that are literally prefixed with the word "unsafe".)
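
A small sketch of the distinction:

    let numbers = [1, 2]
    // numbers[20] would trap deterministically: "Fatal error: Index out of range"

    // The escape hatch exists, but it is explicit and named as such:
    numbers.withUnsafeBufferPointer { buffer in
        // buffer[20] here would be undefined behavior, as in C
        print(buffer[0])
    }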


This part of the article is actually backwards. The interesting basic part about Swift isn't that it has bounds checks on arrays (which everyone else also does); it's that it also checks integer overflow, which half the "safe" languages turn into silent logic errors.
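
For example:

    let x: UInt8 = 255
    // x + 1 overflows: Swift rejects it at compile time for constants and
    // traps at runtime otherwise, instead of silently wrapping to 0.
    let z = x &+ 1  // wrapping must be requested explicitly; z == 0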


> […] the so-called “precursor programming languages” like C++ or Obj-C.

As if there were no other languages before C++ or Obj-C, eh? What about C? Or Fortran? Or Ada? Speaking of Ada…

> Apple was one of the companies at the time that recognised the need for a safer, modern alternative to these languages.

People recognised the need for safer languages way before Swift was created.


I remember my dad complaining about the invention of Ada and how bad it was, designed by committee, etc.

At least, I think it was Ada. Can't ask anymore; he was born in '39, and we scattered his ashes nearly a decade ago.


Swift is a very modern language, and very few languages get close to its safety. Rust does and goes beyond; Zig is probably very similar.


Zig doesn't protect against UAF.

What Swift offers in safety has already been available in Lisp, D, Ada, OCaml, Haskell, and Delphi, among many others, for a few decades.


Thanks, I had to look up UAF: use after free.


Rust > Swift >> Zig when it comes to memory safety.


Since 1958, actually. Had UNIX not originally been available as free beer, C wouldn't have taken over the world.



