The "underlying C semantics" is a very important point there though. Consider C++, adding a GC to it has the same problem. The GC in Rust was also a complete mess, they're kinda all in the same bucket.
Swift, had it not needed backwards compatibility with Objective-C, could probably have gone with a tracing collector. It's a higher level language than the other ones.
Yes, to the point that only two production-quality GC-enabled C++ dialects exist: C++/CLI and Unreal C++.
C++/CLI is only considered safe as long as pure MSIL is being generated and specific C and C++ features are avoided; if you make use of them, the verifier flags the code as unsafe and some surprises might happen, just like with Objective-C.
Whereas Unreal C++ uses specific macros to mark GC-aware objects, and possible bad behaviour is mitigated by the fact that those objects are mostly used from Blueprints, not raw C++ code.
Actually, once C++ has reflection it'll be possible to get rid of the ugly macros, and the "only" issue left is root pointers/shadow stacks. Googling just now to find the boost_pfr library, I noticed a newer one that would fit the bill ( https://github.com/boost-ext/reflect/ ).
Makes me itch to rewrite a small GC I wrote a couple of years back using that, to see how "painless" it can be made without ugly MINIGC_AUTOMARK(..) macros ( https://github.com/whizzter/minigc ).
Reflection will make it trivial to generate tracing code automatically, so roots are the only real issue. If you do like me and require that the roots be explicitly passed to collect, then the programmer can just make the shadow stack by hand.
Though of course this will still be a much simpler beast than the sort of concurrent GC you see in managed languages. Supporting that sort of GC is a whole other ballgame, needing safepoints inserted into functions and things like that.
> Swift, had it not needed backwards compatibility with Objective-C, could probably have gone with a tracing collector.
Programs using tracing GC have a higher memory ceiling than the equivalent using RC, don't they? Because things hang around for a little (or a lot) longer than is strictly needed. Apple's M.O. has been to put as little RAM in the iPhone as they can get away with.
Yes; on the other hand, the optimizations that make RC impact run-time execution as little as possible slowly turn it into a kind of tracing GC algorithm anyway.
Especially when you add things like background cleanup threads to handle the domino effect when a count reaches zero and the pointer refers to a graph structure full of pointers that will themselves reach zero.
And then there is the whole issue of non-moving memory blocks and the resulting fragmentation, unless the pointers get turned into handle-like ids.
Yes, at the price of taking time away from the mutator thread, while a tracing GC can do useful work in parallel. Arguably though, especially for early mobile devices, RC might have been the better choice.