
To add to the GC discussion: something that many who weren't around for the GC project failure in Objective-C don't realize is that ARC was a pivot away from a failed project, but in good Apple fashion the history had to be sold their way.

The GC for Objective-C failed because of the underlying C semantics: it could never do better than a typical conservative GC, and applications routinely crashed when code compiled with the GC option was mixed with code compiled without it.

So they picked the next best strategy, which was to automate Cocoa's retain/release message pairs, and sold that as being much better than GC because of performance and such, not because the GC approach had failed.
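For those who never wrote pre-ARC Cocoa code, here is a rough sketch of what got automated; the two halves would be compiled separately (the second with clang's -fobjc-arc flag), and the variable names are just illustrative:

    #import <Foundation/Foundation.h>

    // Manual reference counting (pre-ARC): the programmer balances every
    // ownership transfer by hand with retain/release message pairs.
    void manualCounting(void) {
        NSString *greeting = [[NSString alloc] initWithString:@"hello"];
        NSMutableArray *list = [[NSMutableArray alloc] init];
        [list addObject:greeting];   // the array retains the string
        [greeting release];          // give up the +1 from alloc/init
        [list release];              // dropping the array releases its contents
    }

    // Under ARC the explicit release calls disappear; the compiler inserts
    // the equivalent reference-counting calls at compile time and rejects
    // manual retain/release entirely.
    void automaticCounting(void) {
        NSString *greeting = [[NSString alloc] initWithString:@"hello"];
        NSMutableArray *list = [[NSMutableArray alloc] init];
        [list addObject:greeting];
        // no releases written here; ARC emits them where needed
    }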

Naturally, given how Objective-C evolved, and as the complexity of .NET's COM interop layer demonstrates, it was also much better for Swift to adopt the same approach than to build a complex interop layer along the lines of CCW/RCW.

Now everyone who wasn't around for this kind of believes and resells the whole "ARC because performance!" story.




Do you happen to have any source/book on why you can't use anything but a conservative GC with C-like languages? I would really like to know why that's the case.


Basically, C semantics are to blame: due to the way C was designed and the liberties it allows its users, it is like programming in Assembly from a tracing GC's point of view.

Meaning that, without any kind of metadata, the GC has to assume that any value on the stack or in the global memory segments is a possible pointer, but it cannot be sure: it might just be a numeric value that happens to look like a valid pointer to GC-allocated data.

So any algorithm that needs to be certain about the exact data types, lest it move the wrong data, is already off the table as far as C is concerned.
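A minimal C sketch of that ambiguity (my own illustration, not taken from the linked page):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void) {
        int *p = malloc(sizeof *p);     /* a genuine pointer to heap data */
        uintptr_t n = (uintptr_t)p;     /* an integer with the same bit pattern */

        /* At runtime the two stack slots are indistinguishable. A conservative
           collector scanning the stack must treat both as potential references
           and keep the object alive (and unmoved), while a precise, moving
           collector would need per-slot type metadata -- which plain C does not
           provide -- before it could relocate the object and rewrite only the
           real pointer. */
        printf("%p vs 0x%" PRIxPTR "\n", (void *)p, n);

        free(p);
        return 0;
    }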

See https://hboehm.info/gc/ for more info, including the references.


Thank you!




