Hacker News
The origins of Objective-C at PPI/Stepstone and its evolution at NeXT (acm.org)
124 points by fanf2 on June 14, 2020 | 79 comments


Good article, so glad to find it here. I wish, though, that the authors had talked a lot more about memory management in Objective-C, because this had a major impact, not only on the development of Objective-C, but also on Swift. As far as I can tell, the idea of (explicit) reference counting came from NeXT, when Blaine Garst and others were working on remote objects. When I joined Apple (and therefore had to learn Objective-C), the most important task was to understand reference counting and memory management. Yes, there was a lot more, like interfaces, properties and the many libraries, but understanding alloc/new/release/autorelease was crucial for shippable software. GC was introduced (and deprecated) during my time, but it was the realization (presumably by Lattner and his team) that the LLVM analyzer could do a better job than humans at managing reference counting that was huge. No more hours and hours hunting down memory leaks! And, of course, only one concept required in Swift, namely weak vs. strong references.

In summary, a terrific paper, but it is too bad that this crucial part of the story is missing.


Indeed; reference counting was one of the most important aspects of Objective-C. Probably the most difficult aspect of it was that while most of the time method names have hints as to what increased a reference count, it wasn’t always obvious. And then, of course, you had Core Foundation functions which could be tricky to grok in terms of memory.


> As far as I can tell, the idea of (explicit) reference counting came from NeXT,

Maybe in objc, but in the general case I kinda doubt this. It's a very common pattern in kernel development as an example, haven't done any archeology here but I have seen it in codebases rooted in the 80s at latest, probably earlier. (Did original Unix reference count file objects in the FD table the same way a modern kernel would? Would not be surprised if yes.)

Autorelease is pretty unique and clever, though.


I believe this is the first paper to describe reference counting:

George E. Collins. A method for overlapping and erasure of lists. Commun. ACM 3, 12 (Dec. 1960), 655–657. https://doi.org/10.1145/367487.367501


Good point. I really meant reference counting as implemented in the Objective-C runtime architecture.


This is a wonderful bit of history, really grateful to have it. I remember reading an interview with Steve Jobs in Wired magazine in the 90s where he talked about the future being all about Objects – "The Next Insanely Great Thing" [1]. I built my career on top of Objective-C. These days I really enjoy Swift, but I still have a space in my heart for the dynamic beauty of Objective-C.

1: https://www.wired.com/1996/02/jobs-2/


Swift is great but is starting to suffer from feature bloat. I really loved the simplicity of Objective-C. If you can get past the syntax, you find a very thin wrapper on top of C that lets you do really fun things. I don’t think I’ve ever seen another language where people followed the same patterns so consistently either.


> where people followed the same patterns so consistently

so true. This is one ecosystem where you could actually just include another person’s code as if it were yours, and get a great understanding of how to use it from a header.


All modern languages tend to suffer from feature bloat - there's this obsession of covering every single edge use case, every possible paradigm. C#, Swift, Rust, Python, even C++, all are in this competition dynamic where they want to natively support everything the others do. It's very tiring to keep up, and it's a quick way to end up with a buggy unmaintainable mess of a language.


Ruby is probably one of the current languages most like ObjC in its bones, at least in certain dimensions, due to them both being so influenced by Smalltalk. But Ruby isn't necessarily doing great as far as market share either. :(


It's been a while since I've used it, but before I made native apps I had a little web development business going, and used Ruby on Rails extensively. I couldn't articulate the similarities at the time with ObjC, but looking back I'm sure it had a lot to do with my comfort working in Ruby. And, I credit that web work I did in Ruby with my becoming a better programmer.

Being able to drop in to low-level C for real-time applications (audio, in my case) was a great aspect of ObjC. Though there were plenty of times I wished I could program in ObjC in the real-time threads. There were many debates on the Core Audio forums about whether ObjC was safe in real-time threads. Most of Core Audio was developed by C++ programmers, and their default was to avoid ObjC out of a fear that a surprise lock or memory allocation would glitch out the audio. There was, however, at least one NeXT veteran on the forums who felt this was nonsense, and recalled writing device drivers in ObjC.


> one NeXT veteran on the forums who felt this was nonsense, and recalled writing device drivers in ObjC

So did I.

https://www.nextop.de/NeXTstep_3.3_Developer_Documentation/O...

It was quite lovely, and I later heard from Apple kernel devs that moving to C++ from Objective-C was one of their biggest regrets.


Apparently they did not regret it enough, because DriverKit was the opportunity to reboot that decision, and it was decided to remain C++ only.


It isn't a "fear" - it's avoidance of demonstrated behavior.

You cannot use any GC-based language if you want to guarantee an upper bound on latency. You cannot use any language that uses global or even large-scope locks if you want to guarantee an upper bound on latency.

It has nothing to do with "writing device drivers" - there are plenty of device drivers for which latency is not anything like the issue that it is for audio.


>You cannot use any GC-based language if you want to guarantee an upper bound on latency

ObjC wasn't then and isn't now a GC-based language.


Someone else has made this point below, but I'm not sure where this misunderstanding comes from. Reference counting is a form of garbage collection.

My copy of Jones and Lins's "Garbage Collection" textbook includes reference counting as the first item in its "classical algorithms" chapter.

It's usually non-tracing, but variants that can cycle collect will even do some limited tracing to resolve cycles.

And the original comment about latency still stands, even for RC. A single de-reference can lead to a whole series of collections, taking an unknown amount of time, and thrashing cache in the process. A deep tree of objects deleted at the top creates a cascade of de-references. And modern high level languages tend to encourage deep object trees.

When doing realtime programming, I avoid anything with GC. Out of experience. Yes it is possible to tune GC for realtime performance -- but it's fiddly, and never perfect. There are systems tuned for incremental realtime collection -- but there is a cost to this.

As software engineers, our job is to pick the right tool for the job. I spent many years using almost exclusively garbage-collected languages (including Objective-C) before ending up where I am now (I work mainly in C++, lower down the stack).

Manual memory management still exists for a reason.


I'm sure I sound like a broken record but Rust has solved the problem of memory safety lower in the stack pretty well with the borrow checker. There is almost no reason to use unsafe code aside from interop.


Sure, I'd be into it, if someone would pay me to work in Rust. That hasn't happened for me yet.

But there is a cognitive overhead cost to working with Rust. Or even just following consistent RAII patterns in C++. It doesn't come for free.

Trade-offs.


Yep. ‘Tis true. I keep telling Rust people that I could do with less syntactic overhead and that sometimes I just want GC, but I don’t think it’s a priority for the community and they already have a lot on their plate and it does basically everything else right, imo.


Ignoring the problem with calling something a “GC-based” language, Objective-C is most typically used with ARC these days, which is definitely GC.


Reference counting is still not garbage collection. ARC causes the compiler to implicitly insert calls to retain and release, that’s it. Everything is still 100% deterministic at runtime.


Reference counting is a form of garbage collection [1]. Any automatic memory management is garbage collection. There is in fact a continuum of memory management strategies between full reference counting on one end and full stop-the-world mark/sweep on the other.

Malloc heuristics also mean that it is not true that everything is 100% deterministic at runtime. It's easy to forget that malloc is simply an abstraction like any other. When do free memory blocks get reshuffled between threads in a thread-caching malloc? When does unused memory get returned to the OS? These are all things that cannot be predicted. Even retain and release are not 100% deterministic, as they can and will be optimized away by the compiler.

[1]: https://www.memorymanagement.org/glossary/g.html#term-garbag...


reference counting is reference counting.

a reference counting gc is a form of garbage collection.

not sure what applies here, as am not versed in objc, but, the terms are distinct - in theory one can have an object system that refcounts objects so that they are not mis-deleted if some reference still exists and the object is accidentally deleted by some code, but still allow for mostly-manual memory management.


We're discussing ARC here, i.e. Automatic Reference Counting, which is a garbage collection algorithm. Retain/release would be manual memory management by reference counting, which is not being discussed.


Even without ARC it's still garbage collection. The reference counts are kept for the purpose of managing garbage collection. That the programmer has to do some manual retain/release steps is beside the point, even if it does make the experience somewhat painful.


It's not deterministic because you don't know the timing relationship between threads which may drop references. The only sense in which it is deterministic is that "when the reference count reaches zero, the object will be destroyed". That's a long way from the determinism of manual/explicit memory management. It gets even worse if there are per-thread pools in use, though to be fair, this is mostly an application design issue rather than a language one.


It’s not tracing GC, but it’s well-known as a garbage collection algorithm: https://en.wikipedia.org/wiki/Reference_counting#Garbage_col...


Well, Objective-C wasn't doing well before the iPhone. Then the smartphone revolution started and you were all forced to use it.

Ruby also never had much, if any, corporate backing. MRI/CRuby is still by and large improved by people in their free time. It wasn't until the past two or maybe three years that GitHub and Shopify took an interest in the Ruby runtime.


In the US, because plenty of us were already using J2ME, Brew and Symbian C++ before the iPhone happened.


In 1994 I was on a team that had one of the few uses of Objective-C that had nothing to do with NeXTStep: the Swarm Simulation System [1], an agent-based simulation toolkit. We used GNU's Objective-C implementation and Tcl/Tk (via tclobjc) for the UI.

We picked ObjC because we knew we wanted something object oriented but C++ seemed too complicated and too static. ObjC was simple and appealingly dynamic and flexible in its type system. It worked pretty well for us other than the costs of being an oddball language no one knew. But it was simple enough to learn that a fair number of our target audience did.

In retrospect I wish we had the courage to use Smalltalk or a Lisp system but at the time that felt too risky. Java was also just beginning to be an option then, but it was still being shown as a toy for making applets and not a real programming language. Also very slow before JITs.

I really appreciated that the ObjC runtime was open source and very small. It was quite easy to get in deep and understand what was going on.

[1] http://www.swarm.org


I have fond memories of using Swarm in an undergrad course on agent based simulation ~20 years ago. Just tweaking the heatbugs example in various ways was already enlightening. Thanks for your work!


And in 1999 I ported a particle simulation engine from Objective-C/NeXTSTEP into Windows/C++, because the university departments were getting rid of their Cubes and my supervisor wanted to rescue some of the department work.

The turns that the world does.


Back in 1996 I ported a fairly large (exactly how big I forget) workflow framework that had been developed in Objective-C on NEXTSTEP to Windows. It didn’t use Foundation, so we didn’t need much beyond gcc and some general portability elbow grease.


Wow. “Software-ICs.” I remember that, and I thought it was a great concept. At the time, it was revolutionary.

Nowadays, it’s how we do everything, but the term “Software-ICs” never climbed out of the bassinet.


No, Brad Cox thought that the only way reuse could happen at scale would be via micropayments, which at the time he considered to be integral to the software IC idea: https://deprogrammaticaipsum.com/brad-cox/

It turns out, of course, that the assumption was faulty, and trying to track every dynamic use of software to pay its makers was completely impractical.

What actually happened is similar to what happened with TCP/IP. The phone companies couldn't see how it would be useful, because its lack of per-packet accounting meant they couldn't charge for every packet. Instead, by throwing away all the complex accounting systems, TCP/IP could grow far beyond what the phone system could do.


My impression is that micropayments were a later evolution of his thinking. When I met Cox in the mid-90s, his talk focused on micropayments as a newer concept.

Stepstone started out by selling bundles of "Software ICs", after all, for a price that was rather "macropayments".


In some sense the idea was just ahead of its time. After all, micropayments are a reality today with the hosted APIs and high-level cloud services that seem to be all the rage these days – services where the primary value-add is in the software that someone’s running for you. It’s not exactly the same concept, but pretty close. Not that I’m a fan of the trend.


The problem wasn’t nomenclature or technology, it was economics: Part of the premise of “Software-ICs” was that, like physical ICs, a market would develop in software components that could rival the existing “shrink-wrapped” software industry in size. That never came close to happening, for reasons that it would be fun to speculate about.

Perhaps one could make the case that it’s finally happening except as “cloud services ICs”.


I wonder what would have happened to the Linux desktop ecosystem had it embraced something like Software-ICs, modeled similarly to Apple's OpenDoc project, rather than implementing monolithic applications. One of my favorite Hacker News comments describing what could have been is this (https://news.ycombinator.com/item?id=13573373). My views are in line with this comment; I believe that component-based software is a more efficient development model, and I also believe that this would provide necessary differentiation of the Linux desktop ecosystem compared to commercial desktops like Windows and macOS.

I'm actually planning a side project where I want to implement a component-based application framework on top of Linux and the BSDs, but using the Common Lisp Object System as its base, as part of a long-term vision to build a modern-day Lisp operating system (I want to focus on the application environment first, leveraging the wide hardware support that Linux and the BSDs have compared to lesser-known operating systems like Mezzano). I hadn't heard the term Software IC until now, but I'm familiar with the history of the OpenDoc project as well as other component-based software technologies such as SOM, COM, CORBA, and XPCOM.


The fundamental problem is that no matter how much more efficient a component-driven model is for developers, users don’t want it. To a first approximation, nobody wants to assemble their own software, or even choose amongst assemblies made by a zoo of “integrators” who turn components into applications.

Developers want this, and indeed they benefit from a developer-focused OS like Unix that builds on the component model. But that’s the outer limit of the market for components.


Well, the old VBX market was pretty robust. I remember receiving catalogs in the mail advertising components to add to a Visual Basic project.


Same for Delphi, MFC and, to a lesser extent, WinForms. I think the rise of open source libraries simply killed that market, and with the web being steeped in the open-source mindset from the get-go, it never took off there.


The market is still there; the customers I work for keep buying such components, even in 2020.


I'm not sure it was the mindset, or just the lack of a real mechanism for making isolated components.


There isn't much of a mechanism in any of those; the components were essentially libraries. Sure, in Delphi and VB you get a tiny icon to represent the component and a visual property editor, but that doesn't really translate to web stuff that is 100% text-based. AFAIK MFC has nothing visual.


Windows had a whole mechanism with VBX and OCX for building the components and keeping their innards isolated. Javascript had no equivalent.


Yes, but this was only for Visual Basic. Delphi, MFC and WinForms each had their own, and each was just a library. JavaScript also had libraries since, if nothing else, you can always load multiple scripts per HTML page.

Also, FWIW, I wasn't only referring to JavaScript but also to server-side code (e.g. PHP or Python or whatever).


I think that happened for a while in the early 2000s. I know we purchased a lot of UI component libraries back in the day.

But I think we all had enough of those component development companies going out of business or deprecating the version we were using that we are gun-shy about doing it again.

I know I stick with open source only at this point for that very reason. At least if they decide to stop supporting me, I can support myself.


For more of Brad Cox's thinking see his book "Superdistribution: Objects as Property on the Electronic Frontier".


Yes, and I still think it is a great concept. :-)

I also don't think it's really how we do everything; the original concept caught on only very partially, and appears to be more and more forgotten.

See Software-ICs, Binary Compatibility, and Objective-Swift

https://blog.metaobject.com/2019/03/software-ics-binary-comp...


IMO, the trouble with COM and its imitators is that they're prone to gross over-use. The best example I know is Gecko, which over-used XPCOM and then had to go through what Mozilla folks called deCOMtamination. [1] I think IE might have over-used COM to some extent as well, but that's only speculation based on what I saw on the outside. (Disclosure: I work at Microsoft, but I joined well after IE became Edge, and I was never on that team.)

Then Chrome landed like a piece of alien technology, and if we took a look inside, we found that it was one giant binary module (DLL, .framework, or executable, depending on the platform) that internally didn't use anything like COM, at all. It was also fairly well-known for its use of link-time optimization. I wonder how much these things contributed to Chrome's famous speed.

Of course, Chrome was only able to pull this off because the team had great engineering discipline, and later, a great build system (first GYP, then GN). I remember when I built Chromium for the first time and was awed at how it was made up of hundreds of modules, but they were all built as static libraries and then linked together into one monster binary module at the end. These days, newer statically compiled languages like Go, Rust, and others are bringing large-scale static linking of arbitrary modules within reach for the rest of us.

If I may stretch the IC metaphor, I'm guessing something similar happened with actual ICs; better EDA tools made it more feasible to combine more and more IP blocks onto a single chip, giving rise to the modern SoC.

[1]: The best post I can find that talks about deCOMtamination, and then goes on to describe how XPCOM continued to be over-used, is this: https://brendaneich.com/2006/02/fresh-xpcom-thinking/ Does anyone know of a definitive written history of this process?


> I remember when I built Chromium for the first time and was awed at how it was made up of hundreds of modules, but they were all built as static libraries and then linked together into one monster binary module at the end.

This is how Gecko is built too—essentially everything ends up in libxul unless it has to be split out so that link.exe doesn't OOM on 32-bit (sadness). I believe it had been this way when the first version of Chrome was released, so this wasn't something Chrome introduced.

(Also, based on my experiences with Node and other Google projects like Skia, I wouldn't consider gyp a great build tool—it's always been a nightmare for me. gn is better, but Google projects still have a tendency to be difficult to build for those who aren't Google employees.)


> This is how Gecko is built too—essentially everything ends up in libxul unless it has to be split out so that link.exe doesn't OOM on 32-bit (sadness). I believe it had been this way when the first version of Chrome was released, so this wasn't something Chrome introduced.

Touché. And I do remember seeing this in Firefox, or maybe even the old Mozilla suite, long before Chrome came out.

Still, I think static linking is a much more effective optimization in Chrome, because Chrome doesn't make heavy use of internal ABI boundaries (e.g. COM or XPCOM), so link-time optimization can be more aggressive. IIUC, Firefox and Thunderbird still use a fair amount of XPCOM internally, because they have lots of modules written in JavaScript, including all of the code behind the XUL-based UI. Chrome, on the other hand, uses a lot more C++.


Chrome has a lot of the frontend written in JS nowadays too. At this point XPCOM is mostly just a bindings layer between JS and C++ (that's what COM was supposed to be to begin with—a glue layer between languages). V8 has something similar.

I don't think you can really say static linking is more effective in Chrome or Firefox. Both browser architectures are broadly similar these days.


Isn't UWP exactly the over-use of COM across the whole Windows ABI surface area?

It just didn't work out as Sinofsky originally planned it.


> I also don't think it's really how we do everything

Good point, and I agree. It was probably a rather "generalized" statement, on my part. I really meant "clumped-together opaque modules."

Yeah, COM worked (sort of), but CORBA has always struggled, and that was probably the real methodology that expressed the concept.

What I was thinking of was the "dependency-sphere" that pretty much describes software development these days, often linked through communications networks (as opposed to binary APIs). A bit like the distributed nature that was a big part of CORBA, but without the common data link layer.

There's an entire generation of engineers that can create marvelous applications, but barely understand what's going on in the components they use (which is not necessarily a bad thing. I don't know what's going on inside my calculator).

BTW: Thanks for the excellent article.

For what it's worth, I use frameworks a lot. I have always believed in modular development, with modules being atomic, standalone entities with independent lifecycles.

Not all of these are "frameworks," per se. For example, my RVS_Spinner project, which implements a powerful "prize wheel spinner" in UIKit, is actually just a single source file, and not really worth importing as an opaque framework: https://github.com/RiftValleySoftware/RVS_Spinner

Same with my persistent prefs project, which is just a single, 300-line file: https://github.com/RiftValleySoftware/RVS_PersistentPrefs

Most of the code in these projects is testing code (I like testing. It's a good thing).

Modern package managers help that along.


> create marvelous applications, but barely understand what's going on in the components they use

Totally agreed that this is the vaunted reuse that we were desperately trying to achieve in the 80s and 90s. Did we ever achieve it! And yes, this is a Good Thing™, and it really irks me when people claim we have the same "software crisis" we had in the 80s or 90s, or even since 1968. I just want to shake them and go "open your eyes, look around".

Yes, we have problems with the state of the art, but these are new problems that are due to our past successes.


Thanks. I know that you are someone that is a bit on the older side (maybe not quite my age, but a bit more experienced than many).

I am glad to see your optimistic outlook. I have a similar one. I have often been accused of being "negative," which makes me laugh.

It's just that I have been delivering software for more than 30 years, and have come to learn that there's a great deal of work necessary, when creating and nurturing a vibrant, growing, and attractive future.

There's a lot of compromise, as well as many layers, built over time, with care and patience. Testing, documentation and support are necessary, as well as a commitment to "seeing the story through to the end."

I have compared creating SDKs and modules to having children. Once we have brought them into the world, we are responsible for maintaining and supporting them for the rest of our lives.

Those are "classic" values that are just as valid today, as they were when Fred Brooks was a kid.

In reality, we are standing on a mountain of work, done by our predecessors.

We don't need to rebuild the mountain; just put on oxygen masks.


Very nice article.

Regarding COM's approach, I just wish that the Windows team would care to take more inspiration from the Delphi, C++ Builder and .NET integration of COM (now UWP) into their infrastructure, instead of coming up with "macho programming" frameworks like ATL/WRL.

C++/CX seemed to actually hit a sweet spot, almost like C++ Builder's approach, but it was overthrown due to the championing of C++/WinRT, not much better than WRL in regards to productivity.

I still hope that if C++/WinRT gets as much pushback as UWP happened to suffer, maybe the Windows dev team will finally accept that providing nice tooling is something that C++ developers on Windows also enjoy having access to.


Why do you say C++/WinRT is not much better than WRL in productivity? I strongly disagree. I work on the Windows accessibility team at Microsoft. We use C++/WinRT in new code that both consumes and implements WinRT components. So far, I'm happy with it and would not want to go back to C++/CX. C++/WinRT is definitely more productive than WRL.


It might be more productive than WRL, but it certainly isn't more productive than C++/CX, and keeping that mentality is what will keep us from actually embracing it, unless we are getting paid to do so.

Here are the bullet points I keep giving back when asked for feedback:

- No Visual Studio support for syntax highlighting or completion of IDL files

- The fact that IDL files have to be manually edited to start with (C++/CX does it in the background).

- The fact that those manually edited files have to be copied back into Visual Studio projects, after cppwinrt generates new files out of them. Again, something that isn't required with C++/CX

- The fact that for data binding some types like bool, we need to go fully in and make use of x:Bind, thus, again manually having a new view model class, instead of directly binding via DataContext. Again, C++/CX has no issues with it.

- XAML designer integration is still lacking versus the C++/CX experience, and manually editing IDL files doesn't make it more enjoyable

- The amount of type juggling between hstrings, std::string, com_ptr, box, winrt::make, agile_ref and all the stuff that isn't required when using C++/CX

- Constantly being told that we must just suck it up, and hopefully ISO C++23 or who knows when, will provide the necessary features regarding reflection and metaclasses, to add back to C++/WinRT the tooling experience that we already have today with C++/CX and is being dropped on the floor with the push of C++/WinRT

- The fact that C++/WinRT keeps being pushed forward, yet like 90% of MSDN still provides examples and documentation using C++/CX, or that some problems migrating from C++/CX into C++/WinRT are only to be found in StackOverflow answers or comments discussing cppwinrt issues.

Going back to the WRL example, it is hardly any better than ATL was. C++/CX was the closest Microsoft ever came to providing a C++ RAD-like experience similar to C++ Builder's.

Apparently that isn't the path that Windows team wants to travel, there is .NET for that, and being better than WRL is already considered mission accomplished.

Even MFC provides a better way of doing GUIs than what WinUI with C++/WinRT is giving us, but yeah, it is better than WRL, I guess.


> Even MFC provides a better way of doing GUIs than what WinUI with C++/WinRT is giving us

In what way? IIRC, MFC doesn't have dynamic layout as all of the XAML-based frameworks do. That means that the typical results with MFC are definitely not better for the user, e.g. they can't take advantage of the dynamic text scaling that we introduced a few releases ago.

I think the main reason for our difference of opinion about C++/WinRT is that I and my team mainly work on OS components, whereas you're doing application development using XAML. And the core OS uses its own build system, not Visual Studio's; not all of us even use Visual Studio (I use Vim and do builds from the command prompt). We definitely have a different perspective than external developers. So thanks for sharing yours.

P.S. Just to be clear, I'm merely an individual developer on a team that merely consumes C++/WinRT. So I'm not speaking for the C++/WinRT team or Microsoft in general here.


In the Visual Studio tooling, and in not having to deal with all the COM low-level details in the name of ISO C++ compliance.

Those of us that eventually migrated to .NET were quite happy with C++/CX for dealing with those APIs that the Windows team keeps resisting exposing as .NET APIs, and now that has been taken away from us.

Who cares about dynamic text scaling support when one needs to climb a mountain to actually make use of it?

By the way, this is a feature that is actually supported as a Win32 C API, most likely because of the hurdles to using it otherwise.

As a note, having to deal with WRL was never a consideration for the projects where my voice counted in the technology decisions; too many bad memories from the ATL/WTL days.

Also, it remains to be seen what C#/WinRT will actually bring to the table.

Really, is it so hard for the Windows team to get some Qt and C++ Builder licenses to understand how to offer productive C++ development tooling for Windows application developers?

Because at the end of the day what happens are adoption failures like how the whole WinRT/UWP story ended up, and if the tooling doesn't get better, it isn't Project Reunion that will save it.


That’s a great post. Building “component-oriented” software platforms is a fascinating mix of CS theory and gritty practical engineering.

I’m disappointed to hear even Swift couldn’t produce a satisfying result under their constraints & trade-offs.


Objective-C was a very interesting language, in some ways much better than C++ while filling the same type of role. It's a shame that so much effort is being thrown away by Apple. I recommend anybody who has an Objective-C program that they don't wish to rewrite in Swift to look into GNUstep. I personally have two Objective-C projects which I will not be porting to Swift, instead I will port them from Cocoa to GNUstep.


I'm curious why you feel the need to port them at all? ObjC is pretty far from dead. Perhaps the ObjC runtime will be abandoned by Apple at some point, but I can't see even deprecation (not removal from their shipping systems) taking less than a decade.

In particular, note that Swift on Darwin platforms is intimately tied to the ObjC runtime. I wouldn't be surprised if there are long-range aspirations to change that, but it's not feasible in any short term.


Where's the pressure to rewrite an existing application in Swift coming from?


If it is non-GUI, I also recommend ObjFW.


Very interesting paper. I was a bit puzzled, though, at the assertion related to the Stepstone graphics libraries that X11 was still "years away". The chronology of the paper is not 100% certain in that area, but my impression was that this assertion was set in something like 1987/88, and X11 was released in late 1987.


I guess it has more to do with the bare bones X capabilities and the alternatives like NeWS.


Eventually the software industry revolution was brought about by the open source movement and the likes of the NPM and PyPI repositories.


CPAN existed several years before.


Software ICs (Integrated Circuits)


Glad it's going away, the syntax is a chore to read and understand. There's no valid reason for the use of brackets.


There’s also no valid reason for the non-use of brackets, is there? They’re just different. It didn’t take long for them to feel natural to me when I started coding in ObjC, and the named method parameters that go with them make ObjC code (in my eyes) delightfully self-documenting.


> different. It didn’t take long for them to feel natural to me when I started coding in ObjC

definitely, and for people like me, who started coding when they were young (on mac/ios) there was no bias for or against such syntax (i also learned c++, and java at the same time and have no special love or hate for those syntaxes either)

i think a lot of these "i hate this syntax" issues are people just not liking what they were not used to...


I hated the brackets at first but after using ObjC for a bit I came to love them.


It’s amazing how many people are triggered by square brackets.


100% compatibility with C isn’t free.



