Hacker News

Yes, and I still think it is a great concept. :-)

I also don't think it's really how we do everything; the original concept caught on only partially, and appears to be more and more forgotten.

See Software-ICs, Binary Compatibility, and Objective-Swift

https://blog.metaobject.com/2019/03/software-ics-binary-comp...



IMO, the trouble with COM and its imitators is that they're prone to gross over-use. The best example I know is Gecko, which over-used XPCOM and then had to go through what Mozilla folks called deCOMtamination. [1] I think IE might have over-used COM to some extent as well, but that's only speculation based on what I saw on the outside. (Disclosure: I work at Microsoft, but I joined well after IE became Edge, and I was never on that team.)

Then Chrome landed like a piece of alien technology, and when we took a look inside, we found that it was one giant binary module (DLL, .framework, or executable, depending on the platform) that internally didn't use anything like COM at all. It was also fairly well-known for its use of link-time optimization. I wonder how much these things contributed to Chrome's famous speed.

Of course, Chrome was only able to pull this off because the team had great engineering discipline, and later, a great build system (first GYP, then GN). I remember when I built Chromium for the first time and was awed at how it was made up of hundreds of modules, but they were all built as static libraries and then linked together into one monster binary module at the end. These days, newer statically compiled languages like Go, Rust, and others are bringing large-scale static linking of arbitrary modules within reach for the rest of us.

If I may stretch the IC metaphor, I'm guessing something similar happened with actual ICs; better EDA tools made it more feasible to combine more and more IP blocks onto a single chip, giving rise to the modern SoC.

[1]: The best post I can find that talks about deCOMtamination, and then goes on to describe how XPCOM continued to be over-used, is this: https://brendaneich.com/2006/02/fresh-xpcom-thinking/ Does anyone know of a definitive written history of this process?


> I remember when I built Chromium for the first time and was awed at how it was made up of hundreds of modules, but they were all built as static libraries and then linked together into one monster binary module at the end.

This is how Gecko is built too—essentially everything ends up in libxul unless it has to be split out so that link.exe doesn't OOM on 32-bit (sadness). I believe it had been this way when the first version of Chrome was released, so this wasn't something Chrome introduced.

(Also, based on my experiences with Node and other Google projects like Skia, I wouldn't consider gyp a great build tool—it's always been a nightmare for me. gn is better, but Google projects still have a tendency to be difficult to build for those who aren't Google employees.)


> This is how Gecko is built too—essentially everything ends up in libxul unless it has to be split out so that link.exe doesn't OOM on 32-bit (sadness). I believe it had been this way when the first version of Chrome was released, so this wasn't something Chrome introduced.

Touché. And I do remember seeing this in Firefox, or maybe even the old Mozilla suite, long before Chrome came out.

Still, I think static linking is a much more effective optimization in Chrome, because Chrome doesn't make heavy use of internal ABI boundaries (e.g. COM or XPCOM), so link-time optimization can be more aggressive. IIUC, Firefox and Thunderbird still use a fair amount of XPCOM internally, because they have lots of modules written in JavaScript, including all of the code behind the XUL-based UI. Chrome, on the other hand, uses a lot more C++.


Chrome has a lot of the frontend written in JS nowadays too. At this point XPCOM is mostly just a bindings layer between JS and C++ (that's what COM was supposed to be to begin with—a glue layer between languages). V8 has something similar.

I don't think you can really say static linking is more effective in Chrome or Firefox. Both browser architectures are broadly similar these days.


Isn't UWP exactly the over-use of COM across the whole Windows ABI surface area?

It just didn't work out as Sinofsky originally planned it.


> I also don't think it's really how we do everything

Good point, and I agree. It was probably a rather "generalized" statement, on my part. I really meant "clumped-together opaque modules."

Yeah, COM worked (sort of), but CORBA has always struggled, and that was probably the real methodology that expressed the concept.

What I was thinking of was the "dependency sphere" that pretty much describes software development these days: components often linked through communications networks (as opposed to binary APIs). A bit like the distributed nature that was a big part of CORBA, but without the common data link layer.

There's an entire generation of engineers that can create marvelous applications, but barely understand what's going on in the components they use (which is not necessarily a bad thing. I don't know what's going on inside my calculator).

BTW: Thanks for the excellent article.

For what it's worth, I use frameworks a lot. I have always believed in modular development, with modules being atomic, standalone entities with independent lifecycles.

Not all of these are "frameworks," per se. For example, my RVS_Spinner project, which implements a powerful "prize wheel spinner" in UIKit, is actually just a single source file, and not really worth importing as an opaque framework: https://github.com/RiftValleySoftware/RVS_Spinner

Same with my persistent prefs project, which is just a single, 300-line file: https://github.com/RiftValleySoftware/RVS_PersistentPrefs

Most of the code in these projects is testing code (I like testing. It's a good thing).

Modern package managers help that along.


> create marvelous applications, but barely understand what's going on in the components they use

Totally agreed that this is the vaunted reuse we were desperately trying to achieve in the 80s and 90s. And did we ever achieve it! Yes, this is a Good Thing™, and it really irks me when people claim we have the same "software crisis" we had in the 80s or 90s, or even since 1968. I just want to shake them and go, "open your eyes, look around."

Yes, we have problems with the state of the art, but these are new problems that are due to our past successes.


Thanks. I know that you are someone who is a bit on the older side (maybe not quite my age, but a bit more experienced than many).

I am glad to see your optimistic outlook. I have a similar one. I have often been accused of being "negative," which makes me laugh.

It's just that I have been delivering software for more than 30 years, and have come to learn that a great deal of work is necessary when creating and nurturing a vibrant, growing, and attractive future.

There's a lot of compromise, as well as many layers, built over time, with care and patience. Testing, documentation and support are necessary, as well as a commitment to "seeing the story through to the end."

I have compared creating SDKs and modules to having children. Once we have brought them into the world, we are responsible for maintaining and supporting them for the rest of our lives.

Those are "classic" values that are just as valid today, as they were when Fred Brooks was a kid.

In reality, we are standing on a mountain of work, done by our predecessors.

We don't need to rebuild the mountain; just put on oxygen masks.


Very nice article.

Regarding COM's approach, I just wish that the Windows team would take more inspiration from the Delphi, C++ Builder, and .NET integration of COM (now UWP) into their infrastructure, instead of coming up with "macho programming" frameworks like ATL/WRL.

C++/CX seemed to actually hit a sweet spot, almost like C++ Builder's approach, but it was overthrown in favor of C++/WinRT, which is not much better than WRL with regard to productivity.

I still hope that if C++/WinRT gets as much pushback as UWP happened to suffer, maybe the Windows dev team will finally accept that providing nice tooling is something that C++ developers on Windows also enjoy having access to.


Why do you say C++/WinRT is not much better than WRL in productivity? I strongly disagree. I work on the Windows accessibility team at Microsoft. We use C++/WinRT in new code that both consumes and implements WinRT components. So far, I'm happy with it and would not want to go back to C++/CX. C++/WinRT is definitely more productive than WRL.


It might be more productive than WRL, but it certainly isn't more productive than C++/CX, and keeping that mentality is what will keep us from actually embracing it, unless we are getting paid to do so.

Here are the bullet points I keep giving back when asked for feedback:

- No Visual Studio support for syntax highlighting or completion of IDL files

- The fact that IDL files have to be manually edited to start with (C++/CX does it in the background).

- The fact that those manually edited files have to be copied back into Visual Studio projects after cppwinrt generates new files out of them. Again, something that isn't required with C++/CX.

- The fact that for data binding of some types like bool, we need to go all in and make use of x:Bind, thus again manually creating a new view model class instead of binding directly via DataContext. Again, C++/CX has no issues with it.

- XAML designer integration is still lacking versus the C++/CX experience, and manually editing IDL files doesn't make it any more enjoyable.

- The amount of type juggling between hstrings, std::string, com_ptr, box, winrt::make, agile_ref and all the stuff that isn't required when using C++/CX

- Constantly being told that we must just suck it up, and that hopefully ISO C++23, or who knows when, will provide the necessary reflection and metaclass features to add back to C++/WinRT the tooling experience that we already have today with C++/CX and that is being dropped on the floor with the push for C++/WinRT.

- The fact that C++/WinRT keeps being pushed forward, yet something like 90% of MSDN still provides examples and documentation using C++/CX, and some problems migrating from C++/CX to C++/WinRT are only to be found in StackOverflow answers or comments on cppwinrt issues.

Going back to the WRL example, it is hardly any better than ATL was. C++/CX was the closest Microsoft ever came to providing a C++ RAD-like experience similar to C++ Builder's.

Apparently that isn't the path the Windows team wants to travel; there is .NET for that, and being better than WRL is already considered mission accomplished.

Even MFC provides a better way of doing GUIs than what WinUI with C++/WinRT is giving us, but yeah, it is better than WRL, I guess.


> Even MFC provides a better way of doing GUIs than what WinUI with C++/WinRT is giving us

In what way? IIRC, MFC doesn't have dynamic layout as all of the XAML-based frameworks do. That means that the typical results with MFC are definitely not better for the user, e.g. they can't take advantage of the dynamic text scaling that we introduced a few releases ago.

I think the main reason for our difference of opinion about C++/WinRT is that I and my team mainly work on OS components, whereas you're doing application development using XAML. And the core OS uses its own build system, not Visual Studio's; not all of us even use Visual Studio (I use Vim and do builds from the command prompt). We definitely have a different perspective than external developers. So thanks for sharing yours.

P.S. Just to be clear, I'm merely an individual developer on a team that merely consumes C++/WinRT. So I'm not speaking for the C++/WinRT team or Microsoft in general here.


In the Visual Studio tooling, and in not having to deal with all the COM low-level details in the name of ISO C++ compliance.

Those of us that eventually migrated to .NET were quite happy with C++/CX for dealing with those APIs that the Windows team keeps resisting exposing as .NET APIs, and now that has been taken away from us.

Who cares about dynamic text scaling support when one needs to climb a mountain to actually make use of it?

By the way, this is a feature that is actually exposed as a Win32 C API, most likely because of the hurdles of using it otherwise.

As a note, having to deal with WRL was never a consideration for the projects where my voice counted in the technology decisions; too many bad memories from the ATL/WTL days.

Also, it remains to be seen what C#/WinRT will actually bring to the table.

Really, is it so hard for the Windows team to get some Qt and C++ Builder licenses to understand how to offer productive C++ development tooling for Windows application developers?

Because at the end of the day, what happens are adoption failures like how the whole WinRT/UWP story ended up, and if the tooling doesn't get better, Project Reunion isn't going to save it.


That’s a great post. Building “component-oriented” software platforms is a fascinating mix of CS theory and gritty practical engineering.

I’m disappointed to hear even Swift couldn’t produce a satisfying result under their constraints & trade-offs.



