Business as usual with macOS. The other day I was browsing the ocspd source code. Turns out it calls openssl using system(). So openssl is officially deprecated on macOS and yet they're using it internally to handle certificates?! And there's an enlightening comment:

    /* Given a path to a DER-encoded CRL file and a path to a PEM-encoded
     * CA issuers file, use OpenSSL to validate the CRL. This is a hack,
     * necessitated by performance issues with inserting extremely large
     * numbers of CRL entries into a CSSM DB (see <rdar://8934440>).
http://opensource.apple.com/source/security_ocspd/security_o...

ocspd was introduced with 10.4. A decade ago. And that's really the problem with macOS: there's no refactoring of old hacks, just the bolting on of ever more new stuff.




Linking against openssl was deprecated (because of the lack of binary stability), not the command-line tools.


I don't see a major problem with using the openssl command for this, but using system() to do it is completely insane.
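
Not Apple's code, and the exact openssl arguments below are a guess, but this is roughly the difference in shape: system() hands one string to /bin/sh, so anything odd in the paths becomes shell syntax, while posix_spawn() with an explicit argv never touches a shell:

    /* Illustrative sketch only -- not the ocspd source, and the openssl
     * invocation is assumed, not copied from Apple. */
    #include <spawn.h>
    #include <sys/wait.h>

    extern char **environ;

    static int verify_crl(const char *crl_path, const char *ca_path)
    {
        /* The argument vector goes to openssl as-is; spaces or
         * metacharacters in the paths stay literal. */
        char *const argv[] = {
            "openssl", "crl", "-inform", "DER",
            "-in", (char *)crl_path,
            "-CAfile", (char *)ca_path,
            "-noout", NULL
        };
        pid_t pid;
        int status;

        if (posix_spawn(&pid, "/usr/bin/openssl", NULL, NULL,
                        argv, environ) != 0)
            return -1;
        if (waitpid(pid, &status, 0) < 0)
            return -1;
        return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
    }

Whether shelling out to openssl at all is a good idea is a separate question, but at least this way a weird file path can't turn into arbitrary shell commands.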


Apple needs to take a bit of those tens of billions of dollars they have sitting around and spend it on starting from scratch with something that's not horrifically crufty. The quality of their software is lagging so far behind the quality of their hardware right now. Realistically, I think we may just be at the point where operating systems and all the stuff the companies put on top of them are too complicated to keep developing in the traditional way with traditional tools. Formal verification might be the cheapest way forward at this point.


So far as the current state of the art in computer engineering goes, we don't know how to completely rewrite a system as complicated as XNU without creating fresh batches of implementation errors. So this is a little like suggesting Apple use its hundreds of billions of dollars to build an iPhone battery that only needs to be recharged once a month.

We may someday get an XNU rewrite, but probably not until software engineering produces a new approach to building complex systems reliably that works at the scale (here: number of developers and shipping schedule) Apple needs.


This is so, so true that I wish there were enough beer in this world to gift you with. There's a lot of cruft in XNU, and there's even more of it in the rest of the system, but all this heap of hacks isn't just useless cruft that we'd be better off without. That heap of code also contains almost twenty years' worth of bugfixes and optimizations from more smart engineers than Apple can hope to hire and get to work together in a productive and meaningful manner. All this unpleasant cruft is what keeps the system alive and well and the users happy enough to continue using it.

More often than not, systems that get boldly rewritten from scratch end up playing catch-up for years. Frankly, I can't remember a single case when a full rewrite with an ambitious timetable wasn't a full-scale disaster. The few success stories, like (what eventually became) Firefox, took a vastly different approach and a lot longer than users would have wanted.

A lot of idealistic (I was about to write naive) engineers think it's all a matter of throwing everything away. That's the easy part. Coming up with something better is the really hard part, and it's not achieved by just throwing the cruft away. If you innocent souls don't believe me, come on over to the Linux side, we have Gnome 3 cookies. You'll swear you're never going to touch anything that isn't xterm or macOS again.


A lot of macOS/iOS was written from scratch, though: Core Graphics (vs. Cairo and FreeType), Core Animation, Core Text (vs. pango), WindowServer (vs. X11), UIKit (vs. Cocoa), IOKit (vs. the native BSD driver framework), Cocoa Finder (vs. Carbon Finder), LLVM/clang/Swift (if you count Chris Lattner's work on it at UIUC)...

Of those, the last one is very impressive: it's a decade-long from-scratch project that has succeeded in competing with a very entrenched project (GCC) in a mature market.

Regarding GNOME 3, the delta between GNOME 2 and GNOME 3 is far less than the delta between NeXTSTEP+FreeBSD and the first version of Mac OS X.


> This is so, so true that I wish there were enough beer in this world to gift you with. There's a lot of cruft in XNU, and there's even more of it in the rest of the system, but all this heap of hacks isn't just useless cruft that we'd be better off without. That heap of code also contains almost twenty years' worth of bugfixes and optimizations from more smart engineers than Apple can hope to hire and get to work together in a productive and meaningful manner. All this unpleasant cruft is what keeps the system alive and well and the users happy enough to continue using it.

This whole premise is a false dichotomy. Apple does not have to throw away Mac OS X, and it does not have to keep piling crap on without fixing things. If you stop the excuses and rationalizations and commit to code quality, you can ship an operating system with quality code and minimal bugs. The OpenBSD project has been doing this for two decades with minimal resources. There is no valid excuse other than "we are too lazy and incompetent."


Bingo! That's a beer shot :)

Oh, "too much code", "bad code", "we inherited it", "throwing it away won't work", etc. are baloney excuses without meat. All it takes is the will to hire and commit the right resources with an objective of increasing code quality. I mean, take this bug itself: Apple did fix it, but only after GPZ was on their arse. No reason they couldn't have reviewed it themselves and fixed it.


Hasn't the vulnerable code been here for over a decade? Why do people think this was an easy bug to spot? There are dozens of extremely qualified people looking for these things. I think there's a reason there isn't a Nemo Phrack article about this bug: it was hard to spot, and required a flash of insight about the competing lifecycles of objects in two different domains (POSIX and Mach).


I was (obviously...) responding to this:

> Apple needs to take a bit of those tens of billions of dollars they have sitting around and spend it on starting from scratch with something that's not horrifically crufty.

They certainly don't have to throw everything away. Not having thrown everything away is one of the reasons why OpenBSD is a good example here. Remember all that quality code that was in place before Cranor's UVM? (Edit: actually, the fact that UVM is an improvement over it should say something, too...)

And, at the risk of sounding bitter, in my experience, very few companies have the capability to "commit to code quality", and I don't think Apple is one of them.

Edit: BTW, I really like your blog. You should write more often :-).


> Remember all that quality code that was in place before Cranor's UVM?

So much before my time I was not even aware of it. For the uninitiated: https://www.usenix.org/legacy/events/usenix99/full_papers/cr...

> Edit: BTW, I really like your blog. You should write more often :-).

Thank you. :) Just this week I started thinking of getting back into it.


Hold on, because I'm pretty familiar with pre- and post-UVM OpenBSD: Arbor Networks shipped on OpenBSD (against medical advice) and ran into a number of really bad VM bugs that Theo couldn't fix because of the UVM rewrite!


But I am on the Linux side & wouldn't want to touch anything that is xterm or macOS again (suckless's st ftw)

Also currently running on a nice pure Wayland system; no need for that X11 cruft.


> We may someday get an XNU rewrite, but probably not until software engineering produces a new approach to building complex systems reliably that works at the scale (here: number of developers and shipping schedule) Apple needs.

It's conceivable to perform a gradual transition away, though. They could demote Mach to a fast IPC system that just augments BSD, similar to the way the kdbus/bus1 proposals would augment Linux. That would be difficult and a long-term project, but it would fix the underlying issue in a way that mostly retains userspace compatibility. Driver compatibility would be more difficult, of course…
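
For anyone who hasn't touched Mach, the primitive I'm talking about looks roughly like this from userland. A self-contained toy, not anything from Apple's tree: allocate a receive right, queue a message on it, dequeue it again.

    /* Toy sketch of Mach IPC: one port, one message, sent to ourselves. */
    #include <mach/mach.h>
    #include <stdio.h>

    typedef struct {
        mach_msg_header_t header;
        int               payload;
    } simple_msg_t;

    int main(void)
    {
        mach_port_t port = MACH_PORT_NULL;

        /* A port is the unit of Mach IPC: a kernel-managed message queue. */
        if (mach_port_allocate(mach_task_self(), MACH_PORT_RIGHT_RECEIVE,
                               &port) != KERN_SUCCESS)
            return 1;

        simple_msg_t msg = { .payload = 42 };
        msg.header.msgh_bits        = MACH_MSGH_BITS(MACH_MSG_TYPE_MAKE_SEND, 0);
        msg.header.msgh_size        = sizeof(msg);
        msg.header.msgh_remote_port = port;      /* destination */
        msg.header.msgh_id          = 1234;

        if (mach_msg(&msg.header, MACH_SEND_MSG, sizeof(msg), 0,
                     MACH_PORT_NULL, MACH_MSG_TIMEOUT_NONE,
                     MACH_PORT_NULL) != KERN_SUCCESS)
            return 1;

        /* The receive buffer needs room for the kernel-appended trailer. */
        struct {
            simple_msg_t       body;
            mach_msg_trailer_t trailer;
        } rcv;

        if (mach_msg(&rcv.body.header, MACH_RCV_MSG, 0, sizeof(rcv),
                     port, MACH_MSG_TIMEOUT_NONE,
                     MACH_PORT_NULL) != KERN_SUCCESS)
            return 1;

        printf("got id %d, payload %d\n",
               rcv.body.header.msgh_id, rcv.body.payload);
        return 0;
    }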


That's true, but if you undertake a difficult and long-term project, you want the outcome to be decisive. Mach is ugly and a nest of bugs, but kernels implemented in C/C++ are bug magnets with several orders of magnitude more force.

My prediction is that we don't ever see an XNU refactor/ redesign/ rewrite so long as C/C++ is the kernel implementation language.
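
To make the "bug magnet" claim concrete, here's the classic shape of the thing, with all names made up for illustration; nothing in the language stops the stale pointer from being touched after the object is gone:

    #include <stdlib.h>
    #include <string.h>

    struct session {
        char name[32];
        int  refcount;
    };

    void session_release(struct session *s)
    {
        if (--s->refcount == 0)
            free(s);
    }

    void handle_request(struct session *s)
    {
        session_release(s);        /* may drop the last reference...   */
        strcpy(s->name, "stale");  /* ...then use-after-free: undefined
                                      behavior, and in a kernel a very
                                      attacker-friendly primitive      */
    }

A memory-safe implementation language rejects the second call outright; C compiles it without a peep.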


No argument there. :)


seL4 is an indicator of things to come. We can build complicated OSes with extreme reliability; the up-front cost is just higher than most companies are willing to spend right now, because customers don't yet realize that it's technically possible to avoid the huge costs associated with software failure in exchange for slightly higher amortized software costs.


Until we have a way to extend seL4 or something like it to full multiprocessor operation (without the limitation of running multiple kernels with separate resources, which is currently the only way to use multiple processors with seL4), I'd disagree that we can build general-purpose OSes with verification. Our techniques for verifying concurrent programs are still very primitive and cumbersome, and I don't think many would take seriously an OS where processes can't use multiple hardware threads.

Also, seL4 (being a microkernel) leaves out a huge swath of kernel-level facilities that need to be implemented with the same standard of verification (resource management, network stack, drivers, etc.). Running on a verified microkernel provides a great foundation, but these still add a ton of code that needs to be verified. Plus the concurrency problem will strike again at this level.
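
To make the minimality concrete: from a task's point of view, the seL4 kernel interface is essentially capability-invoked IPC like the sketch below, and every one of those left-out facilities lives behind an endpoint. (service_ep is a hypothetical capability handed to the task at startup, and the one-word request/reply protocol is made up for illustration.)

    #include <sel4/sel4.h>

    /* Sketch: ask some userland server for a value over a seL4 endpoint. */
    seL4_Word query_service(seL4_CPtr service_ep, seL4_Word request)
    {
        /* One message register, no capabilities transferred. */
        seL4_MessageInfo_t info = seL4_MessageInfo_new(0, 0, 0, 1);
        seL4_SetMR(0, request);

        /* Send the request and block until the server replies. */
        seL4_Call(service_ep, info);

        /* The reply payload comes back in the message registers. */
        return seL4_GetMR(0);
    }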


L4 is incredibly simple. It is essentially (a word I chose carefully) the opposite of a complicated OS. It also doesn't really do anything.

If you have just a few extremely simple applications you'd like to run in an enclave, L4 is a good way to minimize the surface area between the applications themselves and the hardware.

If you'd like to host a complicated operating system on the simplest possible hosting layer: again, L4 is your huckleberry.

Otherwise: not so useful.

Note that if you just host XNU on top of L4, you might rule out a very small class of bugs, but the overwhelming majority of XNU bugs are contained entirely in the XNU layer itself; having XNU running on an adaptor layer doesn't do much to secure it.


I don't think I've ever seen a complicated OS based on seL4, and seL4 itself is the opposite of complicated.

I don't think seL4 means much for macOS/iOS.


Hundreds of billions?



o.O



