I was involved with OpenBSD (and FreeBSD) security in the '90s and I mostly agree with the conclusion of this article while not really admiring the logic it uses to get there.
In 2010, it seems to me, there are basically three schools of OS security, each of which hates the other two:
* There's the OpenBSD model, detailed in this article, which suggests that the way to secure an operating system is to audit, simplify, and harden the code, and that new security features are mostly unnecessary.
* There's the SELinux-style MAC model, which suggests that if every component of the operating system can be sandboxed and its interactions carefully prescribed, we can get to a place where individual code bugs won't matter, so long as we've got a tiny, ultra-carefully audited reference monitor we can rely on (a minimal sketch of such a monitor follows this list).
* There's Brad Spengler's GRSecurity model, different from the OpenBSD model in embracing new, user-visible features, and different from SELinux in that it doesn't rely entirely on a MAC-based security kernel.
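To make the "reference monitor" idea concrete, here's a toy sketch in C — not SELinux's actual code or data structures, just an illustration of the shape of the thing: every access a subject attempts on an object is funneled through one small, default-deny decision function that consults a policy table.

```c
/* Toy reference monitor: every (subject, object, permission) request goes
 * through one allow-list lookup. Illustrative only -- real MAC systems like
 * SELinux compile far richer policies into the kernel. */
#include <stdio.h>
#include <string.h>

struct rule {
    const char *subject;    /* e.g. a process's security domain */
    const char *object;     /* e.g. a file's security type */
    const char *permission; /* e.g. "read", "write", "execute" */
};

/* Hypothetical policy: default-deny, only these tuples are allowed. */
static const struct rule policy[] = {
    { "httpd_t",  "httpd_config_t", "read"  },
    { "httpd_t",  "httpd_log_t",    "write" },
    { "passwd_t", "shadow_t",       "write" },
};

/* The "tiny, ultra-carefully audited" core: one function, one loop. */
static int allowed(const char *subj, const char *obj, const char *perm)
{
    for (size_t i = 0; i < sizeof(policy) / sizeof(policy[0]); i++)
        if (!strcmp(policy[i].subject, subj) &&
            !strcmp(policy[i].object, obj) &&
            !strcmp(policy[i].permission, perm))
            return 1;
    return 0; /* default deny */
}

int main(void)
{
    /* Even a fully compromised httpd_t can't get anything the policy
     * never granted it -- that's the promise, anyway. */
    printf("httpd_t write shadow_t: %s\n",
           allowed("httpd_t", "shadow_t", "write") ? "ALLOW" : "DENY");
    printf("httpd_t read httpd_config_t: %s\n",
           allowed("httpd_t", "httpd_config_t", "read") ? "ALLOW" : "DENY");
    return 0;
}
```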
There's something to be said for all three of these approaches, but if you're going to go all-in on one of them, Spengler seems to have landed closest to the mark. Both OpenBSD and GRSecurity are "exploit-aware" security models: they're both built assuming that there's no way to secure an operating system without keeping abreast of what people are actually doing to break systems. But OpenBSD has picked a fight with computer science that it can't win: its model depends on shipping bug-free distributions --- which is why its security claims get more and more specific over the years.
You're comparing two whole-system approaches against an individual tool.
If you look at the largest SELinux user, RHEL, you find that SELinux is just a single line of defense. The philosophy that "code will always have bugs" is in play, and so, in addition to SELinux, RHEL has many other layers of protection.
Mark Cox has an excellent list of technologies covering all the things that come into play even before SELinux has to make a policy decision.
I agree, that is what I am doing. I'm using SELinux as a synecdoche for all the OS security strategies that rely on access control policies, and pointing out that they are being made fun of by exploits that simply turn them off.
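For a concrete sense of what "turn them off" means: on Linux, SELinux enforcement is ultimately a flag exposed through selinuxfs, and once an attacker is already running with root-level privilege — exactly where a successful kernel exploit lands them — flipping it is a one-line write. A hedged sketch, assuming selinuxfs is mounted at the usual /sys/fs/selinux path (older systems used /selinux); this is essentially what `setenforce 0` does, and in-kernel exploits can flip the equivalent kernel variable directly:

```c
/* Sketch: disabling SELinux enforcement from a sufficiently privileged
 * process. Requires the policy to permit setenforce; a kernel-level
 * exploit doesn't even need that. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/sys/fs/selinux/enforce", "w");
    if (!f) {
        perror("fopen /sys/fs/selinux/enforce");
        return 1;
    }
    /* "0" = permissive: violations are logged but no longer blocked. */
    fputs("0", f);
    fclose(f);
    return 0;
}
```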
There clearly is a philosophical difference between Theo de Raadt (new security features are window dressing, what matters is code quality), SELinux (the missing MAC feature is the problem with operating system security), and Brad Spengler (the problem is that the system has to be built to anticipate the specific things attackers will throw at it).
This isn't some deep insight on my part; the parties involved are pretty vocal about it.
Agreed. Most sysadmins I've encountered do not even realise that binaries like `ping` are actually setuid root. I wouldn't trust them (or myself!) to develop security policies.
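It's easy to check: the setuid bit is just a mode flag on the binary. A quick sketch in C — the /bin/ping path is an assumption (it may live in /usr/bin, and some modern distros replace setuid with file capabilities):

```c
/* Check whether a binary carries the setuid bit and which uid it runs as. */
#include <stdio.h>
#include <sys/stat.h>

int main(void)
{
    const char *path = "/bin/ping";  /* assumed path; adjust for your distro */
    struct stat st;

    if (stat(path, &st) != 0) {
        perror(path);
        return 1;
    }
    if (st.st_mode & S_ISUID)
        printf("%s is setuid: it runs as uid %u no matter who invokes it\n",
               path, (unsigned)st.st_uid);
    else
        printf("%s is not setuid\n", path);
    return 0;
}
```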