That’s not what this is.
This is largely about mitigating the damage that can be done once someone finds a memory safety error.
Things like pointer authentication, memory tagging, CHERI, ASLR, stack cookies, etc. are all mitigations to limit what can be done once someone finds a memory error, and all of these things are relatively expensive costs incurred as a result of the lack of safety of the code being written. I am somewhat curious as to whether anyone has done a like-for-like benchmark of “real” code with and without these mitigations (and I mean the entire system, libraries, and kernel, because otherwise the benchmark picks up those perf hits).
We can then compare that cost to the performance gain from the pointer nonsense, etc. that people do for performance, versus comparable Rust/Go/Swift code that is memory safe.
Obviously the mitigations can’t be removed (yet?), but I would like to be able to point to some comparison when people start talking about how much slower safe languages are.
> Things like pointer authentication, memory tagging, CHERI, ASLR, stack cookies, etc. are all mitigations to limit what can be done once someone finds a memory error, and all of these things are relatively expensive costs incurred as a result of the lack of safety of the code being written
Presumably CHERI/MTE/etc. hardware support would help (at the expense of die area and 128-bit pointers); it would be nice to see something like this in Apple Silicon.
CHERI, MTE, etc. have costs even with hardware support.
MTE consumes huge amounts of memory: if you have an N-bit tag for every M bytes, you burn (total RAM / M * N) bits. Imagine you have, say, a 4-bit tag for every 16 bytes - that works out to about 3% of memory being used for tags. But there are additional costs: to read through a pointer you now have the load cost of the pointer itself, but also of its tag. I assume CPU hardware folk know how this kind of stuff could be made faster than the obvious thing I would try, but no matter what happens this ends up using more memory and more processing time - a cost that would be avoided if everything were in a safe language.
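For a rough sense of scale, here is a minimal sketch of that arithmetic (my own toy numbers plugged into the formula above; the 64-byte case matches the SPARC ADI granularity that comes up further down the thread):

    // Back-of-the-envelope tag memory overhead: an N-bit tag per M-byte granule.
    fn tag_overhead(tag_bits: u32, granule_bytes: u32) -> f64 {
        tag_bits as f64 / (granule_bytes as f64 * 8.0)
    }

    fn main() {
        // 4-bit tag per 16-byte granule (the MTE example above): ~3.1% of RAM goes to tags.
        println!("4b / 16B: {:.2}%", 100.0 * tag_overhead(4, 16));
        // 4-bit tag per 64-byte line (SPARC ADI granularity): ~0.8%.
        println!("4b / 64B: {:.2}%", 100.0 * tag_overhead(4, 64));
    }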
CHERI has the same set of problems, but I never looked into it too much as the costs seemed like they’d be huge, and conceptually it’s very “obvious” (it took me a bit of time to understand that MTE is intended to be probabilistic, and how you would then make use of it - CHERI is much easier to comprehend). But the same thing happens: you need more memory, and it requires more CPU time. Extra fun is that with CHERI the hardware burns resources on bounds checks for pointer accesses, even though in safe languages those checks have already happened.
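To make the “those checks have already happened” point concrete, here is an illustrative Rust sketch (not tied to any particular CHERI toolchain): the compiler either proves an access is in range or emits a single software check.

    // Illustrative only: how a memory-safe language already accounts for bounds.
    fn sum(xs: &[u64]) -> u64 {
        // Iterator form: the compiler knows every access is in range,
        // so no per-element bounds check is emitted.
        xs.iter().sum()
    }

    fn element(xs: &[u64], i: usize) -> Option<u64> {
        // Indexed form: exactly one software bounds check, then the access.
        xs.get(i).copied()
    }

On a CHERI target the loads inside both functions would still be validated against the capability’s bounds, which is the duplicated work described above.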
Pointer authentication has CPU time costs at least - I don’t think there is meaningful memory overhead, but it clearly has enough cost that simply “authenticate every pointer” isn’t something that can be done, otherwise someone would have written a post about doing so.
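As a very rough mental model of where that CPU time goes (a toy sketch of my own, not the actual QARMA-based hardware): signing folds a keyed MAC of the pointer and some context into the unused high bits, and every authenticated use has to recompute and compare that MAC before the pointer can be used.

    // Toy model of pointer authentication; real hardware uses a dedicated block
    // cipher. The point is only that sign/authenticate each cost a keyed
    // computation per protected pointer use.
    const ADDR_MASK: u64 = 0x0000_FFFF_FFFF_FFFF; // pretend the top 16 bits are unused

    fn toy_mac(ptr: u64, context: u64, key: u64) -> u64 {
        // Stand-in for the hardware cipher, producing a 16-bit "signature".
        ((ptr ^ context).wrapping_mul(key)).rotate_left(17) >> 48
    }

    fn sign(ptr: u64, context: u64, key: u64) -> u64 {
        (ptr & ADDR_MASK) | (toy_mac(ptr & ADDR_MASK, context, key) << 48)
    }

    fn authenticate(signed: u64, context: u64, key: u64) -> Option<u64> {
        let ptr = signed & ADDR_MASK;
        if signed >> 48 == toy_mac(ptr, context, key) {
            Some(ptr) // signature matches: strip it and use the pointer
        } else {
            None // forged or corrupted: real hardware faults on use instead
        }
    }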
But this also misses the point: I did not say “software cost”, I said “cost” without qualifiers. The reason is that it doesn’t matter whether any particular mitigation is implemented in software or in hardware, the only reason the mitigation is present is to make code written in unsafe languages less trivially exploitable.
If for the sake of argument someone wrote everything from the kernel to all of userspace solely in memory-safe languages, then none of these mitigations would be necessary. So you’d get back the RAM, the CPU time, the die space, etc. that is all burned solely for the benefit of unsafe languages.
Solaris SPARC ADI is doing just fine; it was actually one of the first UNIX systems with proper memory tagging, and this was done under Oracle stewardship.
I am not saying these don't work, and I don't know why people seem hell bent on interpreting everything I'm saying as "X does not work" when that was not anywhere in anything I said.
I said, very clearly, that these mitigations are necessary because of code written in unsafe languages, and that these mitigations are not free.
Some basic googling says that on an M7 there are 4-bit tags with a 64-byte granularity. That is 4 bits per 512 bits, so before any other costs you burn roughly 0.8% of your memory on these tags. The other costs are along the lines of increased chip complexity (caches, logic, lookahead/OoO, etc.).
It may be that we’re ok with these costs, but that doesn’t change the fact that the costs exist, and that they exist largely to mitigate exploits in unsafe code.
You can start with the DoD security assessment of Multics, where they explicitly refer to PL/I as the reason why Multics isn’t susceptible to typical exploits found on UNIX systems.
Or C.A.R. Hoare’s Turing Award speech, where he recalls that their customers clearly did not want bounds checking disabled.
Unisys still sells Burroughs (naturally modernized somewhat) as ClearPath MCP. The sales pitch: a mainframe OS for businesses whose top priority is security.
We know how it goes; it just happened that while most computers weren’t a target, no one cared about security beyond the occasional virus picked up from pirated software.
> That’s not what this is. This is largely about mitigating the damage that can be done once someone finds a memory safety error.
Hmm, seems like Apple should do better when there are perfectly good languages available which could prevent memory safety issues altogether. You’d think Apple would want to compete with Linux rather than watch it race away into the memory-safe future, leaving XNU behind with legacy memory safety problems.
Ok, so rather than writing a comment where you shit on the people who work at Apple and actually do care about security, I suggest you actually read the article. The article lists those safe languages, and explains why the existence of those safe languages does not solve security.