My opinion is that they are. There's another argument to be had that the essay touches on at the end: developers don't need to know the ins and outs of each and every vulnerability class. It's just not feasible if we expect developers to actually ship software. We need to make the tools themselves safe and fix these problems at the source, instead of patching the tail end.
I don't really see the point of automated triaging myself. Given the sophistication and skill of some exploit writers out there, you should just assume that an attacker-controlled write primitive will eventually lead to RCE. The essay mentions this when it discusses the intersection of skill, motivation, and resources. But on the defender's side this shouldn't matter, no? A bug is a bug, and memory corruption should simply be assumed to be the worst case.
> I don't really see the point of automated triaging myself. Given the sophistication and skill of some exploit writers out there, you should just assume that an attacker-controlled write primitive will eventually lead to RCE. The essay mentions this when it discusses the intersection of skill, motivation, and resources. But on the defender's side this shouldn't matter, no? A bug is a bug, and memory corruption should simply be assumed to be the worst case.
I think the utility of auto-triaging is when the bug isn't a clear arbitrary read/write. The article links to https://seanhn.files.wordpress.com/2019/11/heelan_ccs_2019.p..., which describes how to automatically develop exploits from limited heap corruption primitives.
Serious question: has anyone ever found an exploitable bug in software written in something safer than C/C++? Remote shells in Java, C#, Rust, Go, Haskell?
There's a big difference between intentionally duplicating a known flaw in a safe language and unintentionally producing a new exploitable flaw in a safe language.
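To make the "unintentional new flaw" category concrete: memory-safe Java has a whole exploitable bug class in unsafe deserialization, where simply calling `readObject()` on attacker-supplied bytes lets the attacker pick which classes' deserialization hooks run. The sketch below is illustrative only (the `Pwn` class and its harmless side effect are made up); real attacks chain existing library classes ("gadget chains") to reach `Runtime.exec`.

```java
import java.io.*;

public class DeserDemo {
    // Stand-in for a class an attacker can reach on the server's classpath.
    static class Pwn implements Serializable {
        private void readObject(ObjectInputStream in)
                throws IOException, ClassNotFoundException {
            in.defaultReadObject();
            // In a real gadget chain this is where code execution happens;
            // here we just prove attacker-chosen code ran during decode.
            System.out.println("attacker-controlled readObject executed");
        }
    }

    public static void main(String[] args) throws Exception {
        // Attacker serializes an object of a class of *their* choosing.
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bos);
        oos.writeObject(new Pwn());
        oos.flush();
        byte[] untrusted = bos.toByteArray(); // pretend this came off the wire

        // The server only calls readObject(); the encoded class decides
        // what code runs before the server ever sees the result.
        new ObjectInputStream(new ByteArrayInputStream(untrusted)).readObject();
    }
}
```

No memory corruption is involved anywhere; the language's safety guarantees hold throughout, which is exactly why this class of bug survives in safe languages.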
As the many responses indicate, you've set the bar way too high here. A narrower question could be something like:
"Did someone ever find an exploitable bug of the type it was intended to be safe from?" In other words, are there examples of the safety guarantees themselves being exploited?
The answer to this is "of course there are." An entire class of such exploits can be derived from CVE-2009-3869. Java 6 and older did not randomize address space, so any ability to write arbitrary memory was easy to exploit. There are other examples.
You need to give your intuitions a good hard shake. Most of the software written over the last decade and a half has been written in memory-safe languages. Almost every bug bounty program is run for software written primarily in memory-safe languages. Is it your impression that game-over bugs in those things are rare?
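And game-over bugs in memory-safe code don't even require breaking the runtime: injection flaws are a textbook example. A minimal, hypothetical sketch in Java (the `buildCommand` helper is invented for illustration) shows attacker input flowing into a shell command entirely within safe code:

```java
public class PingService {
    // Vulnerable pattern: untrusted input concatenated into a shell string.
    static String[] buildCommand(String host) {
        return new String[] { "sh", "-c", "ping -c 1 " + host };
    }

    public static void main(String[] args) {
        // Attacker supplies a hostname with a shell metacharacter.
        String attackerInput = "example.com; cat /etc/passwd";
        String[] cmd = buildCommand(attackerInput);
        // Print instead of exec'ing: the injected command survives intact,
        // so Runtime.exec(cmd) here would run the attacker's second command.
        System.out.println(cmd[2]);
    }
}
```

Every array index, every string operation here is fully memory-safe; the "game over" comes from the logic, which is why bug bounty programs for memory-safe codebases stay busy.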