My opinion is that they are. There's another argument here, one the essay touches on at the end: developers don't need to know the ins and outs of each and every vulnerability class. That's just not feasible if we expect developers to actually ship software. We need to make the tools themselves safe and fix these problems at the source, instead of patching the tail end.
I don't really see the point of automated triaging myself. Given the sophistication and skill of some exploit writers out there, you should just assume that an attacker-controlled write primitive will eventually lead to RCE. The essay mentions this when it goes into the intersection of skill, motivation and resources. But on the defender side this shouldn't matter, no? A bug is a bug, and memory corruption should just be assumed to be the worst case.
> I don't really see the point of automated triaging myself. Given the sophistication and skill of some exploit writers out there, you should just assume that an attacker-controlled write primitive will eventually lead to RCE. The essay mentions this when it goes into the intersection of skill, motivation and resources. But on the defender side this shouldn't matter, no? A bug is a bug, and memory corruption should just be assumed to be the worst case.
I think the utility of auto-triaging is when the bug isn't a clear arb read/write. The article links to https://seanhn.files.wordpress.com/2019/11/heelan_ccs_2019.p..., which talks about how to automatically develop exploits from limited heap corruption primitives.