Hacker News

It's remarkable how C memory safety vulnerabilities are the most talked about. But the most impactful ones come from insight collapse: over-complexity caused by following so-called best design practices.


The people making noise about unsafe C and then recreating Byzantine monsters are often the same camp; it's part of the normalized incentive to keep yourself employed by building Rube Goldberg machines for simple apparatus.


I’ve experienced a strange phenomenon:

The people who rant most (to zealot levels) about “unsafe C” are the most likely to write horribly insecure code in whatever their “safe” language of choice is.

It reminds me of early aviation tech like computers, autopilot, fly-by-wire, etc. In many cases accident rates went up initially because pilots (very wrongly) assumed all of these magical newfangled technologies could just fix and handle everything.

I’m not saying C isn’t fundamentally unsafe but your favorite memory safe language isn’t a cure-all and I can’t remotely understand how you could think it is.


> The people who rant most (to zealot levels) about “unsafe C” are the most likely to write horribly insecure code in whatever their “safe” language of choice is.

A longer story would be more convincing for those of us who have different experiences.


Writing correct code is hard/tedious, as is writing C. You can extrapolate the rest.


It's a lot easier to have no obvious flaws than to obviously have no flaws. Complexity is one hell of a drug. sigh


For sure UBI will stop people from doing that, if everyone is getting 300k worth of tax-free credits so they can work on things that actually make the world better


It feels unproductive to stage a discussion about anything around such unrealistic numbers. Very few people make 300k+, the average US salary is just under 60k.


Organizational bugs can be very impactful to the organization that has them. But the memory safety bugs are typically exploitable via automatic methods and affect all organizations using the software. For example, Heartbleed was so talked about because it was so impactful.


There's plenty of really nasty C-specific zero days out there, but the bulk have rather limited applicability: witness the latest curl CVE. Potential compromise of the world's largest password silo, however, sounds clearly impactful. And whether Heartbleed was worse than log4shell is mostly philosophical.


> And whether Heartbleed was worse than log4shell is mostly philosophical.

Most definitely not. Heartbleed means that as long as you are on the internet they have your certs, full stop. Log4shell required access in both directions (to inject the vulnerable string, then for the target machine to load the payload off the internet) to do anything.

The sheer fact that you think it is a 'philosophical' difference points to you not understanding anything about the topic. log4shell could be trivially prevented by common security practices like "not allowing your apps to freely access anything on the internet"; heartbleed could not.


Log4shell was an ACE (arbitrary code execution) attack, not a mere leak. So yes, philosophical.


It's just complexity. Stuff gets missed, stuff gets updated but the cascading effect it will have on other logic gets overlooked, etc. Complexity breeds vulnerability. But layered security defenses and countermeasures can reduce risk. I like to think our inability to grasp vast levels of complexity is the root cause.


What can be measured tends to get optimized. We have numerical, easily gatherable data on software vulnerabilities. However we, imo, lack information on how (and sometimes even when) organizations get data stolen. The reasons behind the latter are most likely complex, as they pertain to humans, and as such it's easy to look for a rescue in technology.


what is "insight collapse"? that is a very interesting concept to me.


You start with a small product that you fully understand. Eventually you hire more people; you onboard them and they understand the whole product, but are mostly focused on the big chunks of it and how they interact. Then those people each become a team of mixed quality; some know how things work, some don't, but together they can figure out their things in the part of the product they own and can talk with other teams to get things done. Eventually everyone who was there at step 2 leaves. Skip 10 years of growth, adding features and the usual attrition (assuming better people have more mobility and opportunities). At this point you have a critical mass of people who can work on the product, but don't know how we got here and what not to do. At some point fundamental assumptions change due to external events and the system as a whole no longer solves the original problem in the most efficient and elegant way. It is what it is. Throw in some managers hired from a bigger co who know how to do things the right way.


thanks.... can it be measured? is there a unit of insight where you can graph it and watch it go down?


All of this security policy relies on memory safety. You build policies on the idea that you can trust some broker to manage and enforce them. Memory safety vulnerabilities subvert all of that; that's why they're so serious.

Consider an SSH server. The most likely attack against it isn't a memory safety vuln, it's stealing someone's key. But if there were a memory safety vuln, keys wouldn't even be part of the conversation: that vuln would allow the attacker to bypass policy altogether.



