
This isn’t the Linux kernel, but I’d say it’s fair to assume the same likely applies to it:

> Most of our memory bugs occur in new or recently modified code, with about 50% being less than a year old.

> […] we’ve found that old code is not where we most urgently need improvement.

https://security.googleblog.com/2021/04/rust-in-android-plat...




That makes some intuitive sense, right? The fact that the code got "old" in the first place indicates it isn't being touched much, which means no bugs are being found in it; so whatever bugs remain must be especially sneaky edge cases, or there simply aren't any serious ones to begin with.


This is one of those issues that can really cut both ways. I don't think "it's been working for years, so it's fine" is the best attitude; there could be subtle bugs, and rarely exercised areas. Still, it often holds true. We all remember times we've meddled with something and messed it up. Some of this low-level code has been exercised heavily, in many different ways, and seems to just work. Especially if it isn't safety- or security-critical (and maybe even if it is), trying to fix something that isn't broken can be a poor use of resources.


I had this argument with some of my management. There was a push to upgrade all of our libraries that were deemed "old," with no criterion other than "it's outdated." I can appreciate shiny new toys, but if you're not hitting bugs and things are stable, I'd rather put my effort into adding features to our codebase than into chasing down new library bugs.


There’s arguably more risk in not updating dependencies: you miss the bug and security fixes that new releases carry.
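
A middle ground is to pin versions but still accept patch releases, which mostly carry exactly those fixes. A minimal sketch, assuming a Python project using pip (the package name is just an example); PEP 440's "compatible release" operator expresses this:

    # requirements.txt: hold the minor version steady, but
    # automatically accept patch releases (bug/security fixes).
    # "~=2.31.0" means >=2.31.0 and <2.32.0 under PEP 440.
    requests~=2.31.0

That way you keep the stability argument above without falling behind on fixes.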



