GitHub Advisory Database now powers NPM audit (github.blog)
98 points by todsacerdoti on Oct 7, 2021 | hide | past | favorite | 63 comments



Dan Abramov has an interesting take on npm audit (https://overreacted.io/npm-audit-broken-by-design).

Scanning through the vulns on GitHub Advisory Database, it looks like it still contains many that DA brought up as unhelpful at best (e.g. “Regular Expression Denial of Service”).


The irony is that those useless "Regular Expression Denial of Service" alerts create their own denial-of-service attack against `npm audit` consumers.


CVE-2021-41968

npm audit denial of service by Regular Expression Denial of Service CVEs


I've used esbuild's metadata file to filter npm audit's report and only warn for packages with files that are required at runtime. It's great at filtering out denial of service in the command line parser of my web server.
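A hypothetical sketch of how that filtering can work, assuming you build with esbuild's `--metafile` flag and have saved `npm audit --json` output. The field names (`metafile.inputs`, `report.vulnerabilities`) follow esbuild's metafile format and npm v7+'s audit report; the function name is illustrative:

```javascript
// Keep only advisories for packages that esbuild actually pulled into
// the runtime bundle; everything else (build/test tooling) is ignored.
function runtimeAdvisories(metafile, auditReport) {
  // Collect bundled package names from the metafile's input paths,
  // e.g. "node_modules/@babel/core/lib/index.js" -> "@babel/core".
  const bundled = new Set();
  for (const file of Object.keys(metafile.inputs)) {
    const m = file.match(/node_modules\/((?:@[^/]+\/)?[^/]+)/);
    if (m) bundled.add(m[1]);
  }
  // npm v7+ keys its report by package name under `vulnerabilities`.
  return Object.keys(auditReport.vulnerabilities || {}).filter((name) =>
    bundled.has(name)
  );
}
```

The same idea works with any bundler that can emit a dependency graph; the metafile is just a convenient, already-computed list of what actually ships.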


This sounds SUPER interesting. Do you have an example of this? I wonder if this is something we could adopt in npm itself



Ultimately I think everyone will find more headaches trying to avoid updating than they would from updating. This sort of thing, even when the exact issue is nonsense, is still a helpful prod to let me know I'm using unsupported software.


I for one do like to know when I have a denial of service vector in my app.


This is kind of a long story but the short version is that the tooling isn't yet able to differentiate between dependencies which are packaged and deployed with your application vs dependencies which are only used in the build process. The canonical example seems to be regex DOS vulns reported in a test library. What does an "exploit" look like? One of my colleagues writes a bad regex in a test, now the test suite takes an offensively long amount of time to finish. That's not a vulnerability, that's a broken build. Even if they bypassed the automated tests and forced a deployment to production the offending regex would not be included.
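For the curious, "a bad regex" is easy to reproduce. A minimal illustration (the pattern here is a textbook example, not taken from any real package):

```javascript
// Nested quantifiers make a backtracking regex engine try exponentially
// many decompositions of the input when the match ultimately fails.
const evil = /^(a+)+$/;

function timeMatch(n) {
  const input = "a".repeat(n) + "!"; // almost matches, forcing backtracking
  const start = Date.now();
  evil.test(input);
  return Date.now() - start;
}

// Matching succeeds instantly on well-formed input:
evil.test("aaaa"); // true
// ...but timeMatch(n) roughly doubles per extra "a"; around n = 30 a
// single .test() call can hang the event loop for seconds.
```

In a test suite that just means a slow build, as described above; it only becomes a security issue when an attacker controls the input to the regex at runtime.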


While I agree that regex DoS isn't a very useful thing to highlight in a build, there are risks that come with vulnerable code running in CI/CD systems.

There's a whole topic in information security and infrastructure hardening that centers around SDLC (secure development lifecycle) and it covers such thrilling topics as:

1. Is the code from your SCM the same as the code that was built and deployed?

2. Has anyone tampered with your build or deployment artifact?

3. Who all actually has access to the boxes that do all this? Can you tell when folks access your build boxes and what has changed on them?

4. Does your CI/CD system have privileged access to secrets or other internal networks?

5. How confident are you that your digital supply chain is reliable and trustworthy? Would you know if someone fed you a poisoned dependency?

The list goes on and on. There be dragons here. I'd love to see less focus on regex DoS and more awareness around the fact that build systems need just as much scrutiny and security as the production environments they feed into.


People act like tooling that promotes useless busywork is "free security advice". It's actively harmful.

Prioritising security activities that aren't useful (fixing "regex dos" in dev deps) takes time and effort that could have been spent on real bugs or product improvement.


It’s not actively harmful. No one is forcing you to do anything. If your team sets the standard as zero security findings, that’s a team workflow issue, not the tool's. Information is information. What you do with it is your choice.


It's a problem at scale.

Incorrect, expensive-to-evaluate information, broadcast to tens or hundreds of thousands of people, is misinformation, and yes, it's worse than no information.

By notifying you of "vulnerabilities" which aren't, these tools effectively "amplify bullshit".

It wastes the time of your team and in an open-source context also wastes the time of your downstream consumers. Triaging (evaluating the bullshit) takes time. It is often quicker to "fix" the non problem by upgrading but of course this pushes additional version churn on everyone downstream of you.

Even ignoring it wastes time as you have to communicate why you're doing it to "helpful" third parties. In an organisational context you're often stuck with security people who don't know any better or an org policy with metrics around this.

If the information were categorised correctly at the source, or the tooling were smart enough, or the rules nuanced enough to capture the reality of the situation then the time and effort of every team in the ecosystem could be saved.

Unfortunately researchers are incented to produce CVEs regardless of quality, and of course the CVSS score is always calculated "worst case" despite the fact that 90% of the time these issues are in barely used parts of the codebase or exploitable only in unusual configurations.

The tooling then makes this poor quality data worse by completely ignoring context. For example, basically every docker scanning tool on the market will report sev 9.8s on "linux kernel vulns" (based on installed headers) or systemd or cron privesc bugs... in a container... based on matching package versions inside the container.

It's all just incredibly lazy engineering from scanning tool vendors who afaict don't QC their data feeds at all, and are also incented to maximise "findings" on every scan.

I believe there's room for a startup that does "collaborative triage" of security issues to help stem the tide of this, because no vendors seem interested in fixing it.


> the tooling isn't yet able to differentiate between dependencies which are packaged and deployed with your application vs dependencies which are only used in the build process

You can run

    npm audit --production
to audit only the "dependencies" that are what deployed code is using, but not "devDependencies" which are what build and test infrastructure is using.

> That's not a vulnerability, that's a broken build.

True, but at the same time you shouldn't discount potential issues in the build toolchain as totally harmless. A classic example is a malicious transitive dependency that steals your signing keys. Or one that triggers long builds, slowly (or rapidly) burning the money you pay for CI.

The thing is, with every potential vulnerability you should assess whether it actually applies to you. There are countless reasons for which a vulnerability cannot be exploited in your particular situation: you don't even use a feature, you use it with proper safeguards, you don't deploy that code to anywhere that matters, you run it in a sandbox, etc.

But doing so is not that easy when you have hundreds of transitive dependencies and npm audit cries wolf daily. I have this little library with exactly one direct dependency and what do I see?

    found 7 vulnerabilities (4 moderate, 3 critical) in 124 scanned packages
This is doable. Now multiply it by 200 and you can't be bothered to review it all and decide whether things apply to you or not.

Extreme positions are to either ignore the advisories entirely (“it’s not an issue until I have a real user with a real complaint”), or treat all of them as immediate priority (“npm audit says we have 7 critical vulnerabilities, those must be real and actual, drop whatever you're doing and fix that NAO”).

A middle ground is some arbitrary policy, like don't sweat it until we have more than N potential vulnerabilities, or someone happened to read their descriptions and thinks they are particularly nasty.
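A policy like that can at least be applied mechanically rather than by eyeballing the report. A hedged sketch, assuming npm v7+'s `--json` report shape (a `vulnerabilities` object keyed by package name, each entry carrying a `severity` field); the function names and threshold are illustrative:

```javascript
// Count advisories per severity so a policy like "fail CI only on
// high/critical" can be applied automatically.
function countBySeverity(report) {
  const counts = { low: 0, moderate: 0, high: 0, critical: 0 };
  for (const vuln of Object.values(report.vulnerabilities || {})) {
    if (vuln.severity in counts) counts[vuln.severity] += 1;
  }
  return counts;
}

// Example policy: only something severe should break the build.
function shouldFail(report) {
  const c = countBySeverity(report);
  return c.high + c.critical > 0;
}
```

It doesn't make the triage problem go away, but it does make the arbitrary threshold explicit and reviewable instead of living in someone's head.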

The bad thing here is that the npm audit scale promotes this sort of middle-ground position-taking without involving actual security people.


I stand corrected, thanks! Elegance in simplicity https://github.com/npm/cli/pull/202/files#diff-2a8ed1f0d31e4...


I've used esbuild's metadata file to filter npm audit's report and only warn for packages with files that are required at runtime. It's great at filtering out denial of service in the command line parser of my web server.


Wouldn't you want build tooling to be a `devDependency`? And npm audit can be configured to only run on non-devDependencies


Many of these kinds of things are in places that could never actually result in a denial of service. There is a large problem with security reporting where people report issues which could never be exploited, and then scanning tools end up showing hundreds of “vulnerabilities” which don’t actually represent any danger.


Couldn't agree more - we built Dassana [1] to solve this problem by adding context to security alerts. Currently we support AWS Config/GuardDuty alerts, but we have been thinking of adding context to vuln scan results too.

[1] https://oss.dassana.io/


I mean, you don't have to use the scanning tools if you'd prefer not to know.


It’s not about not wanting to know, it’s about wanting to know about real security issues and being buried in low quality reports which do not need to be addressed.


The problem is it gets weaponized. If you're unlucky you have a client who runs a scan naively on your software and demands these get addressed as critical priorities under SLA.

I'm sure there are plenty out there with managers who will just run the report and demand these "security issues" are fixed and won't understand an explanation on why it's not relevant.


This is a silly objection in my opinion - clients demand unreasonable things all the time. It's up to you to write a reasonable SLA that protects you.

Don't get me wrong, I wish npm audit was better. But unreasonable people will be unreasonable regardless of the tools they have access to.


The risk that a person could write an infinite while loop and commit it to a codebase is about the same as the threat posed by the DoS "vulnerabilities" that seem to have completely overrun the npm audit system. No one wants to know about the former risk.


Even so, you should just upgrade to a secure version anyway. Wanting to stick with an old version that has known vulnerabilities seems like something you shouldn't want to do.

If anything you should be promoting easier upgrades for the packages affected by these security problems.


If only it were so easy. Typically, you're not using these dependencies directly. You use something that uses them. And it's not compatible with the "secure" version.


This sounds like a bigger problem than whatever issues exist with npm audit. Security updates should not be hard to apply. When a vulnerability drops that impacts your project severely, it should be easy to get to a secure state. Having to coordinate with downstream projects, or finding that the secure version is incompatible, sounds like a big problem with the ecosystem.


> vulnerability drops that impacts you project severely

Well, if it's a regex DoS in build tooling, it doesn't impact your project severely. That's the entire point of this thread.


I never said it did. I am saying that if it's hard to patch irrelevant issues it is also hard to patch relevant issues. The ideal would be it is easy to patch all issues so you can always just upgrade to a safe version.


Sure, but that's not something you can fix at the ecosystem level; it requires effort from the package maintainers and/or researchers to backport these fixes to older versions. And if the fix requires effort from the maintainers but is not actually very beneficial to anyone (i.e. the vulnerability does not actually impose a security risk), then we are back to the question of whether it's really worth it. The effort has just been moved from the consumer to the library author.


Just because npm audit says something doesn't mean it has any impact at all. Most of its outputs are false positives.


But when it's not a false positive, when it actually impacts your project it should be easy to update.


So do I, so when a tool tells me there's a DoS vector in a devDependency that never gets put on a live server, I ignore it. If it happens a lot, I start to ignore all those warnings. If that leads me to ignore a real warning, then the warning has had a negative impact.

If you ever heard the story of "The Boy Who Cried Wolf" as a child you should recognise what npm audit is doing wrong.


Your company may care about it. Endpoints are as common a vector for attacks as the server is. As a penetration tester, I’ve used DoS vectors before to make a developer think they needed their password to kill an app, and stole their credentials to then breach their other assets. If you think the server is all that matters, you are a good target in my eyes.


Just because there is a CVE in a dependency does not mean it's exploitable in the context of your application.

Say there was a regex DoS in nodemon: I only use it for local development, and it has no effect on my main application.

The point is that npm audit is advisory, since it doesn't know the context in which a dependency is used.


> Just because there is a CVE in a dependency does not mean it's exploitable in the context of your application.

Sure, but I'd rather be alerted to the issue so that I can investigate.


Problem is, reacting to these alerts is a full time job. And when you’re faced with endless false positives you stop paying attention and eventually disable it.


Shouldn't there be a Node package where someone's opinion of what is a false positive gets automatically applied?
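Tools in this vein do exist (better-npm-audit, for example, lets you exclude specific advisories), and the core mechanic is small. A hypothetical sketch, assuming npm v7+'s `--json` report shape, where each entry's `via` array mixes advisory objects (with numeric `source` IDs) and plain package-name strings for transitive chains:

```javascript
// Filter an `npm audit --json` report against a shared set of advisory
// IDs the team has triaged as not applicable. Function and variable
// names are illustrative.
function filterTriaged(report, ignoredIds) {
  const remaining = {};
  for (const [name, vuln] of Object.entries(report.vulnerabilities || {})) {
    // Look only at the advisory objects, not the string entries.
    const ids = (vuln.via || [])
      .filter((v) => typeof v === "object")
      .map((v) => v.source);
    const allIgnored = ids.length > 0 && ids.every((id) => ignoredIds.has(id));
    if (!allIgnored) remaining[name] = vuln;
  }
  return remaining;
}
```

The hard part isn't the filter, of course; it's agreeing on (and maintaining) whose triage opinions go into the ignore list.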


Then, if you build a successful product, you get:

* ransomware

* free PR from haveibeenpwned (but not the kind you want)

* a burning desire to put my (or one of my AppSec/IR peers) kids through college.

That said, YMMV.


I think we all agree with this. However, the semantics boil down to, if you use npm:

- with no advisory database/dependency scanning, you'll eventually suffer this parade of horribles

- with an advisory database / dependency scanning which results in tons of false positives, you will also suffer from the same parade of horribles.

A scanning system with relatively few false positives could change the result. But very, very few groups can afford to follow up on all of these. And those which can tend to take ownership of their own dependencies, perhaps privately forking them (as Google did with Linux).

Maybe dependency advisories will eventually grow into the role we want them to fill and become far more useful. It's probably better than nothing in its current form, but it's hard to say how much better than nothing.


I think the bigger issue is when people rate ReDoS as higher severity than code execution vulns.

But ultimately, vuln lists are just lists not magic. You still have to apply them to your context.


How do you feel about false positives?


Is it just me, or does Dependabot have a bit higher rate of false positives? Does this mean that now npm audit === dependabot?


If Microsoft owns GitHub, then doesn't that mean Microsoft now controls all of npm audit?


Microsoft controls all of npm, since npm is owned by GitHub.

https://github.blog/2020-03-16-npm-is-joining-github/


Ehhhh... that just makes me feel so uneasy. Things like npm should ideally be maintained by the community or its own organisation. It's too tempting for a big corp like Microsoft to add its own little bits to listed packages here and there. :\


It was maintained by a separate (VC-funded) organisation. That got sold to GitHub/Microsoft.


Looking at the latest ill-issued lodash CVE, they seem to deliver CVEs that have been withdrawn in the GH Advisory Database.


Product Manager for npm here. That was correct. As part of our integration, we were not excluding withdrawn advisories. We've since corrected this. Apologies!


I kinda wish there were a bigger cost to dependencies, such that adding one hurt your project more than it currently does. For example, what if Node added one second of startup time per module, including nested modules? I'm not suggesting they actually do that, but it would be interesting to see if and how it changed the behavior of adding so many dependencies. Maybe npm could charge for every dependency over 10. Again, I'm not suggesting they actually do this, just wondering if behavior would change.


    npm audit
    npm ERR! code ENOAUDIT
    npm ERR! audit Your configured registry (https://registry.npmjs.org/) may not support audit requests, or the audit endpoint may be temporarily unavailable.

looks like everyone's trying it :)


Or NPM is having issues ;-) https://status.npmjs.org/


that’s the point :-)


I ran into this today. Delete node_modules and package-lock.json then install and run audit again.


I wish there was equivalent functionality built into pip.


Not quite the same thing, but there is https://pypi.org/project/safety/


It’s not ready for prime time yet, but this is being worked on[1].

[1]: https://github.com/trailofbits/pip-audit


How long before GitHub becomes Azure Codespaces, npm becomes Azure Packages, and Node.js/TypeScript becomes AzureScript?


Weird, it's breaking Vercel builds too.


That might be unrelated to the audit database change as NPM is currently having a service outage.

https://status.npmjs.org/incidents/7pqfqvkwvb58


Probably, yeah :) The builds fail at the "npm audit" command, so I incorrectly made the connection.


They "own" my workplace as well. My editor. My programming language. It starts to look bad.

And yes, we have major deployment night and npm is offline, nice.


NPM is down at the moment, so ...




