Hacker News

While I agree that regex DoS isn't a very useful thing to highlight in a build, there are risks that come with vulnerable code running in CI/CD systems.
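
For context, the mechanism behind regex DoS is real even when the finding isn't actionable: a pattern with nested quantifiers can backtrack exponentially on a non-matching input. A minimal sketch in Python (the pattern is a textbook catastrophic-backtracking example, not taken from any particular advisory):

```python
import re
import time

# Nested quantifiers force the engine to try exponentially many ways to
# split the run of 'a's between the inner and outer groups before failing.
EVIL = re.compile(r"(a+)+$")
SAFE = re.compile(r"a+$")  # accepts the same strings, in linear time

payload = "a" * 20 + "b"  # the trailing 'b' guarantees the match fails

t0 = time.perf_counter()
EVIL.match(payload)
evil_time = time.perf_counter() - t0

t0 = time.perf_counter()
SAFE.match(payload)
safe_time = time.perf_counter() - t0

print(f"evil: {evil_time:.4f}s  safe: {safe_time:.6f}s")
```

Each extra 'a' in the payload roughly doubles the evil pattern's failure time, which is why the severity depends entirely on whether an attacker controls the input.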

There's a whole topic in information security and infrastructure hardening that centers around SDLC (secure development lifecycle) and it covers such thrilling topics as:

1. Is the code from your SCM the same as the code that was built and deployed?

2. Has anyone tampered with your build or deployment artifact?

3. Who all actually has access to the boxes that do all this? Can you tell when folks access your build boxes and what has changed on them?

4. Does your CI/CD system have privileged access to secrets or other internal networks?

5. How confident are you that your digital supply chain is reliable and trustworthy? Would you know if someone fed you a poisoned dependency?

The list goes on and on. There be dragons here. I'd love to see less focus on regex DoS and more awareness around the fact that build systems need just as much scrutiny and security as the production environments they feed into.




People act like tooling that promotes useless busywork is "free security advice". It's actively harmful.

Prioritising security activities that aren't useful (fixing "regex DoS" in dev deps) takes time and effort that could have been spent on real bugs or product improvement.


It’s not actively harmful. No one is forcing you to do anything. If your team sets the standard as "no security issues", that's a team workflow issue, not a tooling one. Information is information. What you do with it is your choice.


It's a problem at scale.

Incorrect, expensive-to-evaluate information, broadcast to tens or hundreds of thousands of people, is misinformation, and yes, it's worse than no information.

By notifying you of "vulnerabilities" which aren't, these tools effectively "amplify bullshit".

It wastes the time of your team, and in an open-source context it also wastes the time of your downstream consumers. Triaging (evaluating the bullshit) takes time. It is often quicker to "fix" the non-problem by upgrading, but of course this pushes additional version churn onto everyone downstream of you.

Even ignoring it wastes time, as you have to explain to "helpful" third parties why you're doing so. In an organisational context you're often stuck with security people who don't know any better, or an org policy with metrics built around this.

If the information were categorised correctly at the source, or the tooling were smart enough, or the rules nuanced enough to capture the reality of the situation, then the time and effort of every team in the ecosystem could be saved.

Unfortunately, researchers are incentivised to produce CVEs regardless of quality, and of course the CVSS score is always calculated for the worst case, despite the fact that 90% of the time these issues are in barely used parts of the codebase or exploitable only in unusual configurations.

The tooling then makes this poor-quality data worse by completely ignoring context. For example, basically every Docker scanning tool on the market will report sev 9.8s for "Linux kernel vulns" (based on installed headers), or for systemd or cron privesc bugs... in a container... based on nothing more than matching package versions inside the container.
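
The failure mode being described is essentially version matching with no check on whether the flagged component can even execute in that environment. A sketch of the difference, with entirely hypothetical feed data (the package names, CVE IDs, and scores below are illustrative, not real advisories or any real scanner's logic):

```python
# Hypothetical vulnerability-feed entries for illustration only.
VULN_DB = [
    {"package": "linux-headers", "max_affected": "5.10", "cve": "CVE-XXXX-0001",
     "severity": 9.8, "component": "kernel"},
    {"package": "openssl", "max_affected": "1.1.1", "cve": "CVE-XXXX-0002",
     "severity": 7.5, "component": "userspace"},
]

def parse_ver(v: str) -> tuple[int, ...]:
    """Compare versions numerically, not lexically ('5.4' < '5.10')."""
    return tuple(map(int, v.split(".")))

def naive_scan(installed: dict[str, str]) -> list[str]:
    """What the comment describes: flag on a version match alone."""
    return [v["cve"] for v in VULN_DB
            if v["package"] in installed
            and parse_ver(installed[v["package"]]) <= parse_ver(v["max_affected"])]

def context_aware_scan(installed: dict[str, str], is_container: bool) -> list[str]:
    """Same version match, but suppress kernel findings inside a container,
    where the host kernel is what actually runs."""
    return [v["cve"] for v in VULN_DB
            if v["package"] in installed
            and parse_ver(installed[v["package"]]) <= parse_ver(v["max_affected"])
            and not (is_container and v["component"] == "kernel")]

container_packages = {"linux-headers": "5.4", "openssl": "1.1.0"}
print(naive_scan(container_packages))          # flags both; the kernel one is noise
print(context_aware_scan(container_packages, is_container=True))  # openssl only
```

Real reachability analysis is much harder than this one-line suppression rule, but even this level of context would eliminate the class of false positive described above.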

It's all just incredibly lazy engineering from scanning-tool vendors who, afaict, don't QC their data feeds at all, and are also incentivised to maximise "findings" on every scan.

I believe there's room for a startup that does "collaborative triage" of security issues to help stem the tide of this, because no vendors seem interested in fixing it.



