The SBOM Frenzy Is Premature (crashoverride.com)
7 points by todsacerdoti on Oct 3, 2022 | 15 comments



I disagree with much of this article.

Package managers should be deterministic when using lockfiles.

Manually updating dependencies in a commit and pushing that is a good idea.

Having automation keyed off a release to push an SBOM for that version is not asking much. The scans are extremely fast.
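
To make that concrete, here's roughly what I mean, as an untested sketch: walk the lockfile (the v2/v3 package-lock.json layout) on every release and emit a minimal CycloneDX-shaped document. Real generators (syft, cyclonedx-npm, etc.) do this properly; the point is just how little ceremony the release hook needs.

    import json, sys

    # Sketch: turn package-lock.json (lockfile v2/v3, deps under "packages")
    # into a minimal CycloneDX-shaped SBOM. Use a real generator in practice.
    lock = json.load(open("package-lock.json"))

    components = []
    for path, meta in lock.get("packages", {}).items():
        if not path:  # "" is the root project itself
            continue
        name = path.split("node_modules/")[-1]
        components.append({
            "type": "library",
            "name": name,
            "version": meta.get("version", "unknown"),
            "purl": f"pkg:npm/{name}@{meta.get('version', '')}",
        })

    json.dump({"bomFormat": "CycloneDX",
               "specVersion": "1.4",
               "components": components}, sys.stdout, indent=2)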

Deployments should be as immutable as possible, if for no other reason than to know you've tested exactly what's being deployed. I'm astonished the article was written to reach the opposite conclusion.

Curious as to what others think, though.


I heavily disagree with this article as well. The author comes across as someone who was not in the war room of any large or security-conscious company trying to determine whether they were vulnerable to Log4Shell. Anything is a significant step forward from what we have now, which is nothing.

The fact that package managers aren't deterministic is an even stronger argument for SBOMs, not against.

I suspect the confusion is that they think SBOMs are disconnected from the actual artifacts that get deployed. They shouldn't be. In most circumstances, even if you aren't deploying to "the cloud", you should be following basic "12 Factor App" practices and building your artifact once. As you mentioned, deploying the same artifact to your production and staging environments not only simplifies deployment but also makes your SBOM accurate.
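
One way to keep that linkage honest is to stamp the SBOM with the digest of the artifact it describes, so it can later be matched against what's actually running. A sketch; "app.war" and the field name are placeholders, not any particular spec:

    import hashlib, json

    # Sketch: record the artifact's digest inside its SBOM so the two can
    # be matched later. Field name is made up, not a CycloneDX field.
    digest = hashlib.sha256(open("app.war", "rb").read()).hexdigest()

    with open("sbom.json") as f:
        sbom = json.load(f)
    sbom.setdefault("metadata", {})["x-artifact-digest"] = f"sha256:{digest}"

    with open("sbom.json", "w") as f:
        json.dump(sbom, f, indent=2)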

There is a whole other discussion about software packaging best practices. Saying "SBOMs are bad because packaging is bad" is like saying airbags are premature because we need to fix the brakes first.


I was actually involved in a load of Log4j responses; I was the founder of SourceClear, the first SCA security pure play. I do get your point, but what I see time and time again are things like a repo being built into, say, a WAR file, and no one knowing where that file was deployed. We just finished a set of fresh interviews with over 50 security leaders, and almost all of them spoke about that issue: they don't know where things are, despite using Snyk and other tools, because there is no linkage in their environment between the repo and the deployment. I get your analogy, but I think it's more like saying there is no point in a fancy dashboard on a car if the telemetry coming off the engine is questionable.


I looked you up after posting that, so I am willing to eat crow. ;)

While I understand and agree with the larger thesis of your article, I feel like it sends the wrong message. I work at a large FI where it's common for applications to be developed by a vendor and then dumped into the environment as a massive .war file; the White House's push for SBOMs gave us significant ammunition to drive changes around in-house and vendor-built apps. Is scanning all of this stuff with an SCA tool like Snyk, Xray, or Nexus Lifecycle going to give you 100% coverage and help you realize that an intern installed a vulnerable version of Elasticsearch on a VM without telling anyone? No. Are there going to be false negatives where the scans don't report the proper dependencies? Yes. But having an inventory of what you have is a great first step, even if it isn't 100% accurate or you don't know where it's deployed, as long as you're cognizant of those limitations.

> We just finished a set of fresh interviews with over 50 security leaders and they almost all spoke about that issue, they don't know where things are despite using Snyk and other tools because there is no linkage in their environment between the repo and the deployment.

I was the "war room" and heavily involved with Log4Shell remediation, so I completely agree and empathize with this experience. We were lucky to have a large suite of tools like Tenable, Aqua, Mergebase's open source log4j-detector, and an in-house built catalog of all servers and assets, which allowed us to piece together info and get a better understanding of the environment. We did multiple passes of environments with multiple tools. It was a greulling month of work, but it would have been even more so if we didn't have existing imperfect solutions.


Definitely no need to eat crow. Never heard that phrase before; funny. It's just my opinion, I often get it wrong, and there are definitely a few ways to think about this.

100% agree on getting more visibility and ammunition, and 100% agree on having an inventory. I was leading the effort at the OSSF to create the plan that was taken to the White House summit ;-)

I can't believe there isn't a good tool for scanning production to match build outputs. Sounds like a good OSS tool project!
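
The core of it is almost embarrassingly small, something like this (untested; assumes build pipelines append image digests to a builds.json, and uses kubectl as a stand-in for whatever your environment exposes):

    import json, subprocess

    # Sketch: diff what's running against what was built.
    # Digest-string normalisation between the two sources is handwaved.
    built = set(json.load(open("builds.json")))  # digests recorded at build time

    pods = json.loads(subprocess.check_output(
        ["kubectl", "get", "pods", "-A", "-o", "json"]))
    running = {status["imageID"]
               for pod in pods["items"]
               for status in pod["status"].get("containerStatuses", [])}

    for digest in sorted(running - built):
        print("deployed but never seen at build time:", digest)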


I don't quite understand the deployment issue. I mean, I understand people might not be tracking what's deployed, but I don't understand what's missing for that to happen today, other than will.

For example: I build some software into a Docker image, version-tag it, sign it, and generate an SBOM for it. That image goes into production with signature validation. Even if I've included 100 jar files in there, I should know exactly which ones I have. I can upload the SBOM to my DependencyTrack[1] instance so that, over time, I'll know if any dependency develops a vulnerability I'm not aware of.
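
The upload step is a single call against the DependencyTrack REST API, from memory (so check their docs); URL, key, and project names here are placeholders:

    import base64, requests

    # Sketch: push a BOM to Dependency-Track. PUT /api/v1/bom takes the
    # base64-encoded BOM plus project coordinates; X-Api-Key authenticates.
    with open("sbom.json", "rb") as f:
        bom = base64.b64encode(f.read()).decode()

    requests.put(
        "https://dtrack.example.com/api/v1/bom",
        headers={"X-Api-Key": "REDACTED"},
        json={"projectName": "my-service",
              "projectVersion": "1.4.2",
              "autoCreate": True,
              "bom": bom},
    ).raise_for_status()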

What doesn't work in that scenario? What scenarios can't conform to that one?

[1] https://dependencytrack.org


Certainly 'will' is a huge issue, the biggest IMO. But in my experience, and in the recent interviews, people just don't know. They don't know where their containers are deployed; they know what's in their registries, of course, but can't trace it all the way through. What I have also seen is people using deploy-optimisation tools that dynamically pull from multiple code repos and container registries and orchestrate highly optimised global deploys. I definitely don't disagree that it can be done; it just usually isn't.


Right, that makes sense. In that instance, they need to be enforcing some (internal) standards. E.g. "everything gets deployed on monitored k8s so I can pull deployment info from it and find out what I have deployed".

But then, the issue you're now describing doesn't seem to have anything to do with SBOMs being deficient in any way, or lockfiles being bad. How are you connecting those things?


100%. Sounds like a great OWASP project to capture those best practices, doesn't it? Want to volunteer? ;-)


I also strongly disagree with his take on dependency pinning:

> Solutions to this problem are running SCA combined with automatic updates at every build and hot patching in production. What is not a solution to this is dependency pinning, a technique getting widespread adoption but very dangerous for security in the real world

Not pinning versions buys you a small benefit (automatically picking up the latest compatible version of a dependency) at the risk of enabling a whole new layer of supply chain attacks. I don't want npm to automatically download a dependency version that was published 3 minutes ago and hasn't been vetted or tested by anyone.
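
If you must float versions, the minimum I'd want is a cooldown check before trusting a fresh release. A sketch against the public npm registry metadata, whose "time" map gives a publish timestamp per version; the 7-day threshold is arbitrary:

    import datetime, json, sys, urllib.request

    # Sketch: refuse a dependency version published too recently. The npm
    # registry's package metadata has a "time" map of version -> timestamp.
    def days_since_publish(package: str, version: str) -> float:
        with urllib.request.urlopen(f"https://registry.npmjs.org/{package}") as r:
            meta = json.load(r)
        published = datetime.datetime.fromisoformat(
            meta["time"][version].replace("Z", "+00:00"))
        age = datetime.datetime.now(datetime.timezone.utc) - published
        return age.total_seconds() / 86400

    pkg, ver = sys.argv[1], sys.argv[2]
    if days_since_publish(pkg, ver) < 7:  # arbitrary cooldown
        sys.exit(f"{pkg}@{ver} is under a week old; not trusting it yet")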


Point taken, and there are two schools of thought. The way I think about it, it's a double-edged sword: if you have already trusted a dependency, then trusting an update is a risk, but a smaller one than running known-vulnerable versions. I rarely see developers actually looking at the code of new versions when upgrading. My take on all this is a pessimistic one, based on what I have seen rather than on best practices. If the teams I saw actually reviewed updates, I would fall on the pinned side of the sword.


You might trust a dependency from a security perspective, but it might still have accidentally introduced a breaking change in a non-major version bump. It seems like a recipe for disaster to deploy with different versions of dependencies (which might pull in yet other versions of transitive dependencies) and assume it'll all work.


> That also means that unless you build a project on two identical hosts then it is unlikely you will get the same SBOMs.

I don't understand why the author thinks this is such an insurmountable issue.

Reproducible builds are possible with a little care: large parts of Debian are built reproducibly, and there are tools at https://reproducible-builds.org/

Bazel and similar build systems support reproducible builds, and thus identical SBOMs.
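
Even without Bazel, a crude smoke test catches a lot: build the same thing twice and compare hashes. A sketch with placeholder build command and artifact path; reproducible-builds.org documents the real pitfalls (timestamps, locales, file ordering):

    import hashlib, subprocess

    # Crude reproducibility smoke test: two clean builds should be
    # bit-identical. "make build" and "dist/app" are placeholders.
    def build_and_hash() -> str:
        subprocess.run(["make", "clean"], check=True)
        subprocess.run(["make", "build"], check=True)
        with open("dist/app", "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    first, second = build_and_hash(), build_and_hash()
    print("reproducible" if first == second else f"differs: {first} vs {second}")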


I was referring to things like Maven, webpack, and npm; I should have made that clearer. What I have seen is that, in general, the supply chain in more mature tech, and certainly in OS toolchains, doesn't have that issue, or is at least far more aware of it. Bazel is great and all, but dev teams building business apps in a DevOps-y way want plug and play, and any friction to velocity is rarely a trade-off they even discuss.


If you use Nix on the host OS you can probably make the OS as reproducible as the application layer as well, which is interesting.



