Fifteen years ago the principal security worry in running a web application was that some "script kiddie" would break in and deface your homepage.
Today the threats are much more real. Ransomware, cryptocurrency miners, even state actors.
An enormous point of weakness in modern software is the supply chain - many projects now have thousands of nested dependencies.
Most of those dependencies represent at least one human being who can be threatened with a crowbar and forced to ship an exploit, which can then infect vast numbers of production applications.
Ultimately the system will need to support signatures which represent not just "I made this" but "I reviewed this", and people will need to set policies for whose reviews they trust, and how many reviews they require for each component.
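As a sketch of what such a policy check might look like (the `Signature` type, the claim names, and the trust list are all made up for illustration here, not any real tool's API):

```python
# Hypothetical sketch: enforcing an "N reviews from trusted reviewers" policy.
from dataclasses import dataclass

@dataclass(frozen=True)
class Signature:
    signer: str   # identity of the signer
    claim: str    # "authored" ("I made this") or "reviewed" ("I reviewed this")

def meets_policy(sigs, trusted_reviewers, min_reviews):
    """True if at least min_reviews distinct trusted reviewers signed 'reviewed'."""
    reviewers = {s.signer for s in sigs
                 if s.claim == "reviewed" and s.signer in trusted_reviewers}
    return len(reviewers) >= min_reviews

sigs = [Signature("alice", "authored"),
        Signature("bob", "reviewed"),
        Signature("carol", "reviewed")]
print(meets_policy(sigs, {"bob", "carol", "dave"}, 2))  # True
```

The interesting design work is all in who goes into `trusted_reviewers` and what `min_reviews` should be per component, which is exactly the policy question above.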
If reviewers can build up a reputation anonymously, that will make it harder to find the human who needs to be crowbarred, but I'm not sure how you prove you are a good reviewer in a way which isn't gameable.
Alternatively, the reviewers could be well known teams in multiple jurisdictions, such that an attacker would need to buy multiple crowbars and multiple plane tickets.
Those are interesting points / possible approaches; however, is there any indication that this particular project enables any of that?
This seems focused on signing binaries / build artifacts.
IMHO, if your threat model is a "crowbarred maintainer forced to insert a backdoor", you probably don't trust sources, let alone binaries, and need to vet your dependency sources and then compile your own binaries from them.
Many open source dependencies will not have a jurisdictionally diverse review team, or any review team at all (single maintainer).
With reproducible builds, the difference between signing a binary and signing the source code from which it is built should be meaningless.
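To make that concrete, here's a toy illustration of why reproducible builds collapse the distinction: a signature over the artifact digest can be re-checked by anyone who rebuilds from the same source. The byte strings here just stand in for real build outputs.

```python
# Toy illustration: with reproducible builds, a signature over the artifact
# digest is effectively a signature over the source, because anyone can
# rebuild from that source and check the digest matches.
import hashlib

def digest(artifact: bytes) -> str:
    return hashlib.sha256(artifact).hexdigest()

signed_digest = digest(b"\x7fELF...official build")  # what the maintainer signed
my_rebuild = b"\x7fELF...official build"             # rebuilt locally from the same source

# Rebuild matches, so the binary signature vouches for the source too.
assert digest(my_rebuild) == signed_digest
```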
I agree that the threat model should include the threat of untrustworthy source code, because we want the countermeasures to work equally well against backdoors, "bugdoors", and genuine bugs.
I suspect that for a lot of projects reproducible builds are themselves a bit of a hurdle, and that they aren't being verified in the rarer cases where they already exist, but the point about reproducible + signed builds as indirect source-signing stands.
You have to be there in person for that attack, which is a much higher cost than taking over someone's account remotely from a different country. It also carries a much higher risk of getting caught and going to jail.
OK, but the premise of in-person physical harm comes from the parent comment, not mine:
"Most of those dependencies represent at least one human being who can be threatened with a crowbar and forced to ship an exploit, which can then infect vast numbers of production applications."
This is where the transparency log comes in: certificate signing is openly auditable. It's the same as certificate transparency, where malicious or mistakenly issued certificates are openly auditable (instead of being a transaction that occurs behind closed doors at a commercial CA).
Also, in future implementations it would be possible for security-conscious publishers to attach cryptographic attestations, produced by trusted third-party hardware manufacturers, to their signatures, demonstrating that the private key was generated on the hardware device and therefore could not be under the control of the service operators. Security-conscious clients could start to require this for the dependencies they pull in.
"Today the threats are much more real. Ransomware, cryptocurrency miners, even state actors."

"An enormous point of weakness in modern software is the supply chain - many projects now have thousands of nested dependencies."

"Most of those dependencies represent at least one human being who can be threatened with a crowbar and forced to ship an exploit, which can then infect vast numbers of production applications."
So yes, for me this is a very real concern!