Published versions are immutable; you can only submit a fix as a new patch with a new version number. It's common for dependencies to be pinned to a minor version (getting patches automatically), but if you use a package-lock.json, as is the default and best practice, I believe you're guarded from any surprise patches. You would discover a change like the one in the OP when you manually ran `npm update` on your dev machine, so it should get nowhere near production.
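To illustrate the difference (the package name and version numbers here are just examples): the semver range in package.json describes what you'd accept, while package-lock.json records the exact resolved version and integrity hash that `npm ci` will faithfully reinstall:

```
// package.json — a caret range accepts any compatible 4.x.y >= 4.17.0
"dependencies": {
  "lodash": "^4.17.0"
}

// package-lock.json — the exact pinned resolution
"node_modules/lodash": {
  "version": "4.17.21",
  "integrity": "sha512-..."
}
```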
Moreover, anyone with malicious intent (including the authors of packages your dependencies themselves depend on) can make the whole process much less noticeable by relying on code fetched from URLs at runtime and executed, which may itself be linked to other dynamic dependencies, enabling all sorts of logic/time-bomb or RCE attacks.
That kind of behavior would be practically impossible to code-review across lots of packages that rely on other dependencies.
Maybe, for the sake of security, we need a different approach that "sandboxes" an external package by default somehow, while keeping breaking changes to a minimum.
This is what the folks working on WASM/WASI and related projects are trying to achieve.
The ecosystem isn't yet fleshed out enough to be a drop-in replacement for the NodeJS way of doing things, but you can already pull untrusted code into your application, explicitly provide it with the IO and other capabilities it needs to get its job done (which is usually nothing for small packages, so not much bureaucracy is required in most cases), and then that untrusted code can't cause much damage beyond burning some extra CPU cycles.
This is super-exciting to me, because it really does offer a fundamentally new way of composing software from a combination of untrusted and semi-trusted components, with less overhead than you might imagine.
I've been following the progress of various implementation and standardization projects in the WASM/WASI space, and 2022 is looking like it might be the year where a lot of it starts coming together in a way that makes it usable by a much broader audience.
Nothing could be further from the truth. Capability-secure Java code just looks like Java with no surprises. The only difference is that ambient authority has been removed, which means no code can just call new File("some_file.txt") and amplify a string that conveys no permissions into a file object conveying loads of permissions; you have to be explicitly given a Directory object that already conveys permission to a specific directory, and on which you call directory.createFile("some_file.txt").
Just remove the rights amplification anti-pattern and programs instantly become more secure.
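A minimal sketch of that pattern in plain Java (the `Directory` class here is hypothetical, not from any real library):

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

// Hypothetical sketch of the capability pattern described above: the
// Directory object *is* the permission. Nothing can touch the filesystem
// without being handed one.
final class Directory {
    private final Path root;

    Directory(Path root) {
        this.root = root.toAbsolutePath().normalize();
    }

    // Creates a file under this directory only; paths that resolve
    // outside it are rejected, so a string cannot amplify rights.
    Path createFile(String name) throws IOException {
        Path p = root.resolve(name).normalize();
        if (!p.startsWith(root)) {
            throw new SecurityException("path escapes granted directory");
        }
        return Files.createFile(p);
    }
}

public class CapabilityDemo {
    public static void main(String[] args) throws IOException {
        // The explicit grant: top-level code decides which directory to hand out.
        Directory dir = new Directory(Files.createTempDirectory("capdemo"));

        Path ok = dir.createFile("some_file.txt");   // allowed
        System.out.println(Files.exists(ok));        // prints "true"

        try {
            dir.createFile("../escape.txt");         // rejected
        } catch (SecurityException e) {
            System.out.println("denied");            // prints "denied"
        }
    }
}
```

The point of the sketch: the only security-relevant decision happens at the one place where a `Directory` is constructed and passed in; everything downstream is ordinary-looking code.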
Which GraalVM can do much better as well, even optimizing across language boundaries (it can effectively inline a C FFI call into whatever language made it).
Java's security manager blocks access to existing APIs that are already linked. The new approach relies on explicitly making only specific APIs available.
This is on a different level than pledge. pledge applies to the whole process. This sandboxing, as far as I understand, would restrict syscall access to individual functions and modules inside a process.
I think this is the way to go. It reminds me of when server components were scattered all over the place creating a mess, and now we have Docker and Kubernetes. From what I can see, this would be a more lightweight version of containerization: not for VMs/services but for each JS package.
Or only if you're running on a server that looks like it's Amazon's, or 1% of the time, or once a certain date has passed. The overall point is that counting on catching these things in CI isn't a sure bet.
I mean... that's true if you ever use any code that you haven't read through line-by-line. That's not specific to package managers in general, much less NPM, so I think it's out of scope for this discussion.
Not really. I can be reasonably sure that end-user applications I download for a desktop are limited in the damage they can do (even more so for iOS or Android). This isn't something that happens often with programming libraries, but there's no inherent reason they can't be built in a way that they run in a rights-limited environment.
A fine-grained permissions system could fix this by disallowing raw shell execs, or at least bringing immediate attention to the places (in the code) they are used.
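As a sketch of what that could look like at the library level (all names here are hypothetical): shell access becomes an explicit capability object that must be threaded through function signatures, so every exec site is both gated and trivial to grep for in review.

```java
import java.io.IOException;
import java.util.List;

// Hypothetical sketch: raw ProcessBuilder/Runtime.exec use is banned by
// convention (or a linter), and shell access is instead a capability that
// must be passed in explicitly. Every call site then names ShellCap,
// bringing immediate attention to the places where execs happen.
final class ShellCap {
    private ShellCap() {}

    // Only the application's entry point should call this.
    static ShellCap grant() {
        return new ShellCap();
    }

    Process run(List<String> command) throws IOException {
        return new ProcessBuilder(command).start();
    }
}

public class ExecDemo {
    // A library function that needs exec must say so in its signature.
    static String echo(ShellCap shell, String msg) throws IOException, InterruptedException {
        Process p = shell.run(List.of("echo", msg));
        p.waitFor();
        return new String(p.getInputStream().readAllBytes()).trim();
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        ShellCap shell = ShellCap.grant();  // the one explicit, auditable grant
        System.out.println(echo(shell, "hello"));
    }
}
```

A dependency that suddenly starts demanding a `ShellCap` parameter in a patch release would stand out immediately, which is exactly the kind of attention the comment above is asking for.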
Interesting. Do those reviews apply to packages as a whole, or different versions of a specific package? Edit: Yes, the reviews can apply to specific versions.
I'm personally a fan of using Debian/Ubuntu packages, because generally code goes through a human before it gets published. That human has already been trusted by the Debian or Ubuntu organization.
This aims to explicitly solve the problem of "okay, but most maintainers just skim the code at best and spend time on packaging", plus it aims to parallelize it.
And while some packages have been distro-ized (e.g. a lot of old Perl packages, a lot of Python packages, some Java/Node packages), I have no idea if any Rust package is packaged separately by a distro. (Since Rust is statically linked, there's no real reason to package the libraries, except maybe as source packages. But crates.io is already immutable.)