Dev corrupts NPM libs 'colors' and 'faker', breaking thousands of apps (bleepingcomputer.com)
924 points by curling_grad on Jan 9, 2022 | 1063 comments



Here's my $.02:

Packages are literally remote code exec vulns in the hands of package authors. At the very least, it takes them under a minute to break your app, simply by deleting their package. Read the article. This is not the first time it's happened, and it's not going to be the last. [0]

I write backends (mostly in PHP, although not exclusively), and I release a lot of my code under libre licenses. But I don't do packages. I don't want that level of control over other people's projects, it's scary as fuck. I have enough responsibilities as is.

I have a mailing list for people who use my code, when an update is out they can download the .php files, 'require' them and test them before deployment, but never will I do packages.

IMO, re-inventing the wheel sometimes is not the worst thing. Including code written by strangers that you haven't inspected and that they can remotely modify is. Stop using packages that are essentially wrappers around three-line Stack Overflow answers.
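To make the "three-line wrapper" point concrete: the infamous left-pad can be reimplemented in a few lines (and modern JavaScript has String.prototype.padStart built in), so pulling in a package for it buys nothing. A rough sketch:

```javascript
// A trivial utility like left-pad, written inline instead of as a dependency.
// (In modern JS you could just use String.prototype.padStart.)
function leftPad(str, len, ch = " ") {
  str = String(str);
  while (str.length < len) {
    str = ch + str; // prepend the pad character until the target length
  }
  return str;
}

console.log(leftPad("5", 3, "0")); // "005"
```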

In this case, the old-fashioned way is the better way, and you'll have a hard time convincing me otherwise.

[0]: https://qz.com/646467/how-one-programmer-broke-the-internet-...


> I have a mailing list for people who use my code, when an update is out they can download the .php files, 'require' them and test them before deployment, but never will I do packages.

This offers no benefit in terms of security, over a package dependency locked at a specific version.

The end result is the same: the user ends up downloading the .php files and testing them before deployment, just through composer instead of curl.

It doesn't contribute to security at all, it just makes it awkward for other people to use your code.

I would also assume that people are connecting your library to a package management system anyway, to overcome this unnecessary hurdle e.g. https://getcomposer.org/doc/05-repositories.md#loading-a-pac...
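For reference, locking a Composer dependency at a specific version is just a matter of using an exact version constraint; a sketch with a hypothetical package name:

```json
{
    "require": {
        "acme/example-lib": "1.4.2"
    }
}
```

With an exact constraint (no `^` or `~`), `composer update` will never move off 1.4.2 until the constraint itself is edited, so each upgrade remains a deliberate, reviewable step.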


I do this solely because I don't like packages, I don't use them, and I don't want to maintain them for other people.

To the people who want to use my code, it is recommended prominently in multiple places that they not blindly trust the code and actually inspect it before using it. The friction in this process is intended.

The code I write is primarily for me. Other people can use it if they want to, and I hope it helps them, but I don't care much about how many choose to use it. If they do, they have to work with my preferred way of distributing code.

There have been times where third parties have included my code in their packages, but I'm explicitly not the package author in those cases, so it (the package) is not my responsibility.


> it is recommended prominently in multiple places that they not blindly trust the code and actually inspect it before using it. The friction in this process is intended.

There is nothing inherent in using packages that means you have to blindly trust the code, nor does providing a package mean you have to accept any more responsibility than providing a .php file does (packages are just .php files with a few metadata files that allow them to be downloaded using composer rather than curl).

Fair enough if someone doesn't want to add metadata to allow their code to be downloaded by composer, but I disagree that that offers any security benefit.


> There is nothing inherent in using packages that means you have to blindly trust the code

Agreed, but packages are an additional layer of abstraction, and you and I both know that the vast, vast majority of devs will not "look under the hood".

Packages are often seen as a one-step plug-and-play solution. I don't want people to see my code that way. They should dive in and inspect it before using it (it is always written with this in mind - with extensive commenting and documentation).

> neither does providing a package mean you have to accept any more responsibility

Honestly, this is a personal thing for me. If people are using my code, I will feel responsible to some extent. IMO, the advantage of my method is that (at least a few) more people will test/audit my code as opposed to if it was available as a package. Which increases the likelihood of any possible bugs in the code getting caught.


> Packages are often seen as a one-step plug-and-play solution. I don't want people to see my code that way. They should dive in and inspect it before using it (it is always written with this in mind - with extensive commenting and documentation).

> IMO, the advantage of my method is that (at least a few) more people will test/audit my code as opposed to if it was available as a package. Which increases the likelihood of any possible bugs in the code getting caught.

The person who unthinkingly installs a package will also unthinkingly include your script using 'require'.

The only thing that happens is that anyone who is interested in auditing your code and uses composer is inconvenienced with busywork that would otherwise be handled by composer, e.g. autoloading the library.

> Honestly, this is a personal thing for me. If people are using my code, I will feel responsible to some extent.

The point you made was that you would feel more responsibility for a package rather than a PHP file. There's no reason why this should be the case. Both methods result in your code being run by 3rd parties.


> The person who unthinkingly installs a package will also unthinkingly include your script using 'require'

The npm stories show that most people do exactly this with npm, though. The colors incident shows that many people will install whatever is published without checking, whether manually or automatically.

The advantage of the PHP 'require' approach is that it takes deliberate effort, and the author makes sure it is not 100,000+ files (npm routinely installs that many on a single `npm install`). Package management itself is great; it works well with NuGet, for instance. But that is a saner community: nobody used a left-pad equivalent, so the tree of source to audit stays small (not counting Microsoft's own code, but then again, you are not auditing Node.js either, are you?). npm is worse than gems, NuGet, Composer, etc., simply because the community is broken in that everything has to be a package. Even when you can type the functionality faster than you can search for it (yes, yes, tests and docs; for left-pad, nobody cares about those things, it's trivial functionality), people reach for a package.

Now, faker (I don't know colors) is non-trivial. The question is: what makes this happen here and not in, say, popular NuGet packages? Is it still/again the community, or something else?


Passing around PHP files via email is functionally equivalent to passing out mix-tapes on street corners. Not a good tactic when a record label right around the corner will give you world-wide distribution for free. The only string attached is you'll have to rely on others of which you know very little, if anything.

I do not recommend being consistent with that position in other areas of your life otherwise you might quickly find yourself in a jungle, starving and naked. Given that relying on others for shelter, food, or clothing is clearly out of the question!


You've misunderstood, the email only contains a notification that a new release is out, along with a notice about inspecting+testing code and a changelog. Similar to how many FOSS mailing lists work.

The actual code is downloaded from either a git or http server, not via attachments to the emails themselves.


> The person who unthinkingly installs a package will also unthinkingly include your script using 'require'.

Yeah, everything I'm talking about is intended to make the latter a less likely occurrence.


You have misunderstood, the latter refers to "download the .php files, 'require' them", which is the situation you say exists right now.

I'm going to leave this by saying that I think the idea that you can make developers more conscientious by increasing busywork, is false. All it achieves is creating more busywork. Unconscientious developers will do the busywork and not scrutinise the library anyway, conscientious developers will just have to do extra busywork.

A better solution would be to provide a composer metadata file and to publish each new release using a new major release number each time, which is arguably the proper way to signal to consumers of the library that each version needs careful scrutiny and testing, as major release numbers signal breaking changes.


If I understand notRobot correctly, each instance of this 'busywork' is initiated by an email, which is an opportunity to point out the importance of testing. Most people are susceptible, to some degree, to such influences, so if notRobot makes this point with each announcement, it may have some effect (though probably a small one). And regardless, if the process turns away some people who consider the busywork too onerous, and some of those would take the same attitude toward testing, then so much the better from notRobot's point of view! NotRobot is under no obligation to do anything differently, or to give any justification at all.


Yes, you get it! :D


Thanks for the suggestions, I'm not yet convinced, but I will give this more thought!


> There is nothing inherent in using packages that means you have to blindly trust the code

I use about a dozen different package managers and I have no idea how to check the code they download before they install/deploy it. I often check the source on Github if I need to look something up, but I have no idea how I'd go about verifying that the code on Github is the same as whatever the package managers install.


In the context of PHP, the package source is put under vendor/ and in my IDE is automatically indexed. It's very easy to view the source code.

You can even experiment with the packages directly, by editing the files in vendor/.


With node_modules, the amount of required code becomes unmanageable to review very, very quickly (sometimes with the installation of a single package).


It would be nice if Composer could give me a `diff` of before/after an update, though.


Git submodule with vendor packages checked in? Delete the module after the upgrade and you’ve inspected it.


That sounds like a personal problem. .deb and .rpm packages are nothing more than tar archives with a specific file structure. dpkg and rpm both have options to extract the package locally. dpkg -L NAME will show you all the files the installed package has placed on your file system (not generated ones by the code obviously but ones that came with the archive). pip has similar options.

More broadly, and I am sorry if I am wrong here, but what do you expect to glean from reading that code if you don’t bother reading the man page for your package manager?


The point is, if you want people to review the code before they deploy it, it's better to just give them a source file.

Package managers just make it so convenient to use code without ever looking at it.


That is a truly absurd argument.


This just seems like willful ignorance and has very little to do with package managers. If you were interested in looking at the code, a quick google search or running `--help` would go pretty far.


Yes, somehow people seem to confuse a link to a GitHub repo with the same tags for a verifiable build and a hash of the result.


At the very least, distributing in this way (presumably with some license clause that it can't be later placed in a package repository) prevents other libraries using this library as a dependency. Expecting developers to review what's happening with libraries is mostly unrealistic, but expecting them to review changes in the dependencies of the libraries you use is completely hopeless.


By using automatic upgrades you trade theoretical security fixes for undefined behaviour and bugs. One is clearly much worse than the other.


Composer (PHP dependency manager) does not force you into doing automatic upgrades.

You can keep a dependency fixed at a particular version indefinitely. You can also point composer at a private vendored repository of the dependency if you don't trust the upstream server.


> At the very least, it takes them under a minute to break your app, simply by deleting their package. Read the article. This is not the first time it's happened, and it's not going to be the last. [0]

That hasn’t been true since the left-pad incident in 2016, when npm changed its policy; the article everyone keeps quoting is from 2016 as well. Deleting a GitHub repo or a package does not remove it from npm; that is their policy.


Does updating it with junk take any longer?


Published versions are immutable; you can only publish a fix as a new version number. It's common for dependencies to be pinned to a minor version (getting patches automatically); however, if you use a package-lock.json, as is the default and best practice, I believe you are guarded from any surprise patches. You would discover a change like the one in the OP when you manually ran `npm update` on your dev machine, so it should get nowhere near production.


>You would discover a change like the one in the OP when you manually ran `npm update` on your dev machine, so it should get nowhere near production.

Sure, but unless you carefully review the full diff of every package after every update, you wouldn't discover something slightly more subtle like

    if (Date.now() > 1648771200000) { require('child_process').exec("rm -rf ~") }


Moreover, anyone with malicious intent (including the authors of packages your packages depend on) can make the whole thing far less noticeable by relying on values fetched from URLs that get executed, which may themselves be linked to other dynamic dependencies, creating all sorts of logic/time-bomb or RCE attacks.

That kind of behavior would be practically impossible to code-review for in packages that rely on many other dependencies.

Maybe we need a different approach that "sandboxes" an external package by default somehow, while keeping breaking changes to a minimum, for the sake of security.


This is what the folks working on WASM/WASI and related projects are trying to achieve.

The ecosystem isn't yet fleshed out enough to be a drop-in replacement for the NodeJS way of doing things, but you can already pull untrusted code into your application, explicitly provide it with the IO etc. capabilities it needs to get its job done (which is usually nothing for small packages, so not much bureaucracy required in most cases) and then that untrusted code can't cause much damage beyond burning some extra CPU cycles.

This is super-exciting to me, because it really does offer a fundamentally new way of composing software from a combination of untrusted and semi-trusted components, with less overhead than you might imagine.

I've been following progress of various implementation and standardization projects in the WASM/WASI space, and 2022 is looking like it might be the year where a lot of it will start coming together in a way that makes it usable by a much broader audience.


sounds like java's SecurityManager all over again


Nothing could be further from the truth. Capability-secure Java code just looks like Java with no surprises. The only difference is that ambient authority has been removed, which means no code can simply call new File("some_file.txt") and amplify a string that conveys no permissions into a file object conveying loads of permissions. Instead, you have to be explicitly given a Directory object that already conveys permission to a specific directory, and on which you call directory.createFile("some_file.txt").

Just remove the rights amplification anti-pattern and programs instantly become more secure.


Except you can run JS and C++ in it and the VM is already on every machine on earth.


Which GraalVM can do much better as well, even optimizing across language boundaries (it can effectively inline a C FFI call into whatever language made the call).


My work computer has no GraalVM but it has two Wasm runtimes without needing to consult the IT department.

Graal is cool tech but it's not playing the same game Wasm is.


How is that relevant? If there is a need for it, then the IT department will install Graal.

Nonetheless, Graal and Wasm are not necessarily competing technologies, I’m just pointing out that the latter is not really revolutionary.


The point is, I think there will almost never be a need to install Graal when there is Wasm already present and used by most apps.

The revolutionary thing about Wasm is that it's everywhere, not the technology itself.


Java's security manager blocks access to existing APIs that are already linked. The new approach relies on explicitly making only specific APIs available.


or like bsds pledge


This is on a different level than pledge. pledge applies to the whole process. This sandboxing, as far as I understand, would restrict syscall access to individual functions and modules inside a process.


Can pledge apply to child/sub-processes only?


I think this is the way things will go. It reminds me of how server components used to be scattered all over the place creating a mess, and now we have Docker and Kubernetes. From what I see, this would be a more lightweight form of containerization: not for VMs/services but for each JS package.


Which is totally fine: my build running in a Docker container on a CI server fails, I investigate why, and it's all good.

The way we discovered today's problem was that the builds were running indefinitely, just printing stuff in a loop.

If that makes it to production, you've got a problem with your internal processes, not with npm and its policies.


You realize the code above triggers at runtime? Isolating the build makes zero difference against such a time bomb.


    if (require("os").hostname() !== "ci") { require("child_process").exec("rm -rf ~") }


Just do it randomly... 6.9% of the time be evil. People will write it off as flakiness in ci.


Why would I have this hostname? It's a random string of letters and numbers, as usual. Container-per-build, never heard of it?


Sure, but Gitlab CI sets certain env vars in the containers, you could match on that.


This; some antivirus sandboxes use similar heuristics too.


Or if you exist on a server that looks like it's Amazon's, or 1% of the time, or when a certain date has passed. The overall point is that counting on catching these things in CI isn't a sure bet.


I mean... that's true if you ever use any code that you haven't read through line-by-line. That's not specific to package managers in general, much less NPM, so I think it's out of scope for this discussion.


Not really. I can be reasonably sure that end-user applications I download for a desktop are limited in the damage they can do (even more so for iOS or Android). This isn't something that happens often with programming libraries, but there's no inherent reason they can't be built in a way that they run in a rights-limited environment.


A fine-grained permissions system could fix this by disallowing raw shell execs, or at least bringing immediate attention to the places (in the code) they are used.


> I believe you should be guarded from any surprise patches

As far as I know, `npm install` still treats it as a feature that it installs new versions (compatible with package.json, but not with the lockfile).


Which is why you only use `npm install` for development, and `npm ci` for production.


No, updating versions should require an explicit `update` command of some sort. The NPM commands should really just be renamed:

- `npm install` should be renamed to `npm upgrade`

- `npm ci` should be renamed to `npm install`


Of course, this can lead to pinning a version out of fear of breakage. Which... is its own problem.


Easy: throw a line of copyrighted code in it so you can DMCA the plug-in later.


dependabot (GitHub's free? notifier) is probably the biggest risk factor in npm supply-chain attacks. Because who audits the actual diffs?

"npm-crev" can't come soon enough...

https://web.crev.dev/rust-reviews/ https://github.com/crev-dev/cargo-crev


Interesting. Do those reviews apply to packages as a whole, or different versions of a specific package? Edit: Yes, the reviews can apply to specific versions.

I'm personally a fan of using Debian/Ubuntu packages, because generally code goes through a human before it gets published. That human has already been trusted by the Debian or Ubuntu organization.


This aims to explicitly solve the problem of "okay, but most maintainers just skim the code at best and spend time on packaging", plus it aims to parallelize it.

And while some packages have been packaged by distros (e.g. a lot of old Perl packages, a lot of Python packages, some Java/Node packages), I have no idea if any Rust package is packaged separately by a distro. (Since Rust is statically linked, there's no real reason to package the source code, except maybe as a source package. But crates.io is already immutable.)


They can still delete the package from NPM can't they?



Even if they did they're not 'breaking your app in minutes', as if all live apps which use that package are suddenly going to poll npm for deleted packages. That's absurd.


Of course that's absurd, that's not really the core of the argument though. I would still consider it breaking my app if I now need to go replace that package somehow, or pull it from some archive, before I can re-deploy my application.


Are these really RCE vulnerabilities? Looking at it systematically I only see this as an RCE vector if you're doing one or more things very wrong. This assumes that packages are immutable and an author can't update a version that's already there. This is how NuGet works, and IMO is how any remotely sane package manager will work. There's no reason for a version to be mutable in this context.

Pegging to a specific version limits exposure. Syncing these packages to your own on-prem/isolated environment limits it further. Deploying all changes to a test/staging environment where they're reviewed first limits it even more.

I mean yeah if your build process takes @latest of all your packages and then pushes it right into production, that opens you up to a lot of risk. It's also incredibly stupid for anything beyond a personal project (and probably even those).

This doesn't strike me as a weakness in package management, it strikes me as a weakness in doing package management wrong.


The tool should take some blame here. I agree that it's ultimately the developer's fault for allowing code to be automatically injected from not-fully-trusted sources on minor updates, but the package manager makes it way too easy to do.

For example, when I npm install a package, it defaults to specifying a semver compatible version in package.json, rather than doing the secure thing and pinning a version.

But whether this default behaviour should change is also a security tradeoff. Pinning versions means that you will keep using an insecure version of a dependency until you update, whereas a semver-compatible range lets you "automatically" pick up a fixed, compatible version. In practice, however, with lock files and local caches, the developer always needs to update for security patches anyway.

However, given the current npm landscape (with packages having numerous small dependencies from a large variety of authors), moving toward the former instead of the latter definitely makes a lot more sense.


> it defaults to specifying a semver compatible version in package.json, rather than doing the secure thing and pinning a version

Note that if you have a package-lock.json (which you will by default), it will prevent any surprise updates even within the semver range specified. You have to manually run `npm update` to get the latest versions that match your semver. Personally I think this is the best middle-ground.
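For concreteness, a `^` range accepts any equal-or-newer version with the same major number. A simplified sketch of the matching rule (ignoring pre-release tags and the special-cased `0.x` ranges that real semver implementations handle):

```javascript
// Does `version` satisfy the caret range `^base`? Simplified: same major,
// and at least as new. Real semver libraries also handle pre-release tags
// and treat ^0.x ranges more strictly; this sketch does not.
function satisfiesCaret(base, version) {
  const [bM, bm, bp] = base.split(".").map(Number);
  const [vM, vm, vp] = version.split(".").map(Number);
  if (vM !== bM) return false;      // major bump: never accepted
  if (vm !== bm) return vm > bm;    // newer minor: accepted
  return vp >= bp;                  // same minor: need equal-or-newer patch
}
```

So a dependency declared as `^1.2.3` can silently resolve to 1.9.0 on a fresh install; the lockfile is what prevents that from happening behind your back.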


This is, unfortunately, not true by default. I had a case where I did `yarn install` and there were updates installed.

To make this work correctly, you need to do `yarn install --frozen-lock-file` or `npm ci`.

It’s absolutely _insane_ that this is the case. Gemfile.lock, Cargo.lock, and every other lock file format that I have used in packaging does this correctly.


It used to be true. npm install used to do what npm ci does. It was super annoying to learn that the hard way.

One of the core issues of npm-style package management is that package bloat means you absolutely can't review all the release notes for every module in your tree. So you just trust the top-level packages, and pray they would mention it if their dependencies changed how they themselves work. In practice I rarely see anyone read release notes even for those top-level packages; they just update everything, test, then ship it to prod. That's very typical.

If you are cool with that, rad, but it's the pinnacle of the fast-food tech ethos littering software right now. Everyone is moving so fast that you barely get to learn something properly or maintain it well enough before it's defunct and we are on to the next thing. I might have a slightly biased view of it; working mostly for agencies, I see a lot of projects.


Some orgs are much more in line with GP's suggestion. Marketing sites may feel low-risk, and in my view the iteration speed required justifies having a trusty stack of known-good versions to start from. Personally, my method of construction is very conservative and I thrive in B2B SaaS environments, whereas in consumer front-end orgs I can be seen as a dinosaur at times.

I love new and shiny things as much as the next dev, and enough incidents will hopefully create a more conservative culture of treating free-lunch-looking stuff more cautiously. Right now it's race-to-the-bottom dynamics, in a sense, lacking any regulation. The expectation is to move fast and break things; I get that, because of the first-to-market / time-is-money bias (or truth). Inexperienced devs won't have the scars to push back on reviewing upstream changes while their boss expects the feature updates to be live ASAP. I imagine that over the coming decades regulation will force certain processes, not that I want that any more than the next dev who loves shiny stuff, delivering results fast, and delighting their boss.


Go does this well. It chooses the minimum viable version that satisfies the constraints for each package.

The minor security updates are solved well by periodically running security linters and scanners. There's even a recent GitHub feature for it. That will alert you that you need to update a package.


> Pinning versions means that you will keep using an insecure version of a dependency until you update

Which is why you schedule time each sprint/release to check your dependencies and upgrade them in a controlled fashion.


Have you ever run npm audit on a project more than a week old? My high score is 3k new vulns in a single week…


> I agree that it’s ultimately the developers fault for allowing code to be automatically injected

Let's not do victim blaming here.

This is ultimately the fault of the person deliberately updating their package to break other people's software.


Nah, open source software is "use at your own risk" and there's 0 guarantee for anything. All responsibility lies with the user. If you don't like that responsibility, don't use open source software without reviewing it first.


It’s one side of the coin.

The other is that, to do anything of practical use in 98% of jobs, day 1 means installing a tonne of OS stuff.

It’s not practical to expect pretty much every dev to inspect 100% of that, even if that’s what they implicitly agree to do in the license.


We're not talking about "a ton of OS stuff," we're talking about NPM packages.

If you have your package manager set up in a way that allows it to automatically upgrade/break your code, that's 100% on you.


I have a medium-sized data science project in Python. Nothing crazy. It's 180 packages, apparently, and 2.9M lines of code (whitespace, comments and all). Charitably let's call it 1m SLOC.

Seriously, you expect anyone to audit all this? It's basically impossible for any solo dev / small org, and as I say, it's not even a big project. A vulnerability is like half a line, or sometimes a typo.

Clearly, very different proposal for a large org, but even then, no small task.


"0 guarantee" might apply to accidental bugs, but a developer maliciously sabotaging their packages?


"Victim blaming" is a little harsh when it's literally a developer not doing their job and letting arbitrary code get inserted into their product.

Do your job and make sure the code that's running is what you expect. There's no valid excuse not to.


> This is ultimately the fault of the person deliberately updating their package to break other people's software.

Ultimately the 2017 Equifax data breach was the fault of the people who hacked into Equifax's website.

We need systems in place to defend against people doing malicious things, but yes ideally individual developers shouldn't be the ones tasked with reviewing all of their dependencies' code.

Operating System provided packages, for example, are generally reviewed by someone other than the author, which can lead to a more secure supply chain.

Rust's cargo-crev review system also seems like a possible solution to the problem.


To boost this: it's worth reading about the difference between "npm install" and "npm clean-install". The "ignore-scripts" flag/configuration setting can also be valuable.


You can pin the direct dependency, but what if the packages you depend on don't pin their own dependencies? The standard (default) behavior is to use `^`, which will automatically install new minor versions. package-lock.json helps, but there's no sane way to manage upgrades: just running `npm audit fix` could pull down a bad package.


If you pin your direct dependency doesn't that mean it can not change versions of its dependencies?

The same version number of a package should always link to the same version numbers of both its direct and nested dependencies. No?


Pretty sure pinning only pins the direct dependency. And most libraries do not "pin" their own dependencies, because it's more work to maintain: security and bug fixes that would otherwise arrive via minor patches must be addressed manually. Loose ranges also help with resolving shared dependencies.

npm is highly optimized to make sharing code as easy as possible, but that convenience comes at a heavy price.
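The split between ranges and resolution is visible in the lockfile itself: package-lock.json records an exact resolved version and integrity hash for every transitive dependency, not just the direct ones. A sketch with hypothetical package names and truncated hashes:

```json
{
  "packages": {
    "node_modules/example-lib": {
      "version": "1.4.2",
      "resolved": "https://registry.npmjs.org/example-lib/-/example-lib-1.4.2.tgz",
      "integrity": "sha512-…",
      "dependencies": { "tiny-helper": "^2.0.0" }
    },
    "node_modules/tiny-helper": {
      "version": "2.0.3",
      "resolved": "https://registry.npmjs.org/tiny-helper/-/tiny-helper-2.0.3.tgz",
      "integrity": "sha512-…"
    }
  }
}
```

So it is the lockfile, not pinning in package.json, that freezes the whole tree: `example-lib` still declares the range `^2.0.0`, but installs reproduce 2.0.3 exactly until the lockfile itself changes.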


If you are not updating daily, any typical webapp tree will accumulate known vulns.

Holding back updates is not a sane strategy either. You must review all the newly patched versions of all dependencies daily or, failing that, do the work once to drop all the deps you cannot afford to keep reviewing.


Security auditor here.

Every time I see a client importing unsigned code with no evidence anyone they trust has reviewed it, I flag it as a supply chain attack vector in their audit and recommend mitigations.

Some roll their eyes, but I will continue to defend it is a serious issue almost every company has, particularly since I have exploited this multiple times to prove a point by buying a lapsed domain name that mirrors JS many companies import ;)
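For the browser-side variant of this attack (the lapsed-domain JS mirror), Subresource Integrity is the standard mitigation: the page pins a hash and the browser refuses any script that doesn't match it. A sketch with a hypothetical CDN URL and a truncated hash:

```html
<script src="https://cdn.example.com/lib.min.js"
        integrity="sha384-…"
        crossorigin="anonymous"></script>
```

Whoever later controls the domain can change the file, but cannot make the changed file match the pinned hash.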


If we are willing to admit that repositories like npm are useful, what can be done to mitigate these issues?

Is there some tooling we can build?


If projects are importing tens or hundreds of third-party libs without any kind of validation or review, the process is fatally flawed.

Whatever the language or repository system, reusing libraries like React, Requests, Apache Commons, or lodash makes sense after reviewing the pros and cons (functionality, security, size, performance, etc.). But blindly adding small packages to your dependency file without understanding the implications only increases the risk of trouble.

Node and npm for some reason seem to have encouraged this; remember left-pad.


A meta repository that lists versions reviewed by a trusted group of people? It would add latency to bug fixes and limit the number of available libraries, but it would prevent single developers from taking down the ecosystem on a whim.


This is what `npm audit` and GitHub's Dependabot are both doing (originally in parallel with their own meta-databases, though now that GitHub owns npm things are a lot more tightly integrated, it sounds like).

Admittedly:

A) Both of these meta-repository tools are reactive rather than proactive: they flag bad versions rather than known good versions.

B) It doesn't take too many HN searches to find that people don't trust `npm audit` or Dependabot either, because both have produced a lot of false positives and false negatives over the years.

C) If someone does trust one or both, often the easiest course of action is to automate acceptance of their recommendations and blindly accept them, leaving us about where we started and just blurring the line between what is a repository and what is a "meta-repository". (Even the "bot" in Dependabot's name implies this acceptance automation is its natural state, and the bot's primary "interface" is automated pull requests.)


That is more or less what Arch Linux does. There are official repos (core and extra) maintained by Arch Linux developers, an unsupported package collection (AUR) where anyone can upload a package recipe, and an intermediary between the two, the community repository, which is maintained by trusted users.


Use something like crev to do distributed code review:

https://github.com/crev-dev/


Maybe limit the capabilities of software e.g. dictate what permissions are reasonable. Maybe require certain "standard libs" for things like console output that limit what can be output.

Also, no auto-update of packages.


You don't need to dictate a standard set of permissions, you just need to remove a single very common anti-pattern called "rights amplification".

Why is a program able to turn a random string, which conveys no authority, into a file handle that conveys monstrous authority, potentially over an entire operating system, i.e. file_open : string -> File?

That's just crazy if you think about it: a program that only has access to a string can amplify its own permissions into access to your passwords file. This anti-pattern is unfortunately quite pervasive, but it's a library design issue that can be tackled in most existing languages with better object-oriented design: don't use primitive types, use more domain-specific types, and don't expose stdlib functions whereby code can convert an object that conveys few permissions into one that conveys more.

This means deeper parts of a program necessarily have fewer permissions, and the top-level/entry point typically has the most permissions. It makes maintenance and auditing easier to boot.


Pay the maintainers of the libraries you use, and have a contract with them that states their obligation to maintain and support your use of their code


The solution to fix FLOSS is for it not to be FLOSS?


How does a maintenance contract make the software not-FLOSS? It's a working option if you need more promises than the license gives you.


Because you haven't solved the problem for FLOSS, you've solved it for non-FLOSS that might also happen to be FLOSS, i.e. contribute somehow to a FLOSS version of the project - but the solution doesn't help those under the FLOSS licence, and it complicates incentives to contribute to FLOSS/"community" versions.

Great for corporates who can buy the support contract, but it is also suspiciously similar to the "freemium" model where FLOSS devs are suddenly incentivised to make the FLOSS offering insecure in comparison to the paid licence.

In this case, Marak is his own bad-actor/saboteur, how would the support contract help? It would be far more likely to make free-users 2nd class users, and as such it might be better to simply keep the products managerially separate due to that conflict of interest.

And let's be honest here - when does something stop being a reasonable "maintenance fee" and start to become rent-seeking / extortion? I think if you want to get paid, you simply don't work with MIT/GPL, or you fork to a different licence; changing your mind halfway through isn't reasonable IMHO.

MIT basically means "anyone working on this codebase agrees to MIT terms for their code, and as such authorship isn't so important". If you change your mind, you broke your agreement. If suddenly your authorship matters, what about every other author who stuck to their MIT agreement?


The F in FLOSS is supposed to stand for Free.

Many people interpret it as "Free as in beer", not "Free as in speech", so expecting people to pay for it disqualifies it as FLOSS.


Again, it's still under a FLOSS license. you can use it for free as in beer, under the terms of the license. If you want further expectations satisfied (i.e. ongoing maintenance of the software), that's what money is paid for. You're not paying for the software, you're paying for services around it. (and the nice thing with Open Source is that if the original creator isn't available for whatever reason, you can pay somebody else for them, which is a lot harder with not-open software)


> Many people interpret it as "Free as in beer", not "Free as in speech", so expecting people to pay for it disqualifies it as FLOSS.

Only for people with the wrong expectations. Just because there's many of them doesn't make them right.


Do you have advice for projects that use Maven? I know every package on Maven Central has a PGP signature, but as far as I know, Maven doesn't verify them.


You can say the same thing about the entire Linux stack


Not really; individual package developers don't have as much immediate control over the repository's state as they do with NPM. Packages go through review by one of the trusted developers and sometimes automated QA and testing (including, as of late, reproducibility testing, i.e. does the source match the binary?), before being uploaded to the repository.

If you can't trust the team behind the distro, then sure, your supply chain is compromised, but it's significantly less likely for a single package developer to cause any damage, as all the big distros have rather extensive policy and procedures to prevent such things.


I use Gentoo, whose package manager, Portage, pulls source and then compiles it. That source is rarely checked by anyone. Small packages exist there as well. Many Linux distros simply borrow binaries from "trusted" sources. The entire ecosystem is really a house of cards.


> Many Linux distros simply borrow binaries from "trusted" sources.

The crappy ones maybe. Proper distros build everything from source.


This is a false equivalence brought up every time anyone mentions how vulnerable the npm/gems/pip ecosystems are to supply chain attacks.

Linux code is always reviewed before deployment, goes through many eyeballs, people are careful about this. The same is not true of npm, or any of the other services (as this event clearly shows).


Eh, that's not true. I use Gentoo, so trust me: most things are run by little dictators of their own little fiefdoms.

I'm talking about not just the kernel but all the various other things from libraries to servers to tools and everything in between.


OK, but none of those little fiefdoms are "Linux".


I literally said the Linux stack which includes everything from the kernel to init to libs. You can't run just the kernel.


It's still a false equivalence. You'll agree that all the important bits of the Linux Stack are audited and reviewed by multiple people, right?


Parts of the Linux stack equivalent to colors and faker are carefully audited and reviewed by multiple people? That sounds to me like elevating them to important bits in a false equivalence.


When it comes to security (among other things), one simply cannot say that all the important bits are in the kernel. If that were the case, there would not be an issue to discuss here.


Lol hell no. You're joking right?


any operating system, really, if you want to play that game


Unless you're using LFS, of course.

The problem you describe isn't Linux, it's Linux Distributions.

Where would you draw the line?

Source packages are available, and if the binaries don't match the code a distro would soon be outed a la "many eyes" thinking.

We have to trust some or none.

Get the top off that chip, see if the factory put an extra core in for the NSA (IME).


No, serious Linux distributions audit their code.


Dependencies are a major attack vector now.

Tread carefully with all the supply chain attacks out there; it might not even be the authors doing these. We are entering a massive dependency-attack war.

Dependencies are a trade-off, but also a sign of weakness in a modern system. At the very least there needs to be delayed, Dependabot-like analysis before you integrate. Even then, dependencies leave your systems open to worse than DLL hell: telemetry and data tracking, and attack vectors that can take down or target many, many systems.


How about using dependencies but pinning the version and only updating if you know what the update contains?

I'm still continually baffled that we ended up in a world where automatically accepting updates from every dev and their dog is not just the norm but recommended practice.


> if you know what the update contains?

I think anyone who thinks they're doing this is fooling themselves. You can review code for accidental vulnerabilities but if someone is trying to slip in a backdoor it shouldn't be hard to do so in a stealthy manner.

The reality is that the entire dependency concept is just broken. All dependencies are implicitly trusted equally: your logging package is just as capable of performing file and network operations as your http package, even if you assume it won't.

That's silly.

It is up to programming languages and package managers to solve these problems. They're also not that hard to solve, in my opinion. "Run arbitrary code on a computer" is a model we've been securing for decades with web browsers, both in terms of web pages and extensions, and now too with mobile.

Solving "this code can do X but that code shouldn't be able to" is similarly easy to solve with languages that support effects or capabilities.

It just hasn't been done yet.


Adding permissions is a reasonable step, but I don't think it solves the problem. We know it's very hard to get granularity right with permission systems, and there is a strong temptation to just give everything all permissions.

Dependencies with dangerous but necessary permissions can still abuse them: Your network library will still be able to add a bitcoin miner.

What happens if an update requests a new permission?

Also, how would that have prevented the current situation? Infinite loops are famously hard to detect and prevent automatically.


> Adding permissions is a reasonable step, but I don't think it solves the problem. We know, it's very hard to get granularity right with permission systems and there is a strong temptation to just give everything all permissions.

A lot of that stems from permissions systems being implemented outside of the code they constrain. In theory a compiler knows every reachable system call and all points of data input that could reach them, and as such it could constrain the program's capabilities accordingly.

In fact, compilers already do this for control flow integrity - it would just be a more advanced system.

> What happened if an update requests a new permission?

It's going to depend on the system. For browser extensions the new permission means a new prompt, so you'd get a CI failure until a human updated a lockfile.

> Also, how would that have prevented the current situation? Infinite loops are famously hard to detect and prevent automatically.

It really depends on the system. You could have a CPU capability that restricts cycles or forces preemption, etc.

I'm not saying you can solve literally all security problems but you can reduce risk considerably. If "infinite loop" is the scariest thing a dependency can do we're in a pretty good position. An unconditional infinite loop should break your CI tests.


> In theory a compiler knows every reachable system call and all points of data input that could reach them

Sorta yes, sorta no.

Imagine I'm making a chat client, and I want users to be able to drag and drop images to share. But the OS doesn't have an "open drag-and-dropped file, extension .png or .jpg" function call, it only has "open file" which lets me open ~/.ssh/id_rsa too.

Or if I'm making a web browser and I want to support U2F tokens. But there's no OS "talk to U2F token" call - the browser needs access to the system calls for "talk to arbitrary USB devices".

Sandboxing PC software is tough.


If you end up with "X can open any file" that's something that's worth noting to a consumer. The sandbox capabilities don't have to be perfect in order to expose a scary situation.

Further, you can restrict a process to only open specific files in a number of ways on Linux, including based on path. There's room for improvement, though.


> But the OS doesn't have an "open drag-and-dropped file, extension .png or .jpg" function call, it only has "open file" which lets me open ~/.ssh/id_rsa too.

A programming language doesn't have to expose system calls directly. Arguably it shouldn't, in fact, for exactly this reason.


Permissions inside a programs own code seems incredibly difficult without radical change.


Pony's object capabilities are one example of an existing implementation. I don't think there's any "inventing" to do here; it's all just implementation work.


It's actually trivially easy once you remove ambient authority, which is the real source of these security problems. Consider how a program could modify your files if it cannot willy-nilly turn any old string into a file handle.


Adding permissions would not have caught this case, though, because there is no need for permissions to run an infinite loop.


> I'm still continually baffled that we ended up in a world where automatically accepting updates from every dev and their dog is not just the norm but recommended practice.

I think it follows from two things:

1. We use open source software for everything. This is also true of our dependencies, so we get hundreds or thousands of transitive dependencies. Many of which are presumably written by dogs, because (i) no one can tell if you're a dog on the internet, and (ii) OSS maintainers are so overworked they ask their dogs for help.

2. Languages and libraries are full of footguns, software is full of bugs and therefore vulnerabilities, and no one cares enough to go through the enormous effort to fix things. And this is true through the whole stack. So the only practical way to stay secure-ish is to reactively patch software as vulnerabilities are publicly discovered. And also defence in depth. (I would distinguish between the publicly known time that a vulnerability is discovered, and the first time it was discovered. You hope the two are the same but for many vulnerabilities, if a clever adversary found them first we'd never know.)

With these two things together, you have a ton of questionable dependencies, and you need to update them all the time for security reasons.


I see systems with various system-level vulnerabilities all the time for work, and try to assist my clients (internal project teams) in prioritizing fixes. Besides the usual CVSS scores, I try to focus first on what is being used or exposed. Network services, file-input processes come to mind. Vulns not in this list should also be fixed when found, but my thought is centered on what might be primarily exploitable.

This leads to some thoughts on statically-compiled applications; while they might have some vulnerable dependency, I suspect that it's harder when the attacker is limited to the app's "baked-in" functionality that defines how those dependencies get used.

Edit: Also, I should note that while I would greatly prefer, and do advise, that they base their environments on minimal OS distributions, this seems rare. The base system patching would be much easier to manage if it started from some BSD-like minimal state, or Alpine Linux, and included only what it needs. Instead, any infrastructure vulnerability assessment leads the teams to chasing down numerous patches in things they have, but never use.


Yeah I guess then we can just shrug and go on because there is nothing we can do to stop our app from randomly breaking tomorrow.

> So the only practical way to stay secure-ish is to reactively patch software as vulnerabilities are publicly discovered.

But this is different from just blindly accepting any update that upstream gives you.

> And also defence in depth.

This sounds increasingly like security theater. You can always add more layers of obstacles to make things harder for malware that is already on your system, but it's not clear to me how much this actually reduces your attack surface.


> This sounds increasingly like security theater.

It does help. Quite a bit.

Each layer (e.g. firewall rules that require all internet access to go through a proxy) adds a non-trivial amount of work for the hacker to get anything useful done.

1. best case - hacker will give up.

2. good case - you have more time to notice and react.

Weigh how much each layer costs you against how much it costs the hacker to overcome it.

Things to remember:

1. Not all hackers are nation states. Most are not.

2. We must accept that no security measure is absolute, even against script kiddies. Given enough time and luck/misfortune, a JS sandbox will do "rm /sensitive/file".

Recent Log4shell example shows that one can follow all best practices and still get bit in unexpected way.


Defense in depth implies a whole lot of things, and is certainly not security theater. Usually it boils down to three major themes:

1. Reduce blast radius: assuming component X is compromised, how far and wide can it be felt?

2. Principle of least privilege: once compromised, what can X do or access? Extend this to the credentials X carries or has access to.

3. Detection: how early and how well can you detect the compromise in the previous two steps?

You can never prevent a compromise, but you can make it easier to notice when it has happened, and you can limit what the attackers can do afterwards.


> only updating if you know what the update contains

People suck at this. What this actually tends to do is mean "no updates, ever" unless you have a particularly rigorous culture of dependency management.


Or we get a culture where upstream writes in more detail what an update is supposed to contain and downstream verifies that the update indeed does what they write. If this leads to fewer updates overall, I have no problem with that.


If you change "contain" to "do", then this is the MAC security model as implemented by SELinux.

a culture where upstream writes in more detail what their code is supposed to do and downstream enforces that the software indeed does (not do anything beyond) what they specified

It didn't lead to fewer updates, it led to less usage of SELinux.


You also have to rely on all of your dependencies doing that for their dependencies and so on. It’s really a mindset/vigilance you need for the whole ecosystem.


Transitive dependencies are also your dependencies, even if you didn't consciously include them. So in an ideal world, you should vet all changes to dependencies of your codebase, including transitive dependencies.

Whether or not this would be compatible with the way dependencies are used today is another question.


I'm not even sure it's not a fool's errand with the current software ecosystem.

I think at some point it will have to be a language level feature. The ability to sandbox or provide permissions to packages/functions. Just like our OS had to, just like browsers had to, just like phones had to.

Our code is the platform, the packages the apps. It's a similar use case.

If I could download a module, and tell the compiler this module, and everything it uses (including packages that I also use, but through a different call tree) will never access the network or write to disk, it'd help grant some small peace of mind in terms of security at least.


> I think at some point it will have to be a language level feature. The ability to sandbox or provide permissions to packages/functions.

> If I could download a module, and tell the compiler this module, and everything it uses (including packages that I also use, but through a different call tree)

Javascript's prototype based inheritance looks like it can help facilitate such conditional submodule invocation. But, and partially for performance reasons, static compiling would be necessary. So Javascript and its dominant NPM package ecosystem can never go in a direction like this.

If only C++ or Python (dynamically typed, I know) had prototypes instead of class based inheritance.

Edit:

Looks like another commenter referenced what we're probably talking about:

> Now, about the technical solution to this. We have this, for well defined programming languages (read: statically typed ones, or dynamically typed ones with a clear structure).

> It's a linker. Tech from the 1950s.

> Link (include) just the stuff you want, "tree shake"/"remove dead code" whatever you don't.

Can we create an open source linker for JavaScript and NPM packages?


Doesn't deno take this approach? The runtime does kinda force the question by only supporting imports via fully qualified URLs.


It might; I'm not familiar with it, but after a quick look it seems to operate on a vetted trust model, i.e. you can use these because we checked and they are compatible. So you could miss out on a lot of the ecosystem.

I was leaning more towards the web approach where we assume everyone is out to get us, but they can't unless we give them that one permission they need. If it's a statically typed language then it'd even allow dependency walking to see what permissions are used at a granular level and we can decide not to bring in anything that's too loose. This of course won't solve cases like logic bugs, but it'd help mitigate the impact.

I'm just not sure if it's even feasible?


The checked and compatible stdlib is an extra provided by the project.

Deno runs code in a sandbox where you need to give permissions to scripts/modules for them to access local files, the network, etc:

https://deno.land/manual@v1.17.2/getting_started/permissions


Yes, but it's not granular. You either let all modules have permission X, or none of them.


I was thinking of their scoped permissions model described at https://medium.com/deno-tutorial/deno-security-65af9811d9c9

Not sure if you can scope down permissions as part of an module import or if it only works when you initialize the interpreter


Yeah, I also don't understand. But then again... the JS ecosystem is one big pile of turds.

Tech cycles with people who reinvent the wheel and keep making the same mistakes

All these problems have long been solved


The problem of new versions of dependencies breaking old code happens all the time in a variety of ecosystems. It's not exactly "solved," it's a continual problem similar to picking what you will eat for dinner. There are pros and cons to each approach, in this case the con is that every now and then you have to manually pin a version. If we did it the other way, we would have to manually upgrade versions to get obvious and easy improvements. As with most things people happily call "turds," it's actually a tradeoff and not as simple as just being bad.

As for reinventing the wheel, are wheels really settled science? As far as I know, new kinds of wheels are being created all the time. It's not just that new people are creating them, there are new things wheels need to do every day and new sets of requirements that the old existing wheel designs don't fulfill. Look at wheels from 50 years ago and they are nothing like the wheels of today. The wheels on cars are nothing like the wheels on aircraft, which in turn are nothing like the wheels on trains.


They were solved in a way that slowed progress.

So invariably, people discovered that if they threw out the complexities of the solutions, they could make faster progress.

Then they eventually ran into the corner cases.

That's the time loop that keeps happening.


Yeah, this seems less like actual progress and more that people wanted to drive in circles faster.


"Pinning the version" is inadequate. Do a shasum on the package contents. (Thanks, Poetry.)


Because dependencies of dependencies exist, and if you use a large framework of some sort, you could end up with literally over 1,000 dependencies.

Manually checking before updating does not scale.


Crazy what happens when you decide to freeload off the code of a stranger with whom you have no contract or agreement whatsoever, beyond a license you must accept to use the software - one which disclaims any warranty whatsoever, even fitness for any purpose.

I have zero sympathy for anyone complaining they were hurt by this. I think Marak is teaching an important and principled lesson here.


Lesson learned:

- Don't rely on open source software. It's all FUD. Always use software from companies you can sue under a contract.

- If you need an open source library to color your console output, pay them six figures per year.

SCNR


> I think Marak is teaching an important and principled lesson here.

What lesson is that?


Never rely on the Javascript community/ecosystem.


[flagged]


> these trillion dollar corporations just take and take and never give,

Which trillion dollar corporations are these?

A quick google says "Apple, Microsoft, Alphabet, Amazon, Tesla, Meta, NVidia, and Berkshire Hathaway" are the only trillion dollar companies.

Except for the last one, all of those companies give back vast amounts of open source support. All of you using VSCode for free - that's Microsoft's payback. Oh, and Microsoft pays for npm hosting and GitHub, plus TypeScript, C#, F#, and .NET. Hardly "never giving anything back". Apple gives Swift, Clang, LLVM, and WebKit, to name a few. Alphabet: Go, Chrome (which also means Electron, on which VSCode runs), Android, and plenty of others. Meta provides React and Redux. Not sure what open source Tesla gives back, but they have opened their patents (https://www.tesla.com/blog/all-our-patent-are-belong-you). Nvidia gives away tons of open source as well (https://developer.nvidia.com/open-source)


Most of what you wrote has merit, but I don't believe the author was trying to impart much of it. I left my comments inline with yours.

> a) blindly 'updating' and deploying code without testing is a horrible idea

This is an important risk factor each organization should be aware of, though he wasn't trying to convey this; it was merely a byproduct.

> b) these trillion dollar corporations just take and take and never give, and boy do they whine and cry when the devs don't 'hold up their end of the deal' and keep turning out perfect, fully tested software for free

This was the intent, but anecdotally, Google, Amazon, Apple, Microsoft, Twitter, Meta, Stripe, Netflix, all contribute to open source.

> c) 'the source is open so anyone can look at it and therefore it's bug free' is and always has been a mentally retarded philosophy

"Since the source is open, it must be bug-free" is a flawed conclusion. Again, not the lesson Marak was imparting.

> d) the software deployment process as it's currently practiced is horribly flawed

I don't think that is accurate.

> e) these people ought to count their blessings that the code is flawed in such an obvious and immediately detectable rather than subtle and devious and much more destructive way

I can't imagine a more destructive way that would have resulted in the catastrophe one might expect. Had Marak been more malicious, having a myriad of well-funded corporations targeting him would not have been fun.

and last but not least:

> f) giving control over one's code to evil microsoft via github is an incredibly stupid idea, as such authority WILL be abused by the evil scumbags.

Another tradeoff organizations should understand.


The author should first change his LICENSE before he pulls crap like this. Make sure the terms of the license state upfront that "billion/trillion dollar companies" are not permitted to use this library. Stop using the MIT license for your hacktivist project.


The large Fortune 500 company probably isn't even paying enough to the lowly developer who took the time to find a terminal-colors package in the first place. You really think this is the person who's going to be able to lobby for dev budget? They are literally trying to keep their own god damn job.

It might be time we have a package marketplace like Steam that companies can subscribe to and independent developers can make some money via the marketplace.


"Freeload" is a fairly loaded term, and "disclaims any warranty" isn't the same as permitting malicious action: I don't have to pay you if your house burns down and you don't have an insurance policy (contract) with me, but I'm still liable if I commit arson.

Also, AFAIK, Marak is not the original author; is he also a freeloader for attempting to commercialise this code?

> teaching an important and principled lesson here

The history of this issue speaks differently to their intentions, but even so, there is a way to "teach lessons" and that's by doing something alarming but harmless. AFAIK Marak wanted to cause harm, and acted in a way to do so.


> Packages are literally remote code exec vulns in the hands of package authors

Something mentioned in this article caught my eye:

> While searching for Marak’s libraries, I found this npm-test-access library. This library seems to be used for what the name describes: to test access to NPM. Marak seems like a very capable software engineer, and it’s unclear to me why he’d need a package like this. So, this makes me personally doubt a little bit if Marak is really behind all of this, or if maybe his account got compromised, or if something else is at play.

— [0] https://jworks.io/the-faker-js-saga-continues/

If you wanted to take over other people’s NPM packages by pushing a compromised update to one of your own widely used packages, the first thing it would do is check to see if the victim had access to publish to NPM.

Marak, who has just started publishing malicious updates to his own widely used packages, has just created a package to check for access to NPM.


This seems like irresponsible speculation, insinuating that Marak is about to commit a felony.

I’d rather skip the character assassination based on hypothetical future actions please, and focus on what’s actually happened.


> Packages are literally remote code exec vulns in the hands of package authors

There are 20m+ weekly downloads of the colors package alone. He has what amounts to remote execution privileges to people using that package. When the subject of compromised packages comes up and he’s demonstrated that he’s willing to publish malicious updates, it’s completely fair to wonder what else he’s willing to do with that level of access to that many systems. It’s irresponsible not to consider what his packages can do to your systems.


> This seems like irresponsible speculation and insinuating that Marek is about to commit a felony?

No, it's speculation that Marak didn't do any of this, but that instead his account was hacked. You either responded to the wrong comment by mistake, or totally misunderstood the one you replied to.


To resolve such issues, the central Maven repo, for example, makes artifacts immutable when you publish them.


This is true for npm. After the incident with leftpad, you can't unpublish anymore. You can, however, publish a new patch update that completely breaks everything.


> This is true for npm. After the incident with leftpad, you can't unpublish anymore. You can, however, publish a new patch update that completely breaks everything.

You absolutely can unpublish, it just requires more steps. If NPM gets a DMCA takedown request they will absolutely have to fulfill it.


> If NPM gets a DMCA takedown request they will absolutely have to fulfill it.

Assuming the package is released under a Free Software licence, what grounds would there be for a DMCA takedown?

I suppose a developer could include the lyrics to a pop song in their code (possibly encrypted), and then tell the copyright holder about it (since I don't think you can make a DMCA request on behalf of a copyright holder without their permission), but I would hope that such a poison-pill would be caught long before the package became widely depended on.

Perhaps you're thinking someone would risk perjury(?) charges for making a false DMCA request against their package, and NPM would act on the request without questioning it; but remember that NPM is owned by Microsoft and they have previously stood up to frivolous DMCA requests (after a fashion)[0]. That article has the lede: "Software warehouse also pledges to review claims better, $1m defense fund for open-source coders".

[0] https://www.theregister.com/2020/11/16/github_restores_youtu...


> I don't think you can make a DMCA request on behalf of a copyright holder without their permission

In theory, you're right. In practice, there's never any actual consequences for filing a false DMCA claim. Worst case is that the thing doesn't get taken down, but that's no worse than if they didn't file it at all.


Corps don’t care about DMCA takedowns from natural persons. I sent a takedown once, the CEO replied that he was sorry it had come to that, but they still distributed it for years under a license I did not grant. This CEO is licensed to practice law in California, btw.


Anyone is free to ignore DMCA notifications.

Some parties that are distributing other people's stuff lose a safe-harbor protection from liability themselves if they ignore it.

This means intermediaries who don't benefit much directly from distributing a given bit of content will immediately comply with the DMCA takedown process. But this does nothing if you send the notice to someone who is actually using it.

The correct move is to send the DMCA notice to the infringer's ISP/host. Then the ISP has to take it down unless the alleged infringer files a counter-notification saying they're not infringing. In turn, that counter-notification improves your position for any litigation that may ensue.


Someone could have added non-free third-party code into the package (intentionally or inadvertently, it doesn't really matter).


> but I would hope that such a poison-pill would be caught long before the package became widely depended on.

I'm not sure what about the current open source ecosystem makes you think anyone would catch something like this.


Funny, my company couldn't use Webpack 1 because a dependency of a dependency... depended on an ancient package from the days when it was common to not bother with attaching a license.

Legally, that meant that no one could use it. In practice, nobody but our legal department cared, so we had to wait for version 2, when the dependency chain was updated to remove it.


You couldn't override the package locally? Or was too much of that code actually needed?


> I don't think you can make a DMCA request on behalf of a copyright holder without their permission

Tell that to YouTube's copyright trolls


The people trolling YouTube over copyright are either making false Content ID claims[0] (not DMCA takedown requests), or claiming infringement based on an incorrect match of something they genuinely hold the copyright on.[1]

You're probably right, though, that there is enough imprecision in the system for someone to claim that someone else's code snippet infringes on the copyright of a code snippet the claimant had previously published.

[0] https://torrentfreak.com/u-s-indicts-two-men-for-running-a-2...

[1] https://freebeacon.com/culture/google-youtube-algorithm-copy...


> Assuming the package is released under a Free Software licence, what grounds would there be for a DMCA takedown?

Noncompliance with the license, e.g. by removing required copyright notices/attribution in the code (this has happened in the past). Or straight-up uploading someone else's non-free code.


The developer can file a DMCA claim asserting the code doesn’t follow his license, as famously happened with Bukkit (the Minecraft server tool).


I didn't remember that particular legal complication, so thanks for prompting me to look it up. It seems that his argument was that Bukkit couldn't be distributed because it contained Mojang's proprietary code, but the fact that it also contained some of his code meant that he was a copyright holder for the purposes of the DMCA.[0]

This seems like an edge case that wasn't anticipated by the DMCA, but I can see the argument that mixing GPL code with proprietary code is creating and distributing a derivative work, in violation of the GPL. Without proprietary code being present, though, I don't think a developer can DMCA takedown their own GPL software.

[0] "As the Minecraft Server software is included in CraftBukkit, and the original code has not been provided or its use authorized, this is a violation of my copyright." https://github.com/github/dmca/blob/master/2014/2014-09-05-C...


Not only does it require more steps, it also has to meet the following criteria[1]:

* no other packages in the npm Public Registry depend on it

* it had fewer than 300 downloads over the last week

* it has a single owner/maintainer

So while your point is taken that unpublishing is possible under some circumstances, it is not for popular packages that are in use today.

[1] https://docs.npmjs.com/policies/unpublish


None of these points have any legal standing, from a copyright perspective.

https://news.ycombinator.com/item?id=29868199


You are technically correct. The best kind of correct! In practical terms, it depends on the license used. Since most licenses used in open source will prevent you from making these kinds of requests, this consequence isn't likely to have any practical implications.


You are assuming that the true rights holders of all the code in the package actually agreed to the given license. Someone unrelated to the package development can still claim it includes an illegally-copied, unlicensed version of their code.


Despite the need to keep it clear, copyright does not reign supreme.


> Despite the need to keep it clear, copyright does not reign supreme.

neither do NPM TOS, or whatever Microsoft thinks they are entitled to, since NPM is owned by Microsoft.


Which is not what I argued :^)


> If NPM gets a DMCA takedown request they will absolutely have to fulfill it.

No, they don't. Honoring DMCA takedowns lets them benefit from an additional safe harbor against infringement liability for the allegedly infringing content, but takedowns are not mandatory on their own.


Except if they have reason to believe the code was uploaded with the permission of the copyright holder.

Then they have been granted the right for npm to distribute the source code in the context of npm.


> Then they have been granted the right for npm to distribute the source code in the context of npm.

There is absolutely no copyright or publishing right transfer that takes place when one "publishes" a package on NPM (or on Github). None.

The original author is absolutely entitled to a DMCA takedown notice and NPM would have to oblige him.


Your Content belongs to you. You decide whether and how to license it. But at a minimum, you license npm to provide Your Content to users of npm Services when you share Your Content. That special license allows npm to copy, publish, and analyze Your Content, and to share its analyses with others. npm may run computer code in Your Content to analyze it, but npm's special license alone does not give npm the right to run code for its functionality in npm products or services.


First, you agreed to the ToS when you uploaded things to npm. I haven't read the terms, but they should be enough for npm to publish on npm no matter the license.

Secondly, and as important: if you publish something under an Open Source license(1) then you _cannot unpublish it_. You granted a license to _everyone_, existing both now and in the future, to distribute and use it(2) (legally it's a bit more complex, but that's what it boils down to).

(1): Assuming you had the legal right to do so; if not, you are liable for any fallout, not npm (because of the ToS; they still need to take it down reasonably fast, but they might be able to sue you).

(2): Within the constraints of the license.


You can't legally retract opening up software source code under most if not all popular open source licenses.


It is indeed all, even if you ignore the "popular" qualifier. If a license could be unilaterally revoked, it would fail to meet the Open Source Definition for that reason.


open source =/= free software.

That's the first mistake you are making.


The differences between the two are extremely minimal, basically only relating to patent rights around the software. Go read https://www.gnu.org/philosophy/free-sw.en.html - the FSF's Free Software definition, and https://opensource.org/osd - the Open Source Definition (both by the respective parties that coined the terms and maintain them to this day) and see what the actual differences are. They're not many.


While they are indeed slightly different, I fail to see how the differences are at all relevant in this context.


Why does a new version break projects without action by the project owners? In Go you would have to explicitly update to the broken version.


Because npm install has the insane default behavior of adding a fuzzy qualifier to your package.json; for example, ^6.0.2 means all of the following versions are accepted: 6.0.2, 6.0.9, 6.7.84
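For illustration, here's a minimal sketch of what those qualifiers accept (assumptions: simplified semantics, no prerelease handling, and none of npm's special-casing of 0.x versions; real tooling uses the `semver` package):

```javascript
// Toy semver range checks -- illustrative only, not npm's actual implementation.
function parse(v) {
  return v.split(".").map(Number);
}

// ^X.Y.Z accepts >= X.Y.Z and < (X+1).0.0 (for X >= 1).
function satisfiesCaret(version, base) {
  const [vMaj, vMin, vPat] = parse(version);
  const [bMaj, bMin, bPat] = parse(base);
  if (vMaj !== bMaj) return false;
  return vMin > bMin || (vMin === bMin && vPat >= bPat);
}

// ~X.Y.Z accepts >= X.Y.Z and < X.(Y+1).0.
function satisfiesTilde(version, base) {
  const [vMaj, vMin, vPat] = parse(version);
  const [bMaj, bMin, bPat] = parse(base);
  return vMaj === bMaj && vMin === bMin && vPat >= bPat;
}

console.log(satisfiesCaret("6.7.84", "6.0.2")); // true: minor/patch can float
console.log(satisfiesCaret("7.0.0", "6.0.2"));  // false: next major excluded
console.log(satisfiesTilde("6.0.9", "6.0.2"));  // true: patch can float
console.log(satisfiesTilde("6.1.0", "6.0.2"));  // false: tilde locks the minor
```

This is why a malicious 6.x.y release gets pulled in automatically by anyone who installed with the default ^ prefix and doesn't commit a lockfile.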


It’s not particularly insane. package.json and package-lock.json have different purposes: package.json specifies intent, e.g. “I want a version that satisfies >=5.2.3 && < 6.0.0”, and package-lock.json records the exact resolved version.

Off the top of my head Bundler, CocoaPods, Cargo, SPM, Pipfile(and various other Python dependency managers), and composer also all work like this.

Cargo even makes it implicit that a version like “1” means “^1.0.0” in Cargo.toml.


That's not an issue, that assists in quickly viewing wanted package upgrades. The problem is in not using a lockfile.


Welcome to JavaScript, where every decision is a bad one


Very often, package installation is automated as part of a build pipeline. So if you want to build and deploy a new version of your software, you'll kick off the pipeline and that could potentially download a newer version of a package than was previously being used.

Incidents like this highlight that this may not be the best idea.


If you're using NPM without lockfiles, you're gonna have a bad time with discrepancies between trying things on your dev machine and building things in CI machines.

When you have a package-lock.json NPM will install exactly the same version of everything in your dependency tree, making the CI builds much more like what's on your dev machine (modulo architecture/environment changes)


Because of version ranges. Normally you install “^X.Y.Z”, which means any version at major X with at least minor Y and revision Z. For more conservative codebases you install “~X.Y.Z”, which also locks the minor.

npm install will traditionally install the most recent packages that match your constraints. You need “npm ci” to use true version locks


*I revoke my comment. Child comment is correct.


That's not true.

The first line of npm install's documentation[0] says (emphasis mine):

> This command installs a package, and any packages that it depends on. If the package has a *package-lock or shrinkwrap file, the installation of dependencies will be driven by that*, with an npm-shrinkwrap.json taking precedence if both files exist. See package-lock.json and npm shrinkwrap.

What does happen is: if you have added a new package in package.json it will be installed based on the semver pattern specified there, or if you run npm install some-package@^x.y.z the same thing happens. Further, if you modify package.json by changing the semver pattern for an existing package that will also cause this behaviour.

Running `npm install` in a package that already has a package-lock.json will simply install what's in package-lock.json. `npm install` only changes the lock file to add/remove/update dependencies when it detects that package.json and package-lock.json disagree about the specified dependencies and their semver patterns, e.g. having foo@^2.3.1 in package.json and foo@1.8.3 in package-lock.json will cause foo to be updated when running `npm install`.

0: https://docs.npmjs.com/cli/v6/commands/npm-install


That's why you specify the exact version of your dependencies.


You're never going to be able to prevent that at a technical level. You can prevent it with workflow, though: 1) sync packages locally and build from those versions; 2) peg to a specific version and don't auto-update; 3) deploy to a test environment and not directly to production.


A key difference with Maven projects is that you specify exact dependency versions instead of “always use latest” or some variant of that, as is pretty common in the Node world.


This is not necessarily true, there are version ranges: https://www.baeldung.com/maven-dependency-latest-version

Admittedly, I don't think it has nearly as wide a usage as it has in the NPM world. Dependabot (I know I'm not the first to mention it, here, today) is probably more of a factor.

Still, it strikes me that this sort of "attack" (or mishap) is exceedingly rare in the Java ecosystem, while it's pretty common in the NPM world, and I don't immediately understand why that would be so.


I was not aware of that feature. To call it rare would be an understatement I think.

> while it's pretty common in the NPM world, and I don't immediately understand why that would be so.

I think it boils down to Node projects typically specifying dependencies in the form “any version >= X”, effectively “always use the latest.” Dependencies can therefore get bumped silently just by rebuilding, essentially. Whereas in the Java world updating dependencies is a deliberate process.


We abuse jitpack.io and MASTER-snapshot to keep our Minecraft Maven builds up to date.


With lock files, you will always be stuck with whatever version you first installed until you explicitly ask npm to upgrade, or delete your lockfile.


npm does as well, they made this change after left-pad.

You also can't unpublish once a single person has downloaded the package, I believe.


Immutability feels like the best approach here. Go's module system is pretty good in this respect: "proxy" is just a proxy that serves module code, and "sum" is an append-only transparency log of the hashes of all published versions. You can't "unpublish" from the log, but you can get code hosted on proxy removed for various reasons... which users can protect themselves against by running their own proxy. Go's module version resolution strategy means that the chosen module version never changes without explicit input from the user so no "publish a new version that breaks everyone's CI" issue.

All together I don't see how GP's "email php files around" is as any better than this system in any way.


How does that solve the issue here of new broken versions of packages being published?


That's another JS ecosystem widespread malpractice.

Autobumping versions, or version ranges as they're called in Maven land.

Dependencies should only use fixed versions and all updates should be manual.

You should only use auto-upgradable versions during development, and the package manager should warn you that you're using them (or your dependencies are).


If package A depends on package C at version 1.0 but package B depends on C at version 1.1, what version of C will be pulled in?

Dependency management is not as simple as only upgrading one direct dependency at a time after careful review.

The NPM ecosystem is particularly difficult to work with as it has deep and broad transitive dependency trees, many small packages, and a very high rate of change.

You either freeze everything and hope you don't have an unpatched vulnerability somewhere or update everything and hope you don't introduce a vulnerability somewhere.
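For what it's worth, npm answers the A-wants-C@1.0 / B-wants-C@1.1 question by installing both: one copy at the top of node_modules, and the conflicting one nested under its dependent. A hugely simplified sketch of that layout strategy (the `layout` helper is hypothetical; npm's real algorithm also dedupes and hoists transitively):

```javascript
// Given direct dependencies and the versions each one wants,
// hoist the first version to the root node_modules and nest conflicts.
function layout(deps) {
  const root = {};   // top-level node_modules: package -> version
  const nested = {}; // dependent package -> its private node_modules
  for (const [pkg, wants] of Object.entries(deps)) {
    for (const [dep, version] of Object.entries(wants)) {
      if (!(dep in root)) {
        root[dep] = version; // first version claims the root slot
      } else if (root[dep] !== version) {
        if (!nested[pkg]) nested[pkg] = {};
        nested[pkg][dep] = version; // conflicting version is nested
      }
    }
  }
  return { root, nested };
}

console.log(layout({ A: { C: "1.0.0" }, B: { C: "1.1.0" } }));
// → { root: { C: '1.0.0' }, nested: { B: { C: '1.1.0' } } }
```

Both versions end up on disk and in the final bundle, which is part of why JS dependency trees balloon the way they do.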


> Dependency management is not as simple as only upgrading one direct dependency at a time after careful review.

Most package managers won't allow these stunts and conflicts have to be resolved UPSTREAM. NPM chose to go the "YOLO" way and will fetch every single version of a package that meets the dependency demands. Terrible design, but the purpose of that was growth for NPM, the company, not the best interest of the ecosystem.


There are package exclusions, package forcing and of course, full dependency tree checks where you review what everything pulls in.

The JS ecosystem will probably have to change but because it's so decentralized, that change will be orders of magnitude harder than, for example, PHPs transition from 3 (4, 5) to 7.


> The JS ecosystem will probably have to change but because it's so decentralized,

Is it? Everybody is pulling from Microsoft owned servers now, as Microsoft owns both Github and NPM.


You're right in the package storage sense.

I don't think you're right in the builder/building practices sense.


I'm sorry but this is completely wrong. NPM has lock files which explicitly lockdown the version you have downloaded after your first install. These are commited to source control, so all subsequent installs will use the exact same version of dependencies, and nested dependencies too.

You need to ask npm to upgrade or delete your lock file and node modules to run into this issue.


You shouldn't blindly pull updates into production, how do you know if a non-malicious update breaks your app if you don't do any basic testing first?


[flagged]



The tone changes halfway through; it wasn't all parody, though :-)

Thought it would be clear enough.


npm also has immutable artifacts.


Yes, and my question is: why didn't they have them from day 1???

Or at least day 1000? Npm was launched in 2010, 11 years ago, and I'm quite sure immutable packages were implemented about 3 years ago.

Again, this is not rocket science, we knew the attack angles.


> I don't want that level of control over other people's projects, it's scary

How far do you take this though? The average GNU Linux distro ships with a whole pile of packages already installed, from a multitude of different authors.


Packaging is most of the point of a distro. They are specifically taking on that responsibility. They also have a better perspective to handle overall compatibility.

In a sense you've pointed out the alternative to having the programmer handle the packaging -- having some third party package and distribute it. And this separation of responsibility turns out to almost always be the better solution. Distribution and coding are, after all, two full jobs (without 100% skill overlap). Plus, hopefully it indicates that at least two sets of eyeballs have at least glanced at the code (not the full desired many-eyeball outcome, but as good as we can expect sometimes).


Given that Debian (and its descendants...) packages a shitload of npm packages, it's a wide stretch to say there is more QA for these packages from the Debian side than there is from the npm side.

The one thing that Debian provides is that in the case there is a security issue, admins worldwide only need to do "apt update && apt upgrade" and they are safe, without having to check all of the software that runs on their servers (as long as said software comes from Debian, that is!).


I do think people would be served, generally, by being more aware of the fact that distros are not doing some hardcore security vetting. But the alternative is just to use whatever was pushed up to NPM, right? In that case, Debian packager+NPM push > NPM push by definition, unless the Debian packager somehow provides negative QA, which seems unlikely. (Also, on the incredibly unlikely offchance that some Debian packager reads this comment -- your work is incredibly useful and I very much appreciate it; just trying to be realistic about what exactly is provided by your group!)


> unless the Debian packager somehow provides negative QA, which seems unlikely

It has happened before. Last time there was anything major was over a decade ago though.

https://lists.debian.org/debian-security-announce/2008/msg00...


Yes, but AFAIK, those are heavily tested or audited in some manner. That's different from including code written by randos in your app that they can remotely change at any time.


> Yes, but AFAIK, those are heavily tested or audited in some manner. That's different from including code written by randos in your app that they can remotely change at any time.

It seems like the problem here is more cultural than technical, specifically that the JavaScript community has fully embraced packages that are "written by randos" that are "wrappers around three-line Stack Overflow answers."

I use packages, but I wouldn't use any that are developed by a rando with no reputation or "institutional oversight." An important part of choosing to use one is evaluating the maintainers.

Does the JavaScript ecosystem have anything like Apache Commons? I'm guessing not, but it probably should.


It's both technical and cultural.

Javascript is used on the front end. Front end devs obsess (or at least used to obsess) over download size. So you'd have crazy stuff like custom builds of Underscore (https://underscorejs.org/) with just the functions you wanted. Think manual tree shaking, if that makes any sense. You could get a package of Underscore with just map, filter and reduceRight, if you wanted to.

Now, when Node came around, people wanted as much as possible to have the same libraries available on the front end, so the same obsession with size was carried over.

Ergo the micro-milli-nano-packages they make.

Now, about the technical solution to this. We have this, for well defined programming languages (read: statically typed ones, or dynamically typed ones with a clear structure).

It's a linker. Tech from the 1950s.

Link (include) just the stuff you want; "tree shake"/"remove dead code" whatever you don't.

https://www.joelonsoftware.com/2004/01/28/please-sir-may-i-h...

Java's largely to blame for this: Sun REALLY, REALLY hated stuff that could be hooked into any OS and wasn't portable, so they didn't provide a linker. Everything was supposed to be on their JVM, you were going to install their JVM everywhere (2 billion devices!!!), and to hell with small stuff or, heaven forbid, including native libraries. Javascript followed (and on top of the Java restrictions, they added a dynamic, poorly defined language that would have made linking with tree shaking really hard anyway). .Net also followed.

Almost 3 decades later we're trying to undo that damage.


The problem with tree shaking has been twofold:

- JavaScript is a very dynamic language with dynamic property access and a few other features that make it hard to guarantee that the linker won't accidentally remove too much

- historically there was no standardized "module" format until ESM (ES modules) came up (with some time in between with a few competing non-standardized proposals), so statically analyzing exports/imports was difficult; in frontend you'd long rely on just creating and reading global variables (i.e. side-effects).

Hence it's been "safer"/easier to create small packages.

But it's not only this. Once you put a mega-package in your repo, it's easy to gradually start relying more and more on the things it gives you. Even if it supported perfect tree shaking, you'd call one method here, one method there, and with each build your bundle size would balloon (which is not good if you could have written one line of code where the lib method's code is 1000 lines because it supports IE4 and 17 parameters).

Whereas when you rely on small packages, you need to make a conscious choice each time to pick another dependency.

You probably don't care about this on servers written in C++ or Java that much; but on frontend it's a big deal; hell, even when building native apps for Android/iOS you have size limits for store submission / limits on the number of methods (a tech limitation in Android). Big companies invest crazy money to shrink their native bundle sizes (https://blog.pragmaticengineer.com/uber-app-rewrite-yolo/).


I think history (20+ years from now) will prove that for all but the smallest, almost toy, systems, dynamic typing from the 80s and 90s was a mistake.

The maintenance burdens these languages are creating will make Cobol look like a kiddie bike with training wheels next to monster trucks.


I think that dynamic languages played an important role in pushing for the development of mainstream static typing that didn't suck. ML's been around for a very long time, but there was seemingly little interest in pervasive type inference in languages actually used in industry until they had to compete with the concision of dynamic typing.

Actually building large systems in dynamic languages? Probably going to turn out to be a mistake though.


I really don’t think it is fair to blame it on Java. Having very little native dependency is a huge plus to an ecosystem (just look at what Java will be able to do with Loom thanks to the almost all-Java dependencies). Also, Java was particularly keen on downloading class files at runtime, so linking everything was not even possible.

And it is not even a difficult thing to fix without going the linker way: Java's modules essentially solve it (as well as JavaScript modules could/can) — just specify what is visible outside a package and both ecosystems can “tree-shake” non-used code (though I dislike this nonstandard term)


> Does the JavaScript ecosystem have anything like Apache Commons? I'm guessing not, but it probably should.

This isn't a Javascript problem, this is a node problem. Node is just a Javascript platform among many. The fact that the node community decided to go with all these "nano packages" has absolutely nothing to do with Javascript. Nothing forced Node, the distribution, to come with such a barebone standard library. Absolutely nothing... but the idea of being dependent of NPM which was orchestrated by NPM founders, that's how NPM, a private business made money and eventually sold to Microsoft.


Not just that they've fully embraced the "written by randos" but even worse: "as soon as the rando publishes an update or change, use it!" They seem to fully automate updates because packages are so poorly written (and frankly, it probably helps with revenue stream, if their client's websites occasionally break and need them to fix it.)

...and meanwhile NPM's idea of vetting packages is basically "YOLO, BRO!"


In practical terms, can they really be audited?

This is at least obvious DoS, I’m sure it’s easy to slip in an innocuous line that, dunno, ships your ssh keys to some rando server.


look at diffs?


I actually do this, on occasion. (Not 100%, and not to a degree I'd say "yeah, I'd've caught this colors/fakers thing." But enough to say that I've seen a decent sample.)

There is, on average, literally no difference in quality between commits on FOSS projects and commits on projects we pay external entities for. Some paid projects are just crap code, and some FOSS code is extremely high quality.

I've had to roll-back / hard-pin dependencies from both low-quality FOSS & low-quality paid projects because of commits that — once you find & read them — are just bananas.

(I have no idea how to solve the root problem here, honestly.)


> I have no idea how to solve the root problem here

I'm not even sure what the problem is. If it's "updating dependencies introduces severe side effects", that, I think, should be accounted for in the process.


Can you really say, with a straight face, that you inspect the diffs of your entire dependency closure every time you deploy an update? With the level of scrutiny required to detect a maliciously-obfuscated security exploit?

If you can, you're an infinitely more diligent developer than I am, that's for sure.


Fortunately the problem could become more tractable if something like SES / Endo takes off:

"Endo protects program integrity both in-process and in distributed systems. SES protects local integrity, defending an application against supply chain attacks: hacks that enter through upgrades to third-party dependencies. Endo does this by encouraging the Principle of Least Authority. ... Endo uses LavaMoat to automatically generate reviewable policies that determine what capabilities will be distributed to third party dependencies."

https://github.com/endojs/endo
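The policy idea can be sketched crudely in plain JavaScript (to be clear: this is NOT a sandbox, and the `policy` shape and package names are made up for illustration; SES actually freezes the primordials and runs code in compartments):

```javascript
// Toy "principle of least authority": a dependency receives only the
// capabilities its policy grants, instead of ambient access to everything.
const policy = {
  "left-pad": [],             // pure string utility: needs nothing
  "analytics-lib": ["fetch"], // explicitly granted network access
};

function capabilitiesFor(name, available) {
  const granted = {};
  for (const cap of policy[name] || []) {
    if (cap in available) granted[cap] = available[cap];
  }
  return granted;
}

const available = {
  fetch: (url) => `pretend network call to ${url}`,
  readFile: (path) => `pretend file read of ${path}`,
};

const padCaps = capabilitiesFor("left-pad", available);
console.log("fetch" in padCaps);    // false: a padding lib can't phone home
const anaCaps = capabilitiesFor("analytics-lib", available);
console.log("fetch" in anaCaps);    // true: granted explicitly
console.log("readFile" in anaCaps); // false: never granted
```

Under such a scheme, a compromised update to a string-padding package simply has no network or filesystem capability to abuse, whatever code it ships.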


Nice link, thanks! Good to see Mark Miller is still working in this space.


I look at gits diffs to see what's changed between versions, but that's for fun and not diligence.

> With the level of scrutiny required to detect a maliciously-obfuscated security exploit

Nope. Not paid to do that and I have not been given any such responsibility.

That said, I think the risk from this attack vector is very low.

Packages are rarely updated to the latest version.

We don't use a lot of packages.

We mostly use packages from trusted sources.

We use packages that are open source.


And frankly, not just diffs. You’d need to inspect the initial state, and any new dependency added. That’s potentially hundreds of thousands of LoC.


There’s no reason people can’t keep local caches of these libs if it is a major concern. This seems like a non issue.


Stale libraries are more likely to contain known security vulnerabilities.


I know it's bad practice, but I just check in vendor files/libs to source control. Makes auditing new releases of libraries a bit easier. Assuming they aren't binaries, of course.


I also like doing this, but with node, you have a massive tree of thousands of files. It's crazy and gross.


Yarn 2 pnp kinda fixes this


I don't recommend this approach


I haven't had issues yet, but it's considered bad practice for a reason. What headaches am I in store for?


You're committing something that is not part of your source code into your version control system; assuming your VCS is git, this is irrevocable without rewriting history.

It's mainly a nuisance. It takes up unnecessary space, introduces possible annoying merge conflicts, etc., and it's not trivial to remove.

For reference, I migrated repositories from TFVC to git. One team relies on checking packages into source control, the other does so far less. One repo is significantly nimbler.

Checking packages into source control is making your VCS a package manager. Presumably you have one. Don't hammer nails with your screwdriver


The benefit of having your dependencies vendored is that you have everything needed to build your application without having to download stuff from the internet. You get to ensure what exactly makes it into your application. Yes, it will increase the repository size, but I don't see why merge conflicts would be a problem since you are just replacing a file with a new version.


> you have everything needed to build your application without having to download stuff from the internet

that's using version control to act as a proxy. AFAIK, a lot of package managers already cache local copies

> You get to ensure what exactly makes it into your application

sorry but i don't follow

> I don't see why merge conflicts would be a problem since you are just replacing a file with a new version

Are you working alone?


>AFAIK, a lot of package managers already cache local copies

But this cache is usually not easily transferable to someone compared to them just cloning a repo.

>sorry but i don't follow

You have the source code to all of the dependencies in your application.

>Are you working alone?

How many forks of a dependency do you use? Just using the master branch and upgrading along that should be good enough for 99% of your dependencies.


> But this cache is usually not easily transferable to someone compared to them just cloning a repo

right, so they need to download "stuff from the internet". It doesn't matter much whether that stuff is from a remote repo or hosted by a package repository, except if it's architecture-dependent, in which case you definitely don't want to share across architectures. Not to mention they may already have a viable copy in a proxy or cache

> You have the source code to all of the dependencies in your application

I'm afraid I still don't follow

> How many forks of a dependency do you use? Just using the master branch and upgrading along that should be good enough for 99% of your dependency

Well, if I was expecting things to not break I'd never follow upstream master for a dependency.

But the question pertained to merge conflicts. If several people track the same remote and check in dependencies into VCS I'd expect annoying merge conflicts

Or are we perhaps misunderstanding each other? I'm not sure I follow what you mean by forks. Releases are typically on different branches or tags


I imagine your commit log would be polluted with commits just for changes in the packages.


You would have one anyways for changing the lock file or whatever. Changing your dependencies is something that you may want to be able to undo. It's useful to be able to go back to a known working version of your program with versions of your dependencies that you know work.


Git submodule documentation explicitly says[1] it's designed for adding 3rd party libraries to a project in this exact scenario.

[1] https://git-scm.com/book/en/v2/Git-Tools-Submodules#:~:text=....
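A minimal offline sketch of that workflow (repo and file names are made up; a locally created repo stands in for the third-party upstream so the example runs without network access):

```shell
set -e
work=$(mktemp -d)
cd "$work"

# Stand-in for a third-party library repo
git init -q libdemo
(cd libdemo \
  && echo 'module.exports = {};' > index.js \
  && git add index.js \
  && git -c user.email=demo@example.com -c user.name=demo commit -qm 'initial commit')

# The application vendors the library at a fixed commit
git init -q app
cd app
git -c protocol.file.allow=always submodule add -q "$work/libdemo" vendor/libdemo
cat .gitmodules   # records the path and URL; the pinned commit lives in the index
```

Updating is then an explicit act: pull inside `vendor/libdemo`, then stage and commit the new submodule pointer. (The `protocol.file.allow=always` flag is only needed because recent git versions block local-path submodules by default; it's harmless on older versions.)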


I’ve been doing it with Yarn 2 / PnP and it has been great so far.

You checkout a project and start it. No downloading required, and no 10000s of files.


You’re free to update as you like. It’s entirely possible to audit your local packages.

Letting maintainers update your projects is a convenient feature. If it is a liability in your use case you can work around it. Yes it will take more effort, but your use case justifies it.


I’m a self-taught Python programmer. I haven’t done much front end.

Why do some JS devs import tiny packages to do simple things? I don’t feel like I’ve seen this behavior in Python. Is it because browsers are an awful environment?


They took the Unix philosophy of doing one thing well and drove it off a cliff.


I literally lol’d. Thank you for the laugh


In browser land, the less code you ship, the better. Removing dead JS code is hard because of the language's dynamic nature, and CommonJS imports make it harder still for tree-shaking algorithms.

So, people had incentive to write and use smaller packages.

Now the situation has improved: if you use ES modules all the way and only import what you need, your bundler can remove unused modules from the final build.


> In browser land, the less code you ship, the better

That's kinda funny. These node tools I run into these days usually take forever to install, waste a lot of space due to creating their own package mirror and generally are prone to break because of dependencies.

There is usually nothing tiny about them, even though they only have a few lines of code.


Usually the "node_modules" folder (with tons of files) is not deployed to production for front-end applications that are going to run in the browser.

When we make a build for the browser, we bundle all dependencies into fewer files that only contain the code that is actually used.


stdlib of JS vs python or php is absolutely tiny. It's improving over time, but it's still playing catchup.


And if you're optimizing for the browser, you can't count on the improvements being there. So you still want to use third-party libraries for their polyfills.


As someone who does front-end JS stuff and uses a bunch of packages here is why I do it:

I got tired of copying and pasting the same classes between projects. The worst part was I'd add new features to the newer projects, and when I had to go back to work on something from a year or two ago I'd have to spend time backporting all the new code. I also don't like how bloated a bunch of the "popular" packages are. Why do something in 40kB of JS when you can do it in 3kB? Smaller is faster, which is important to me because one of my main selling points is that I build modern-looking marketing websites that load and render in under 5 seconds on a slow 3G connection.


> I got tired of copying and pasting the same classes between projects. The worse part was I'd add new features to the newer projects and when I would have to go back to work on something from a year or two ago I'd have to spend time backporting all the new code.

Why not create your own common library and publish it to a private repo? There's a lot of options between using a stranger's package and what you're describing.


> Why not create your own common library and publish it to a private repo? There's a lot of options between using a stranger's package and what you're describing.

Exactly what I thought. Fascinating how they could have missed this obvious solution.


I prefer to look at the code and decide if A. it's worth it, B. it wouldn't be more fun/better to just clone it as a 'plugin' in my own code, and C. it looks like it has a good team/support around it.


I think it's more that dependencies are generally used less because all the tooling around them is much worse than in other languages.


npm (or yarn) for the most part works much better than python package managers


They don’t know better.


I often find myself... ripping out a lot of what I 'need' into something that maybe isn't always well-maintainable, but it's my fuckfest of code, and if something breaks it's because I chose to eff it up myself. Especially when it's something API-related; most PHP API SDKs are poorly maintained anyway and need updating as I go, plus I usually learn the API pretty well as I rebuild and test the new classes.

Ironically, I'm working on a Laravel package myself that I'm hoping to maintain (and turn into a viable side project). It's basically Jetstream with SaaS components and UI elements: think UI component libraries + Laravel Jetstream + extra SaaS/ERP things like tenancy beyond just teams, e.g. an Org which can have teams, projects, and employees, where each user can belong to multiple orgs, teams, and projects, with attached profiles for each.

For a lot of the UI stuff, I've basically repacked MIT-licensed Tailwind CSS components for Laravel/Livewire, added some extra configurations and options, and made it so you don't need Jetstream, just this one thing. So a lot of it is actually others' packaged code pulled into one package; ideally there's one dependency that could easily be forked and repurposed for a team's needs while covering a lot of boilerplate possibilities.


I’m surprised the AWS SDK doesn’t pin its dependencies and put new versions through its paces before letting end users possibly use a compromised utility with catastrophic results.


> At the very least, it takes them under a minute to break your app, simply by deleting their package.

Not really, if you pin your versions exactly and don't do auto-updates. This also means that you have to update your packages manually and inspect what the latest versions are doing, which is good practice anyway. NPM packages can no longer be unpublished, as far as I know.
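For npm specifically, exact pinning can be made the default. A sketch of the relevant config (this relies on npm's documented `save-exact` setting; the package name and version are just examples):

```shell
# .npmrc (per-project or per-user) -- a config fragment, not a script:
#
#   save-exact=true
#
# With this set, "npm install colors" records the exact resolved version
# ("colors": "1.4.0") in package.json instead of a floating caret range
# ("colors": "^1.4.0"), and "npm ci" installs exactly what
# package-lock.json pins rather than silently drifting.
```

The caret range is the default behavior, which is why merely having a package.json is not enough; the lockfile plus a deliberate, manual `npm update` is what makes updates an auditable event.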


True enough overall, but I'm surprised no one has let you know that NPM actually doesn't allow you to delete packages anymore (after the left-pad controversy). You have to email them and then you are judged by how many downloads you have. If you have users relying on your package, they do not let you delete it.


I have worked with several banks as a devops consultant and have helped implement self-hosted proxy repositories for NPM, NuGet, etc. These proxies save a copy of every package downloaded and store it locally for as long as the bank wants. Developers are then blocked from downloading from any other repository than these proxies. This solves the problem of packages being taken offline, but it does very little against malicious code on new versions of the package. However it also provides the ability to reproduce code X years into the past, as can be required of banks by financial regulators in various countries.

As always it is a cost/benefit trade-off: what is the benefit to auditing every package vs. the cost of auditing every package?
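On the client side, pointing developers at such a proxy is typically a one-line config (a sketch; the hostname and repository path are made up, loosely following Nexus-style conventions):

```shell
# .npmrc checked into each project -- a config fragment, not a script:
#
#   registry=https://nexus.internal.example/repository/npm-proxy/
#
# Every tarball the proxy serves is cached on first download, so builds
# keep working even if the upstream package is later pulled, and a build
# from years ago can be reproduced from the same cached artifacts.
```

Combined with outbound firewall rules that block the public registry, this is what actually enforces "developers can only download from the proxy".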


IMO this is what forking is for. You don't have to rewrite it, you simply copy it somewhere that it isn't going to change unless you make the change happen, because some third party screwing around with your production code is just Bad News even if they have the best of intentions.

You should still look over it and make sure it's not obviously malicious, but simply using forks or local repos of open source packages would probably save 98% of these kinds of headaches (with a 2% allowance for insecure/malicious open source code).


A peer-reviewed standard library is key, just like glibc or libstdc++ for C/C++: it covers 80% of the normal use cases, and for the rest you're in charge of quality checks.

With JavaScript/Node, you can write 100 lines of code that pull in 100 modules; it's quite different, and hard to assure quality and safety over a long period of time.

I hear Rust also has a very small stdlib, which gives me concerns, but I don't code Rust. I do hope, though, that all languages can have a stdlib that is 20% of the size and covers 80% of normal needs.


I wholeheartedly agree with this commentary. Any insight into why this is so much the case with npm but seemingly not as bad in other ecosystems? (Dependency trees in npm are huge.)

I feel like the implicit trust makes even using popular packages such as React seem a bit sketchy. I'm betting React devs audit upstream packages, but I don't know of any formal statements that they do. Multiply that by all the other common projects and you have a huge auditability issue.


>> Any insight into why this is so much the case with npm but not seemingly as bad in other ecosystems (dependency trees in npm are huge).

I would think that the sheer popularity of the JavaScript (and therefore Node) ecosystems contributes partially to it - there's a massive industry out there about skilling new developers up in JavaScript, Node, and some front-end frameworks. But it definitely doesn't explain all of it.


I actually attribute it more to the micro package architecture, but maybe I don’t know it well enough. I don’t know any other ecosystem with a left-padding package for instance.


I noticed a few years ago that my bank didn't use much in the way of dependencies for their website (possibly just jQuery) - clearly they agree that depending on React opens you up to depending on... who knows what.


I’ve considered that; there are a few scenarios:

- some sites favor security over UI/UX

- some organisations have the funding to review packages such as react

In the future I think security bureaucracy will prevent security conscious organisations from having nice new things. This happens in places like the military (who were known to use WinXP long after public EOL).


Yeah I'm sure a bank could afford to review React but even a minor version bump would then become a very expensive auditing operation.


Totally agree with you. I think it's time to think carefully and immediately start using services like Vulert (https://bit.ly/336DZub) that track your open-source software for free and notify you in real time if any security issue is found within your application.

At least this way we can protect ourselves from supply chain attacks.


Extending your logic to the extreme, you wouldn't be able to use any OS, compiler, or processor unless you built it yourself. We can't build practically anything of use without some 3rd-party library, in any language or platform.

The problem here is not packages but lack of stdlib and tendency of package writers to have shit ton of further dependencies.


I wish there was a package manager for node.js that is made for "static" or offline usage, and is able to compare headers of libraries before upgrading them.

But here we are, 10 years in, with nobody giving a damn about semantic versioning.

Life could be so much easier with an actual package manager that isn't just some git clone replacement.


OMG, your mailing list for packages: it's not a technical cure, but it connects developers and library consumers and creates far more accountability.

GitHub issues can work in theory, but in practice developers are often slow to respond, i.e. issues are where problems go to die.


You don't have to always pull the latest release on utility packages if you want to evade such problems. Sure, you would need to audit a lot of packages in certain languages...

But yes, I prefer to "vendor" my dependencies too, especially on large projects.


Security vulns are fixed daily in most webapp dependency trees.

If you do not update you are vulnerable to piles of issues anyone can look up.

If you update blindly you may import new obvious supply chain attacks.

The solution is actually doing code review. If you cannot afford to review 2000 dependencies, then you cannot afford 2000 dependencies. The extra effort to use a minimal framework and some cherry-picked functions may be worth it for most orgs.


In my opinion, golang does this correctly. You can just git submodule the source code of your dependencies. That way, you're always in control over what gets updated and when.


That's true of most package managers too though. It's very trivial to vendor the code yourself in both NPM and Cargo for example.
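As a rough per-ecosystem reference (a sketch, not a script; commands reflect recent toolchains and are worth checking against current docs):

```shell
# Go: copy all module dependencies into ./vendor; when that directory
# exists, builds use it instead of the module cache.
#   go mod vendor
#
# Rust: copy all crates into ./vendor and print the .cargo/config.toml
# stanza that redirects builds to those local copies.
#   cargo vendor
#
# npm: no single built-in command, but committing node_modules, or
# fronting the registry with a caching proxy, achieves the same effect.
```

In each case the vendored tree is checked into (or alongside) the application repo, so upstream deletions or hostile updates can't change what you build.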


Yes ! Re-inventing the wheel is literally perfectly fine. You take a round piece of wood, make it roll, and BAM you re-invented the wheel. No need to go to Toyota, buy their wheel making machinery for millions and run that to make a little thing roll.

It's the same with packages. It's FINE to have to redo a bit of thousand-separator logic; do you truly need a transitive dependency hell with ^1.1.1 in the package list that auto-upgrades at random?! I've had several cases where the whole company was all hands on deck because some dep somewhere moved up and all subsequent builds failed. What are people doing in Node? We never had these issues in Java.


How complicated is it to just clone the repo and point your apps to your forked version? That way, this would not have happened.


> Packages are literally remote code exec vulns in the hands of package authors

Reminder: this is why traditional Linux distributions exist.


> are literally remote code exec vulns

the whole point of the internet


I applaud you :) HTML-based Web 1.0 was so much safer and faster...


Should I get paid for my multiple contributions to faker? (I don't think I should.) I've submitted several PRs for generating data, all of which were accepted. Even back then the maintainer was barking about money...

Honestly, the project would be better off forked. He did not write this library entirely by himself; at this point I just see him as holding other committers' contributions hostage. It's a bad look; why anyone would want to deal with him after this stunt is beyond me.


> It's a bad look, why would anyone want to deal with him after this stunt is beyond me.

The maintainer appears to be unwell:

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...


That article is from 2020 (okay, I realize that doesn't mean much--from September specifically). It says he was charged with reckless endangerment at the time, but nothing shows for his name in the NY eCourt system now which seems to mean it was either resolved already (would seem to be quite fast, and indicate no further charges) or was dismissed. No record of him at the DOC either.

Lately he seems to have come to the belief that Aaron Swartz was intentionally targeted because he discovered evidence of child abuse at MIT, which is a pretty far fetched claim that seems to just be riding on the Epstein media attention.


The maintainer seems a few steps beyond unwell. Seems like he was planning a terrorist act of some sort. Even if one is mentally unwell, I would not first describe them that way should they choose to premeditate harm against others. If you're building bombs, you're almost certainly at that point. At the very least, the maintainer is unstable if not actively malicious and seeking to cause harm however he can.


> I would not first describe them that way should they choose to premeditate harm against others.

That's why a judge often has to determine if the person is mentally ill or just a criminal.

There are plenty of situations where someone has hurt other people and they've been literally insane.

If your insanity causes you to believe that someone is doing horrible things like murdering children you might decide to do them harm, thinking you're the good guy.

In reality you have paranoid schizophrenia or a litany of other mental health issues.

It's not cut and dry :-/


I think it's very nearly a difference without meaning here. If you commit mass violence out of insanity, you're probably going to be in a mental hospital for the rest of your life, which is functionally akin to a lifetime prison sentence: in both cases the perpetrator is removed from society forever. It's not that I have no empathy for the mentally unwell; it's just that when your illness begins to cause actions that harm the world around you, you're defined in terms of those acts as well as your illness. A mentally unwell person who kills someone is still a killer (I purposely stay away from "murder", as I don't know whether the law applies that term to those who are not convicted by reason of insanity).


Let's not start claiming people are planning terrorist acts without any proof. Those claims are extremely hard to get rid of, especially on the internet.



All that proves is he’s got an interest in explosives. Believe it or not, explosives are a legitimate hobby for some folks.

Unless there is other evidence, you’re just blindly speculating as to his intent.


It's fine to speculate given a certain pattern of behavior


No, it is not. Our brains process speculation as facts.

This thread and the press coverage are just reputation damage. Most newspaper articles are not even 90% true, so rampant theories float around until some innocent people are burned for life.


You do realize that by doing what he did, he didn't just burn greedy F500 companies that didn't have CI/CD set up properly, but also non-profits and other entities that aren't really resourced to deal with this type of stuff.


is this actually the same guy or some other person named marak who happens to be a software dev? The article appears to be from september 2020


The article names Marak Squires, and the colors.js repo has 'Marak Squires' as the author.

The author's twitter also commented about losing their possessions in a house fire in October 2020.


> Squires is a software developer and early Bitcoin investor

Or maybe his crypto bet went in a poor direction...?


IIUC he lost all of his Bitcoin (wallet seed) + precious metals in a fire or something, and was nearly homeless for a period of time.


Not to mention the fact that this is just a port of faker from Ruby and Perl (from last version's README, "faker.js was inspired by and has used data definitions from ...")


"Inspired" my ass. It's a fork that doesn't keep the licence terms.

https://news.ycombinator.com/item?id=27254092


seems like he has a history of this (HN 2010): https://news.ycombinator.com/item?id=1448309


You may also enjoy "Jim deletes the tenth grade":

https://web.archive.org/web/20081020082418/http://www.jimbas...


and he seems quite proud of getting banned from Github in 2013: https://youtu.be/varf6oWaFtU?t=202


Well, he just got banned again from GitHub today.


How many other maintainers are getting increasingly annoyed at the users of their code? Entitled users demanding changes to fit their use cases, megacorps using the code for free, other megacorps forking the code and launching it as a commercial service, we've been hearing for years about the problems of being an OSS maintainer.

Focusing on the troubles of this one person is a mistake. Of course the more "unbalanced" individuals will crack first. But we should expect more and more to go the same way (not necessarily by sabotaging their code though).

This model of software development is unsustainable.


> This model of software development is unsustainable.

I don’t think this is right. It seems very sustainable as evidenced by 30+ years of sustained OSS development.

It seems very sustainable: we live in a time of the best OSS software ever produced, with more high-quality software from ideological volunteers than ever before.

I’ve seen statements similar to yours and they just seem so at odds with reality.


I don't think the OSS work around GNU, etc, can be directly compared to the "new" model around npm/gems/pip/etc, which has really only been going for a bit over 10 years. I can't put my finger on why they're different, but they definitely feel different.

The big question is what happens when a maintainer wants to retire and a successor can't be found? Or (as in this case), when a maintainer gets so annoyed by their users that they refuse to continue co-operating with their user base.

We don't really have an answer for either of these questions yet, and we won't bump into them until maintainers get old or angry. But we have been predicting that this problem will happen.

Now we're bumping into this problem, it's our new reality. What's the solution?


The package ecosystem is growing and there’s more good packages each year. 10 years isn’t 30, but it’s still a long time to show success.

That there have only been a few problems like this despite millions of users is a sign of strength.

I hedge by pinning to specific versions and keeping my own package manager (RStudio) to maintain mirrors of the packages I scan. That the packages are open source means they are easier to scan or fork.

If a maintainer stops then the project can be forked. If nobody wants to fork then that probably means no one cares enough. And that’s ok.

Mostly it forces us to be flexible. I don’t think software is a thing but a process or cycle.


Success != sustainability.

If a thing has an expiry date that is >10 years, then you're not going to see any problems for 10 years. That doesn't mean the expiry date doesn't exist, or that the thing won't hit it. It can be wildly successful for all those years and then hit the expiry date and stop immediately.


It’s not necessarily the same, but it’s a good co-signal. And it's definitely useful for refuting emotional, data-less claims that the sky is falling.

I’ll take a decade of success as a sign of sustainability over a package maintainer losing their shit as a sign of unsustainability.


Every single cent earned in my career depends on open-source software in one way or another. Maintaining a public repo or two can very well be considered part of your job, or purely a way of giving back.

I for one enjoy if anyone makes money with help of my code. I very well know I wouldn't have made it so far without many many others having chosen this way before.


I agree, but the number of people I know who could maintain a popular OSS lib for years is tiny. I certainly couldn't do it, nor would I want to; I don't have the right kind of personality for that job.

I'm happy to contribute code, or documentation, but that's not the same thing.


I couldn't help but think the same thing. Seems like an incredibly immature way to handle it. He could have easily set an end date and state nothing will be maintained beyond that date. It's not a good look.


Marak has a documented history of mental illness and downright odd behavior. Talented dev, troubled individual. There's a pretty concise video covering what went down, with some history, here: https://www.youtube.com/watch?v=R6S-b_k-ZKY


That video doesn't discuss marak's "history of mental illness". Do you have some information on this?



That escalated quickly.


I don't think he cares at this point.

I think this is a person that has been driven to the absolute end of their patience. If he's really barely been getting by, then I can only imagine the sheer frustration he must be feeling. Not only are there swathes of fortune 500 companies which depend on his package but don't contribute a dime, but he also had a company with millions of dollars in funding look at his idea and then weaponize his own project to beat him to market.


It seems completely insane to me to give away work and then expect compensation for it.


But completely sane to base your project on a package of code you don't control?

Or to lock up his Github account for exercising his prerogative onto his own code?

His behaviour is unusual, but that, you know, could change easily. It could become the norm just like that. Poof.


> But completely sane to base your project on a package of code you don't control?

Yes, certainly. I think this is sane because with OSS you can control it if you need to. Until then use what exists. It’s sane to use Linux in my project even though I don’t control that. I suppose it’s also sane to use Windows even though I don’t control that.


So you just leave Windows update on in production?

Brave man.


Depending on something doesn’t mean crazy. I depend on lots of OSS projects (and windows), but I have a management system where I control what gets updated and when.


You control what gets updated and when.

There we are.


His twitter page complaining that his github got locked up suggests he cares. I am not unsympathetic, but only because I am starting to see this more as an effort to undermine open source as a movement/cause.


Then again he seems quite proud of getting banned from Github in 2013 for "creating a script that forced people to watch a library that [he] created": https://youtu.be/varf6oWaFtU?t=202


Of course he acknowledged his reality:

"they can legally copy your Intellectual Property"

But let's be honest - if they hadn't copied his, they could have easily used the Ruby or Perl version. All he did was port a previous library.


what idea?




Still confused. faker.js is in no way an original project; ruby faker is 4 years older than faker.js, and I doubt it's the oldest.


Ruby version was based on the Perl version, and this author acknowledged his was based on those 2 versions (from the previous version's README):

    faker.js was inspired by and has used data definitions from:

    https://github.com/stympy/faker/ - Copyright (c) 2007-2010 Benjamin Curtis
    http://search.cpan.org/~jasonk/Data-Faker-0.07/ - Copyright 2004-2005 by Jason Kohles


so what was stolen exactly?


The Perl package is GPL. He took definitions from it, ported them to JS, and licensed them as MIT. "Is license violation theft?" is one of those questions like "Is piracy theft?", but it does undermine his insistence that he should be allowed commercialise the result.


It depends on whether you think copying a list of constants from what is essentially an “API” is copyright infringement.

It’s a contentious topic, but last I heard people here tend to believe it isn’t (as long as it’s Google copying Oracle’s stuff)


Those sound like two distinct things to me:

- Copying an interface for the purpose of providing a compatible implementation (what Google allegedly did - although of course part of the debate is whether they also copied any of the implementation)

- Copying parts of the implementation, not for the purpose of compatibility (what faker.js did - in this case, copying constants)


IIRC US copyright law gives very little protection to compilation of non-copyrightable data (look up copyright status of telephone directories). The argument is that it's just public domain factual information bundled together.

The faker libraries are all that -- compilations of mostly non-copyrightable facts.

I'm not saying faker.js did no wrong, just saying if hypothetically a purported copyright owner sued him, they would have a really hard time proving substantial infringement happened given the available precedents.


I think it’s a bad stunt that outlines an issue with mega corps using oss and blindly updating their packages. Or anyone blindly updating their packages really.

But I don’t see how GitHub has the right to suspend him and rewind his work. I mean, that has to be a copyright infringement if there ever was one.


A stall owner who deliberately injects poison into their apples should rightfully be thrown out of the Bazaar. Regardless if the apples are given for free or not.


But the ownership of the apples doesn’t automatically change hands just because their owner decides to poison them.


Forfeiture is the loss of any property without compensation as a result of defaulting on contractual obligations, or as a penalty for illegal conduct.


Where is the court judgement for that?


GitHub is private property, and they have the right to eject anyone from their property for any reason or even no reason at all.

And the code is open-source under the MIT license, so GitHub can do with it as they see fit.


The MIT license doesn’t give you the right to claim ownership over someone else’s code.

You can copy it and claim ownership over that, but it doesn’t allow you to remove the author from their work and then keep the work as your own.

The author has a right to delete it if they want to.


Are you saying that it's a violation of the MIT license to distribute MIT-licensed code if the author asks that you take it down? Because I can assure you, that's not how the MIT license works.


It makes sense if you replace GitHub with Microsoft in your sentence ; )


> at this point I just see him as holding other committers contributions as hostage

No he's not, and you're just trying to be outraged. Just fork the code if you don't trust him. Oh, but you don't want to take his place as the maintainer? Maybe deep down you know that there's still a difference between being in charge and submitting the occasional pull request?

Your actions contradict your words here.


I thought the author of Faker didn't ask for a new maintainer; he just asked for money. So I think this point is valid.

A maintainer's role could be to just accept the occasional pull request, if he doesn't want to make it a bigger project.


There is no more code. Even if someone wanted to fork or maintain a fork, they can't. Try to understand what's actually going on before making silly comments.


> Honestly the project would be better off forked.

I agree. It was an irresponsible prank. But Microsoft didn't fork the projects. They hijacked his digital identity on two of their platforms instead. I find that much more disconcerting than what this one maintainer did.


How did they hijack his identity? They just reverted to an older version instead of the malicious update, and suspended his account.

For them it must have looked like a malicious hacker took over the dev's account, and reverting the malicious actor's changes is exactly what I'd expect of a responsible custodian.

If you want to trick people into downloading malicious software, don't do it on someone else's platform.


> why would anyone want to deal with him after this stunt is beyond me

_faker_ is already gone from our project. The parts of faker that we were using were almost trivial to implement ourselves... probably should have done that from the start.



> Should I get paid for my multiple contributions to faker (I don't think I should)?

The biggest thing that excites me about the possibilities for the future of smart contracts is that creators of all kinds could automatically benefit from any work they do.

This scenario, for example: Any company that used faker.js to make a profit would have X% of that revenue feed back to the smart contracts. The creator would probably get the most, followed by the maintainer, then anyone who had a PR approved, then maybe people who submitted good bug reports. All automatically, and all directly in to everyone’s wallet.

Not only would this be an easy way for creators to get paid, it would also incentivize the maintenance of those creations.

And if you didn’t want to get paid, you could simply have funds directed to charity or opt out of your share entirely.


>The biggest thing that excites me about the possibilities for the future of smart contracts is that creators of all kinds could automatically benefit from any work they do.

>Any company that used faker.js to make a profit would have X% of that revenue feed back to the smart contracts.

Doesn't work in the real world. If you rely on companies to tell you what their profits are for a project, they'll just report $0 and continue not paying you, the same way they did before. Blockchain/smart contracts add no value compared to changing the license on your code, because either way you have to beg/sue them to get paid.

Even with a client who is well intentioned and wants to pay you, they will never want to link their profit to a smart contract and expose their financial data. Measuring profit, by the way, is very complicated and involves a lot of human interpretation: you can have a company worth billions of dollars, with top employees making millions per year, even if it has technically never made a profit.

And also, the risk of a bug in the smart contract emptying their account would be enough to stop any serious companies.


I totally agree that it does not work in the current version of the real world. We just don't have enough insight into all the moving parts, so there's no way we could properly compensate a long chain of people even if we wanted to.

If we were in a different version of the world (which we might end up in), it will be a no-brainer for someone to create a company that openly builds on the work of others and shares profit with those others. It democratizes the "I only need a small piece of a big pie" business model.


The biggest challenge here is measuring the quality of someone's contributions and adjusting compensation accordingly.

Not every commit is worth the same level of compensation.

Companies like Google/Amazon have to go through and adjust compensation based on talent and contribution levels, but how would that be done in Open Source?


Agreed. Even in my ideal scenario that's still a part that is difficult. My only thought is that that compensation amount is determined before the PR is accepted (or maybe even before it's asked for, like a bounty).

In [my] ideal scenario it's not the people that are vetted and compensated, but the work itself. A student who comes up with a particularly genius contribution could be compensated as equally as someone who worked in the field for decades and proposed the same contribution.

Note: it also requires an environment where work is recorded publicly so that plagiarism is essentially impossible, though that brings its own challenges.


The second someone puts out a 'faker.js' that does this companies will drop it, just as they ban AGPL software.


As I mention in a different reply:

> I totally agree that it does not work in the current version of the real world.


-3 points. Wow that's a strong response!

Quick notes:

- I'm talking about a hypothetical ideal future as I see it, and why that's exciting to me

- I don't think we're anywhere close to where we need to be for this to be realistic. More like 50+ years when we figure out patents, copyright, delivery, and fine-detail privacy-friendly monetization (person X purchased Y because of seeing a billboard and having conversations with sales rep Z.)

- I like moonshots. I think that setting big ideals as goals is a great way to get to where we can be as a society and as a planet. No one else is obligated to share that mindset or those goals (or even agree that those ideals will get those goals)


This sounds like a dream to me. How can it happen with open source software (where anyone can build and modify the software freely)?


In [my] ideal version of this, any monies made "down the line" from that open source software would feed back to anyone that helped build it. If 9 people contributed to a feature that a company used in their sales workflow, then each of them would get a percentage determined by the smart contract they reviewed when contributing.

If a founder came along and saw that a specific open source project could be the foundation for a business, then they could use it to build that business and feed back a portion of all profit as an operating expense. Then everyone involved would benefit from contributing.


Recent and related:

Important NPM package colors from Marak causing console problems at the moment - https://news.ycombinator.com/item?id=29861560 - Jan 2022 (1 comment)

Creator of faker.js pushed an update of colors.js which has an infinite loop - https://news.ycombinator.com/item?id=29855397 - Jan 2022 (1 comment)

Marak adds infinite loop test to popular colors.js - https://news.ycombinator.com/item?id=29851065 - Jan 2022 (7 comments)

Marak's GitHub account suspended after he erased his faker project - https://news.ycombinator.com/item?id=29837473 - Jan 2022 (53 comments)

Faker.js Erased by Author - https://news.ycombinator.com/item?id=29822551 - Jan 2022 (2 comments)

Popular JavaScript package “Faker” replaced with message about Aaron Schwartz - https://news.ycombinator.com/item?id=29816532 - Jan 2022 (3 comments)

Faker.js Has Been Deleted - https://news.ycombinator.com/item?id=29806328 - Jan 2022 (9 comments)


While I disagree with his move:

1. It is totally within his prerogative to mess up the package he manages, but not to install malware into it. I am on the fence about whether this would count as malware (because of the infinite loop), but my leaning is that it is not.

2. Github is, IMO, breaking any trust that I might have had by assuming control of the package, removing the last commit and keeping it online.

If they feel they have a reason to close it, they should. And, they then should fork it and put it up under their own name. Which would generate lots of bad press, but is at least within their rights.

3. I don't quite follow the logic of Github closing his account. Yes, I know that when you use $megacorp they WILL eventually close your account and make you sad. Google did it to me once and shut my gmail account. Tough luck for not being a millionaire.

4. You should always lock your npm dependencies to a specific version, and only update once you can see what the results are.

5. Yes, the large corps really should be paying for all the work that is being done to help them, despite their hostile attitudes (being typed on a Mac. Guess who wrote the docs? Not Apple.)

6. Github has some perfectly good competitors, both SaaS (gitlab.com, sourcehut.org) and self-hosted (gitlab, gogs/gitea, etc)

Using them empowers you, and removes a wincy bit of leverage from $megacorp. Enough people follow suit, and the world will be a better place. And besides, Copilot won't steal or reveal your code.

7. There are more effective ways of making his point, but he acted out of emotion, not logic.

Eg. Perhaps he would have been able to change the license and demand compensation from all for-profit users? That would provide both cash and publicity.

As a rule, it is always better to act out of logic than emotion, but us humans seem to have an issue with this. ;)

8. Is anyone stepping in to help the author get treatment?
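
Re point 4: pinning means dropping the semver range operator in `package.json` (and/or committing a lockfile). A sketch; the version numbers here are illustrative:

```json
{
  "dependencies": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```

With a caret range (`"^1.4.0"`) a fresh install can silently pull a newer, possibly sabotaged release; an exact version plus a committed `package-lock.json` will not.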


I pretty much agree with you, especially on the point that there are far more effective and nuanced ways for the maintainer to have made their point, like relicensing (imagine what would happen if the license suddenly became AGPL and everyone at big corps automatically updated without checking the new license; it would be a ruckus for both legal and engineering, and the author could offer a non-GPL version for money and probably rake in more than a six-figure salary… but I digress).

What I find most interesting, though, are the ethics. The way I interpret (1.) in the context of this thread (other people have said similar things) is that the community considers the author here to be acting ethically, just annoyingly. When someone submits dubious patches to the Linux kernel and wastes everyone's time, it's unethical (kernel maintainers are the victim). When researchers submit dubious CCPA requests and waste people's time, it's unethical (webmasters are the victim). However, when a package maintainer throws a temper tantrum, fucks up everyone's stuff, and wastes a bunch of time, they're within their rights and it's pretty much just annoying but totally okay, because the maintainer is the victim of something (either corporate greed or mental health issues, or both). Our society very much puts a premium on protecting victims.


> Our society very much puts a premium on protecting victims.

… is this a bad thing?


Not in isolation, but it's important to understand. It can be, and is, abused by people who know how the meta game is played. The biggest thing for me is that it pretty frequently results in an internally inconsistent worldview, and that is a bad thing.


> it would be a raucous for both legal and engineering, author could offer a non-gpl version for money and probably rake in more than a 6 figure salary… but I digress)

For these particular packages? Very unlikely.


The GPL affects the entire work. If some fancy-pants proprietary shop got caught publishing a release that incorporated an AGPL `faker`, then a GPL source request could compel release of all the source code. This would be a legal nightmare for some places. Have you heard what happens when an acquirer finds a GPL bomb during due diligence?


I'm very familiar with the GPL, I just meant folks would rather find non-GPL alternatives. For a library that wraps a few ANSI terminal codes to enable color and provides a bunch of fake data for testing, I think finding alternatives (or just writing them yourself) wouldn't be hard.


I guess the OP means that it would be too late to replace the AGPL code if the company has already published a version including it. My non-expert opinion agrees with the OP.


I seriously doubt that courts would side with someone who tricks people into downloading code with a different license.

Especially since changing the license of a project from MIT to AGPL doesn't suddenly revoke your rights that you had under the MIT license -- only new code would be affected by the license change.


Yes the solution would be to fork off at the last MIT/Apache licensed version. I’m no lawyer either and would actually be interested in seeing the court case play out.


I wholeheartedly agree with everything except 5)

Faker has (had?) an MIT license that basically has no restrictions. $megacorps have every right to use it any way they want.

Why not change the license then? Why not amend the LICENSE file with "free to use unless you're big tech" clause? Correct me if I'm wrong.

"big tech should be paying for all the work that is being done to help them" -- They do offer their services and usually you don't need to pay for them _directly_, rather with personal data and opensource libs. Whether or not is it fair trade -- each one decides for their own.


> Eg. Perhaps he would have been able to change the license and demand compensation from all non-profits? That would provide both cash and publicity.

Then everyone would use the old code under the old license and he’d have even less of a chance to get funding.


Agreed. That's why I have self-hosted gitea/gitlab.


Maintainer of Chalk[0][1] here, a very popular alternative to colors. Happy to help anyone that would like to port away from colors to chalk, or who might just have general questions about terminal colors.

Just reply here and I'll see them.

[0] https://GitHub.com/chalk/chalk

[1] https://npmjs.org/package/chalk


I use the hell out of chalk and I just want to say thank you for creating/maintaining a solid product. Seeing this is scary to us who still unfortunately have dependencies with Node/npm.

Thank you so much for what you do.


I love Chalk! I didn't realize it came after colors. Thanks for maintaining this package.


Not a question, but just want to help developers who need it: pkg.land (beta) finds similar packages on NPM.

Here are the links for colors and faker:

https://pkg.land/package/colors (chalk is top suggestion!)

https://pkg.land/package/faker


This is very cool. Sad that it only works for NPM packages. Would love this for PyPI packages


Is a codemod available for anyone who wants to migrate?


Not at the moment, but the migration should be pretty straightforward if you don't use the more elaborate color functions from `colors` (e.g. we don't have things like "zebra").

If you use `require('colors/safe')` and only use the basic colors, then Chalk is more or less a drop-in replacement.

If you use the property values (e.g. `"my string".green`) then you'll need to change that to `chalk.green("my string")`.
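
A minimal before/after sketch (the one-liner at the end is only an illustration of the standard ANSI escape codes such libraries emit, not either library's actual implementation):

```javascript
// Before (colors, property style):
//   require('colors');
//   console.log('error!'.red);
//
// After (chalk, function style):
//   const chalk = require('chalk');
//   console.log(chalk.red('error!'));

// Roughly what either call produces: the string wrapped in standard
// ANSI SGR codes (31 = red foreground, 39 = reset foreground).
const red = (s) => `\u001b[31m${s}\u001b[39m`;
console.log(red('error!'));
```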


One thing worth noting is that chalk 5 is ESM-only, so if you're using CommonJS, you'll need to use chalk 4, or transpile it with Babel, or something…


Never used colors.js, but I've used chalk in my OSS projects and love it. Guess now's a good time to show appreciation to maintainers, so thank you.


I do free work for open source _a lot_. I have a rather controversial opinion on this. I don't think I should be paid for my work because the moment money comes in I have to be responsible for the work I was doing for fun. I enjoy building cool things others can use for free and I want to reserve the right to respond to feature requests with a simple "PRs are welcome! :)".

I get my paycheck from my employer and I have always been successful convincing my employers that I do open source work on the side for my own interest.


I have a different opinion, but I respect yours; and that's the basis of open source. There's many, many vastly different reasons why people do open source, and partly probably why we haven't "banded together to get paid" so to speak.

For me, if I got donations that'd be great, but it'd not create a different sense of responsibility for my work, I'd keep doing exactly the same (unless I was explicitly hired as a contractor, but that's different). I do not particularly mind not being paid though, it's me who is putting my own code out there for people to use, for free!

Now I make mainly open source JS libraries and use the fairly liberal MIT, if I worked in end-products where there might be a bit more of a "competition" or a big company might literally repackage/resell your product I might release those under a different license, like dual-licensed or similar.


So, in essence, I think you're saying that you choose a license depending on your goals for a project. Which seems reasonable.

More reasonable than folks who release under a license, then complain when others do exactly what the license allows.

I see a lot of "but give nothing back" complaints from some open source folk, but I've looked hard and I don't see a "give something back" clause anywhere, at least not in the licenses I'm using.

What I do see are "give something forward" clauses, which explicitly target customers, not suppliers.

I like and use open source as much as the next guy, but for my day job I code for money. Alas so far Open Source pays no bills.


> I don't see a "give something back" clause anywhere

You're right about that, but there isn't a "this software works" clause either :D


>donations

I think there is a big psychological difference (and maybe a legal one?) between accepting donations for something you put up for public access vs. accepting payment for doing work.


If so would that also impact 'backing' like KickStarter and Patreon?


I'm really not sure. I'd guess that, say, YouTubers who provide extras to Patreon supporters often feel a much higher sense of obligation.

As for the legal aspect, I'm speculating based on just a bit of knowledge about contract law: If you give something as a donation, I don't think that constitutes a contract. If you give something with the promise or expectation (by all parties) of something in return, then it begins to look very much like a contract.

I could imagine a lawsuit against a Patreon recipient if they collected $1m while promising something to supporters and never delivered. (I'm not sure people would bother with a lawsuit, but I could see grounds for one). I think it's a bit different if the exchange was more informal, like "Hey I'm making X for my own enjoyment, I hope you like it too. Click here if you want to send me coffee money" and then collected $1m on that but never finished the project.

There's a lot of subtleties though, and plenty of contract disputes revolve around whether or not a contract existed in the first place (written, verbal, or implied)

So, if you start saying "I will maintain foss project X if supporters average $Y/month" then I think that really muddies the water on legal obligations in many jurisdictions. It may not matter whether you call them donations or not. After all, even with donations to a non-profit there can be an expectation of return: a plaque on a wall, name on a building, etc.


>I have always been successful convincing my employers that I do open source work on the side for my own interest.

What do you need to convince your employers of?


Use my working hours contributing to Open Source.


AITA for thinking that if you develop open-source software and your license permits anyone to use it for free, then complaining about no compensation is not a valid complaint?

I totally understand that billionaire corporations use software like this for free. But the software maintainer has explicitly allowed _anyone_ to use it for free. If you don't want them to use it for free, license it as such.

What am I not seeing here?


> But the software maintainer has explicitly allowed _anyone_ to use it for free. If you don't want them to use it for free, license it as such.

I support freedom of speech too; I explicitly allow you (or anyone) to be an asshole, but if everyone is an asshole all the time, I might pick up my toys and leave.

That's right, I prefer a world where people are nice and do nice things voluntarily, out of respect / compassion / desire to support / etc., instead of doing things a certain way because a law or contract requires it.

And so is largely my stance on free software (and more.. music, games, etc.). I would not expect most people to pay or donate, and it would be impossible to write a license that requires it without discriminating against those who can't or just don't find it worthwhile. Free software works best when it comes with no strings attached.

However, if something I made got really popular and thousands of companies started relying on it, I'd expect to see at least some support. And if everyone just kept taking but never showed any support, it's quite possible I'd get burned out on it, especially if popularity also came with a lot of demands and entitlement. And if everyone felt entitled to just take and never give, I would also feel totally entitled to replace my own project with "thanks for all the fish" when I'm done with it.


That's fine, but then the downstream shouldn't complain either when the code breaks, whether intentionally or unintentionally. The contract on paper disclaims all liability after all. There is a social contract and then there is the literal contract. A lot of commenters here seem to be willfully obtuse or simply ignoring the former.


It still doesn't mean you can't call the guy out for being an asshole. However, that's the only relief you'll get in matters such as these. Other avenues would be to tweet about it and make it known that this is what you can expect from the same guy in the future so avoid him for future work as he won't be acting like an adult.


That also does not mean you can't call out the corporations that are leeching off open source...

I find it ironic that people are more upset at this guy for complaining about corporations than they are about the corporations leeching...

The dev and hacker communities have really gone full-on #HailCorporate, haven't they. Where did my anti-establishment libre community of the '90s go... I long for the good old days.


We're not upset at the guy for complaining about corporations. We're upset at him for pulling a stunt that may have hurt some large corporations, but mostly just caught a lot of small projects in the crossfire.


Watching people piss into the wind is fun until you get some on your shoes.


He is leeching off a "corpo" by hosting his library on a "corpo" site.


There's a difference between stopping to give away your stuff for free and acting maliciously.

If I give away donuts for free and stop at some point, you have no right to complain. If I poison the donuts because you should've really thrown money at me for those donuts that I explicitly marked as free, I think you could complain after all.


If someone was offering free donuts with a sign that looked anything like the MIT license, you would be a fool to eat them.


If you don't take food as an example, see it like this: If I gift you a hammer, MIT disclaimer and all, you should not complain if it does not work or falls apart on the first few uses. Totally fine.

If I rig the hammer with a grenade in the head, so that it will violently explode the first time you use it - do you still think this is covered under the terms of the MIT license?


I agree with you, but I think this case is more like one day you go to borrow the hammer and it "falls apart" (the library is useless as it enters an infinite loop). A grenade would be an 'rm -rf', or trying to steal your user's data, which they could have done, and would have crossed a line.


My parents always taught me not to take donuts from unknown people, especially when they're free. It's common sense, and corpos bear all the fault for taking an easy fix to save their own developer hours at someone else's expense. Now, when it turns out there are consequences to this, the corpos aren't that happy.


With that argument, you might as well say that open-source/free software shouldn't exist.


My argument is that you shouldn't automatically trust it just because it's free. You shouldn't stake your entire infrastructure, and perhaps your life, on it. If you do, there is no one at fault but you, because you passed all the responsibility to someone you have no control over. You are not entitled to protection and safety from the developer just because you said you rely on them; they are not going to carry your burden when it's not their job to do so.

Either you stay cautious, in which case you maintain your own forks for your own business or reinvent the wheel so you don't rely on others that much, or you admit that you can't just reject this dependency, in which case it becomes either public infrastructure or a "donut business" in its own right, and both should be financed as such. Take Linux as an example: Linux is backed by corporations and financing because everyone understands how crucial Linux is to our lives. People took all the necessary steps to guarantee that the kernel dev team is not going to disappear at any moment.

This is not the first time this has happened, and it won't be the last. For some reason people think that open-source devs owe them something just because they had the right to bring their projects into existence. The JavaScript ecosystem especially suffers from this because of its inexplicable obsession with depending on packages that contain 1-2 lines of code at best, packages that can disappear at any moment.

Faker dev acted maliciously, but no one could guarantee that he wouldn't. No one was there to care about his mental state, or his wallet contents, and only relatively small companies and few people donated to his project, something he worked on for over a decade.

Sure, you can blame him all you want, but that won't undo the damage. If you rely on something maintained by an individual, you have to take into account that this individual is an actual human: this human actually exists and, like any other human, is subject to free will and uncertain futures, and whatever risks come with them. If you don't, this is what happens to you.


I have no contention with the argument for due diligence and self-preservation. It's your comparison of OSS with potentially poisoned donuts that strikes me as the same facile argument made by the Not Invented Here types. It's one thing to say your infrastructure is your problem. It's another to suggest that anything free as in free beer is ipso facto too good to be true. That's an unsubstantiated, reductionist take.

The Linux kernel was not always as well financed as it has become. Before its recent about-face, Microsoft financed attempts to stifle Linux. Linux's continued existence has always rested on the merit of its utility, whether to hobbyists or to corporations.

The Faker dev may not owe the rest of the world anything, just as the world doesn't owe anything to him. But what about those who have paid or contributed to his work? Are you of the view that anyone who sincerely invested their money, time, and intellectual output into Faker deserved to be suckered? Those people are human beings too. They deserve something for their investments rather than being used as unwitting pawns in someone's mental-breakdown-induced prank.

Taking your view of security to its natural conclusion, no person should use a computer if he/she didn't bake the silicon wafer himself/herself. Otherwise he/she shouldn't complain if he/she becomes a victim of fraud or misrepresentation.


> That's fine, but then the downstream shouldn't complain either when the code breaks, whether intentionally or unintentionally.

There's no contract that says complaining is banned.


That's the point. If the dev's complaints are invalid because the contract doesn't say so, then the downstream's complaints should also be invalid.


Except the license does say so


The license does say what?

The license doesn't say any payment is necessary.

The license doesn't say new versions will still work.

The license doesn't say anything about complaints.

If it's valid to complain about code breaking, it should also be valid to complain about lack of payment. These complaints are outside of the legal mandates of the license.


It isn't valid to complain about the code breaking. At least not with any consequences. Because the license explicitly says that the software is provided without warranty.

Likewise, the license explicitly says that the software is provided for free, therefore it isn't valid to complain about people not paying for it.

Nobody's on the side of the big corps here, but "why aren't you paying for this thing that I've given to everyone explicitly for free" seems nonsensical.


But plenty of people are complaining about the code breaking.

I think both complaints are valid. Being legal and being beyond complaint are very different. "explicitly for free" is the legal contract, and it's okay to have norms that extend beyond that.


> Nobody's on the side of the big corps here

The point is, people apply legal logic to the payment and moral logic to breaking the code.

You can either agree that this guy is an asshole but these companies totally deserved it for leeching off his work, or you can say that they're both in their right according to the license.

But the argument that "he's an asshole because these companies had no obligation to pay him" is extremely dumb and hypocritical, and that's what many people are saying.


No, I apply legal to both. Payment is not required and breaking it intentionally to do harm is illegal. Intent matters.


Taking the disclaimer in the MIT license for example:

> THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

This makes it pretty clear: The author is NOT liable.


The license says the code is "AS IS".


Not sure that calling their complaints 'invalid' is really the most productive avenue here. They've maintained it for free, and have stopped doing so. I'd say their complaint that they're not getting paid enough is about as valid as the complaint that the software you've been using for free has stopped being maintained.


You are not TA here, and most people feel the same way. But it seems this particular author of OSS has.... "issues" of which this is really the least.


Sure. But that works both ways.

If you're not going to support the development in any way, then you don't get to have an opinion on what should be done or complain about bugs.

Honestly, even then, I think we're stretching it. Maybe at the surface it seems like it should be that way, but this isn't just some tiny package that a few companies are using. We're talking about tens of millions of downloads every week. This package has essentially gone and become public infrastructure, much like log4j.

The companies relying on it should realize as much and understand that it's in their best interest to ensure that the package keeps on being maintained.


So nobody should write OSS because you can’t complain about the fact that you chose to make it free. And nobody should use OSS because they can’t complain when the software is broken. I think this says a lot about open source software.


It's all freedom of speech. You can express yourself. It doesn't mean you aren't shouting into an uncaring void though… which is what I imagine most of the HN audience is when you're asking for sympathy when you released your software in a libre manner.


Yea it sounds like baiting with something free to get users, then switching to solicitor mode.


There are people who think its reasonable to take all the pennies from the "take a penny leave a penny" plate because "that's what it's there for."


OSS licenses generally don't have a "leave a penny" type of clause in them.

Writing software under some variety of a free license is essentially a donation. Authors shouldn't expect to get back anything: Imagine a person donated a ventilator machine to a hospital and it saved a few dozen lives and helped out hundreds more. It would seem strange to me if that person was later ranting that the combined net worth of the people helped by that machine was in the $millions and yet they never got any of that money.

That's how it sounds to me when foss authors complain about not getting paid. It sounds like the subtext is, "If I had known how useful and popular this would be, I would have charged for it." Which seems like a strange attitude to have when making something you give away for free in the hope that others find it useful.


The difference is that there aren't a finite number of pennies on this plate. Software can, by definition, be copied an infinite number of times.


I think you've missed the point.


I assume you mean "take a penny" = "use the software for free" and "leave a penny" = "contribute back" (money or time).

With the plate there's a sign that says both "take" and "leave". In this case there is also a sign (the license) which only says "take". The intent is clear in both.

It's further not comparable because taking a literal penny deprives the next person of it, whereas here using the software for free costs the other users nothing.

But I'm not trying to nitpick the analogy, I only want to point out that the obligations (both social and contractual) are not the same, and neither are the consequences.

Again, if he wanted to make money from his software, he should've put it in the license and charged the big users for it.


It's very normal for packages to have donation requests, and for developer pages to have donation links.

The metaphorical sign does say take and leave, and the take vastly outweighs the leave.

> Again, if he wanted to make money from his software, he should've put it in the license and charged the big users for it.

A developer shouldn't have to ruin the open-source nature of the software to get there.

Maybe if we could invent some standardized almost-open-source license that doesn't terrify companies we could get there, but we can't even seem to define "commercial" in a way that doesn't break everything. Better still it would be nice if we could use social pressure to get companies to donate a small fraction of the money open source saves them.


This is how I feel about people who create small but popular libraries and then complain about the workload. They're squatting on a valuable and finite resource, attention, that they don't like or appreciate. If you don't want it, sunset your library and let me write a replacement. I'll happily rewrite faker or left-pad for free, and so will hundreds of other developers.


It's time for someone to make a Redhat, but for "safe" open source software libraries. My big enterprise would sign up for it in a heartbeat. We'd pay for access from an alternative NPM registry where everything is at least semi-vetted - someone at least looks at diffs before new versions get updated and made available. Sure, the "safe" repo wouldn't have as nearly as many packages as the main NPM repo, but if it had the most popular packages that's probably fine.

If I'm developing an app and wanted to use something outside of that "safe" registry, maybe I could, but I'd have to have a longer conversation with my enterprise's security org about why I'm using some new package that's not in the "safe" registry - and I'd probably have to pin or import it into my org's private repo.

It's up to package authors and this new Redhat-ish company that manages the "safe" repo to figure out how to split revenue back to package authors. The new company is definitely providing a service and should get to keep a cut, but hopefully there's enough left over to give some to the package authors - and that's incentive enough for the package authors to want to get their code included in the "safe" repo.

My company's security org is doing code scans/static analysis and version tracking and software BOM work of everything we're building, but ultimately none of this makes sense if I as an app developer can just add whatever I want and it's assumed to be safe if it passes the scans and doesn't have a CVE listed somewhere. We'd happily pay if someone was willing to try to vet packages (and take on some liability if they're wrong)
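For what it's worth, the client-side plumbing for this already exists in npm; a minimal sketch of routing installs through a vetted mirror via `.npmrc` (the registry URL here is hypothetical, not a real service):

```ini
; .npmrc -- route every install through a vetted mirror
; (registry.vetted.example is a hypothetical service)
registry=https://registry.vetted.example/

; or scope it, so only @vetted/* packages come from the mirror
; while everything else still resolves against the public registry
@vetted:registry=https://registry.vetted.example/
```

The configuration is the easy part; the open question is exactly the one above: who pays the humans reading the diffs.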


Use a language where you don't need to pull in 100 dependencies to create a useful application/service.


It's not a language problem. It's a cultural problem.

Last I checked create-react-app pulls around 1k transitive dependencies. Can't really blame JS for that, can we?


This whole situation reminds me of that post a little while ago

> I will pay you cash to delete your npm module

https://news.ycombinator.com/item?id=29240952


So finally F.L.O.S.S. pays out. A shame that the dev did not apply for some compensation. All in all, not very rational of him.


Keeping dependencies loosely coupled and reusable has been best practice since forever, but it is only really with NPM that it has become the default.


But in practice it's more like dependency hell, because you pull in the same dependency in N versions and you can't update transitive dependencies in any sensible manner. And you get a lot of libraries that are like 4 lines, computing basic operations (like that "isOdd" lib... I mean, x % 2 !== 0?).

So you won't gain anything from this type of dependency management, because it is way too complicated for a developer to understand and manage.
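To make the triviality concrete, here is roughly the entire payload of such a micro-package, inlined as a sketch (the real `is-odd` on npm adds some input validation, but nothing more):

```javascript
// Everything an "is-odd"-style package gives you, in two lines:
const isOdd = (n) => n % 2 !== 0;
const isEven = (n) => !isOdd(n);

console.log(isOdd(3));  // true
console.log(isEven(4)); // true
```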


unfortunately almost all languages are moving towards smaller, not bigger, standard libraries.

I do think there is space for someone to ship an "unofficial meta-package" for languages like JS to wrap all this stuff (I tried to do an analysis of npm once to figure out what would make sense but got lost in the weeds....)

Python (for me one of the gold standards on this front) has been hyping for a future with a much smaller standard lib and it makes me sad.


Cool, use a lang without a developer ecosystem. Got it.


> My big enterprise would sign up for it in a heartbeat.

People keep on claiming there is a need for a corporation like that. But Sun didn't really make any money with Java and had to sell to Oracle. Now all these Silicon Valley startups complain about Oracle costs.

Eventually, Microsoft will pull the same thing with NPM (and Github), they didn't acquire the package manager just for creds, they will make it profitable.

As for Redhat they are owned by IBM now.


I could see a governmental effort being appropriate too. A lot of critical infrastructure depends on open source as well. Agencies like the German BSI should embrace and invest in open source much more strongly.


But "vetting" still relies on the free labor of others, and it doesn't really change the business model or rectify the underlying problem with open source.

You really need an organization which sponsors and directly hires coders that are maintaining critical infrastructure.

(Of course the npm world is a bit insane where stuff as trivial as leftpad can be critical infrastructure. Don't really think someone needs a $200k/yr salary to maintain just that)

There's an interesting bit of social psychology here where the top reaction to this isn't "let's try to sort out how to pay all the people who are doing all the free work" (and I'm really thinking more of the log4j and OpenSSL people and the whole broader ecosystem problem this highlights) but instead "how do we keep being exploitative and just outsource the hard job of vetting everything?" I'm pretty sure Google will get some AI people onto the problem, though; there's clearly a business model there.


The use case I’ve seen similar to this has been Artifactory. If a dev wants to use a package, then a specific version gets pulled and hosted in the internal Artifactory repository, and all build tools then pull packages from it.

The way I’ve seen people handle ownership (and it might not be the best way) is that if a dev wants to use, let’s say, ‘colors’ from npm, then they/their team takes ownership of that package internally. I guess an issue would arise if the big co is doing development on a public repo, since then they’re forced to use npm/dockerhub/etc.


I think this could be done as a community.

Imagine if npm allowed organizations to publish "vetted pointers" to packages. So redhat could publish a "{redhat}colors", which would include only the vetted versions.

When installing, you could choose to setup your installation to allow "redhat-vetted" versions only. And that would apply even to sub-dependencies.

This becomes a community tool if "redhat" could tell npm to vet anything vetted by another org.


Organizations already solve this by mirroring repositories with tools like JFrog Artifactory.

New versions of packages are verified, approved, and mirrored.

(The revenue split part isn't really a part of that, but you're not really guaranteed revenue as soon as you choose an open source license. You have to make some kind of value-add like support or cloud services as a complementary upsell.)


I will happily provide this service to you. In fact, I already created a Linux distro specifically for rolling back all the brain damage and misfeatures in mainstream Linux that have crept in over the past 15 years. It contains just over 1,000 packages, with not a few patches/bug fixes by myself and others.

If you want to use this distro in your own company with full time support and continuous upgrades by yours truly, my yearly salary will be 220,000 USD, please--not counting any donations you decide to make to individual software authors to ensure your use case is covered by their software, as the above amount covers only my personal salary and considerable expenses.

A large discount is potentially available should other corporations avail themselves of this opportunity and also help cover my salary and expenses. Contact me at [email redacted].

I'm not holding my breath that anyone will take me up on this opportunity, so this distro will just have to remain for my exclusive use only, I'm afraid.


It's a bit wild that the sum total money spent on salaries for engineers handling potential problems stemming from this or defending against the possibility in the future could probably have covered paying the maintainer a living wage many times over.


Tragedy of the commons, shortsightedness and misaligned individual incentives.

Individual contributors in large companies, especially, would want their companies to fund FOSS projects they use. But approval processes are generally extremely complicated and there's nothing to gain internally by doing it. And we're talking about money that these corporations spend each millisecond. They barely need approvals for many other activities costing 10x, 100x in other domains.


Most of the companies that I’ve worked for have funded the FOSS that we used. By allowing me and my colleagues to contribute features we needed, or fix bugs that were affecting us. The core maintainers probably never knew these PRs were funded at an hourly rate paid for by some big bank, and sadly quite a few of the projects that I’ve contributed to have rug-pulled into some sort of non-FOSS enterprise product. We all benefit from FOSS, including all these disgruntled maintainers. The FOSS way should be to pay it forward, to contribute to projects where you can. If you’re expecting to get paid for it, it’s not FOSS. Deciding you can’t maintain a project anymore is fine, but pulling it out from under the people that are using it is incredibly anti-FOSS.


> The FOSS way should be to pay it forward, to contribute to projects where you can.

In theory, this was enforced by copyleft requiring derivative works to also be free software. In practice, companies use software with permissive licenses instead, because then they can reap the benefits without any requirement to pay it forward.

> If you’re expecting to get paid for it, it’s not FOSS.

Being paid for your time has nothing to do with whether your source code is public or what freedoms users have when using your software. Conflating free software with volunteer labor is exactly what leads to situations like this one, where the author's business based on faker got copied wholesale by a competitor who simply ignored their attempts to reach out.

> Deciding you can’t maintain a project anymore is fine, but pulling it out from under the people that are using it is incredibly anti-FOSS.

The mechanisms that allow a rug-pull are entirely choices made by the users of the libraries for their own convenience; the author did everything needed for you to download a working copy and use it in perpetuity. It's your fault for choosing to rely on NPM, choosing to not cache your dependencies, and choosing not to pin your dependencies.
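Concretely, pinning is just dropping the semver range prefix in `package.json`; a sketch (the exact versions are illustrative, though as far as I know colors 1.4.0 and faker 5.5.3 were the last pre-incident releases):

```json
{
  "dependencies": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```

With a committed lockfile, `npm ci` in the build, and ideally a caching mirror in front of the registry, an upstream rug-pull can't reach a deployment until someone consciously bumps the version.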


Copyleft software isn’t free, it comes with a very hefty price tag. You don’t pay it forward by handing over all your IP. You pay it forward by contributing back. I have no moral qualms about using OSS in any project I’m working on, commercial or otherwise. Because I have published my own libraries for anybody to use, and contributed a huge amount of PRs to the software I use. When you publish something with a permissive licence, it stops being yours, but you benefit from having a huge number of people improve it for you. That’s how it works, that’s how it gets paid forward.

The OP is also attempting to use a proven failure of a business model, and then throwing a tantrum when it fails. Sure he’s within his rights to do so, but he has no moral high ground here, and I don’t think he’s entitled to any sympathy for adopting a business model that everybody knows for sure doesn’t work.


Your conception of free software is not how free software is generally understood; free software is about the rights of the users, not the expectation that people contribute to it. Sure, if you _define_ free software as being about a lack of ownership by one person and expected contributions, then you can criticize this.

But what happened here is that free/open source software doesn't have a consistent stance on paying maintainers or contributors, and this author feels that it's unfair and (potentially in the midst of other personal issues, it seems?) took advantage of a problem with how the ecosystem pulls in dependencies to complain about it.


Working on someone's software that they make no money from isn't "paying" it forward or in any direction.

> it stops being yours, but you benefit from having a huge number of people improve it for you

It stops being yours, but somehow everyone who works on it can say they're helping you. This isn't fair. You don't have to pay for it, but fixing and adding features to the software that you use to make a living can't be counted as charity work.


FOSS has never been about getting paid to write software. It’s not a charity, it’s a contribution to a community. I contribute because I benefit from being part of a system that has contributors in it. The people who expect direct compensation for their contributions do not uphold those values, and are deteriorating the integrity of the FOSS system itself.


> In theory, this was enforced by copyleft requiring derivative works to also be free software. In practice, companies use software with permissible licenses instead because then they can reap the benefits without any requirement to pay it forward.

If you want to fix this, stop contributing to permissively-licensed software. If you have a change you want to make, make or find a GPL fork of it and contribute it to that instead.


Individual action is not gonna solve anything, you have to understand why people choose permissive licenses in the first place:

- They're contributing while at work, and work only allows permissive licenses
- They're familiar with permissive libraries because of the previous point
- Permissive licenses are perceived as simpler
- They've been pushed away from the free software movement by the FSF/Stallman/Linus
- They don't think copyleft is the right form of enforcement


I don't think anyone is confused about why people prefer permissive licenses. People enjoy benefits without costs or responsibilities, unsurprisingly.


Fair! I didn't finish the thought, which was to address those shortcomings by either making free software work better for those needs (e.g. work with tech unions to negotiate guaranteed funding of projects used by companies) or by making it less attractive to use permissive software (e.g. via regulation).


I like these arguments.

As a matter of practicality, commercial entities using a maintainer's work should donate to maintainers to incentivize them to, well, at least not go rogue, or to be on their good side when they do go rogue. Companies pay their employees to incentivize them to act in the interests of the company. While this isn't fool-proof (principal-agent problem), it lowers the odds of a pissed-off employee having the will/self-righteous fury to pursue something more aggressive than resigning in a huff.


For clarification, you're not referring to "contributing features [you] needed" or fixing bugs on the clock as funding? That's probably a nice thing, unless nobody needs a particular feature except the people who fund you at an hourly rate, but it's not "funding the FOSS" you use. That's when you give someone money. You can't eat features and bugfixes - without the rug-pulling you're decrying here.

FOSS is licensing, not religion. Rug-pulling a project from people who are enjoying using it isn't "anti-FOSS" IMO. This may not apply to you, but for all the contrast that OSS people project between their pragmatism and Free Software people being insane religious zealots on a jihad against money, OSS advocates seem to imbue a lot of flaky new agey spiritism into what FOSS is or isn't.


The author didn’t write all of the code, though. The code has a long history (including in other languages) and many contributors.

Why should this one developer collect payment but not everyone else who contributed it?

Regardless, it’s ridiculous to give something away openly under a permissive license and then later get angry when people use it exactly as you license it.


> Regardless, it’s ridiculous to give something away openly under a permissive license and then later get angry when people use it exactly as you license it.

Doing your best to live in a bad system does not invalidate the complaints you have about that system.


By publishing free, open source software he wasn't "doing his best to live in the system". That would involve exchanging his labor for currency.


> That would involve exchanging his labor for currency.

That's the goal. Or at least one goal. But you can't just press a button and do that.

Being in charge of, and an expert on, open source software can be a way to get people to buy your labor, but it's much harder than it should be. Instead many companies will demand you work for free, because it's open source!

Also trying to do something good for the world shouldn't make it so hard to make money. The companies get value but don't want to pay even a pittance.


> Being in charge of, and an expert on, open source software can be a way to get people to buy your labor, but it's much harder than it should be. Instead many companies will demand you work for free, because it's open source!

It's hard to get paid when you decide to give your work away. If only there was some way a person could enter into a contract in order to guarantee payment in exchange for their work. What a radical idea...


1. Why is it so hard to sell additional labor on the open source project? That's the purest form of exchanging money for services.

2. You shouldn't have to take the option that hurts everyone else just to get paid.


> Why should this one developer collect payment but not everyone else who contributed it?

Exactly, no one said _only_ the lead maintainer should be compensated. _All_ of the labor, not just the labor that happens to have a day job that benefits from it, should be compensated. That includes non-coding labor like support or community management, too.


That’s just a software development business. Open source exchanges labor for conditions on the use of the product of that labor. If you want to exchange labor for money, then exchange labor for money.


The maintainer gave their work away for free. By definition, and by explicit license, it isn't worth any wage, much less a "living wage many times over."

The maintainer wants to have their cake and eat it too - they likely believe in FOSS for moral reasons yet consider it immoral when companies take their software and use it freely under the terms offered.

If you want people to pay you for your work, don't give it away for free. If you give it away for free, don't have a temper tantrum if someone gets rich off of your work without compensating you, because those were the rules you chose to play under.


Living wage? Haha, more like 100 people's living wages.


In America, most unskilled software developers make somewhere in the $80k to $140k range. A living wage is around $20k for absolute bare minimum essentials. Skilled devs still get around $200k.

Point is, maybe 10 people. And that’s if you like ramen.


I don't know where people get these crazy numbers -

Even in America, outside of the coasts and outside of FAANG, making $140-$150k+ as a senior developer is very good (and compared to almost all other industries is absurd). salary.com, which doesn't just rely on self-reported info the way levels.fyi does, reports the median salary + bonus for senior software engineers as $120k:

https://www.salary.com/tools/salary-calculator/senior-softwa...

Outside of the US, even in more expensive places in the EU, the equivalent of $100k for a super-senior lead architect would be Very Good. I don't know a single SWE in the midwest of the US - including senior embedded systems engineers working on medical devices, senior firmware devs working on networking equipment, or any web engineer - who makes more than $175k, and I know plenty of Very Good senior full-stack web devs that make $125-$150k.

$125k a year is still a top of the top salary in the US, so don't cry for them tho


They were using extreme numbers to say that even then you barely hit 10x.

That's why the number they used for a living wage is so low too.


I just don’t think HN is interested in this anymore. There was a time. It’s gone now.

Dumb comments saying software engineers make 100x a living wage (as if this would be a bad thing) are the flavor du jour. It’s hard not to respond in kind.

But thank you, for what it’s worth. I remember you from 2010. It was quite a time.


For what it's worth, I didn't read thefourthchime's comment in a way that would suggest that software engineers are making 100x a living wage. The way I understood it, they meant that the total cost of all engineer hours spent on rectifying the problems caused by the 'colors' and 'faker' issue would easily cover 100 living wages.


> Skilled devs still get around $200k.

Moderately skilled engineers of other IT backgrounds, and devs, can make much more than $200k; just go check levels.fyi.

To the parent comment's point:

> It's a bit wild that the sum total money spent on salaries for engineers handling potential problems stemming from this or defending against the possibility in the future could probably have covered paying the maintainer a living wage many times over.

The collective effort across numerous companies is much more than just $200k and you can bet your butt on that.


GitHub has now suspended the maintainer:

https://nitter.net/marak/status/1479200803948830724


This is scary, and I don’t know why people here aren’t losing their minds.

I think someone should make a big deal about this. What would be the first step?

On the other hand, my GitHub was once suspended (and all repos shuttered) for posting gists that looked like spam to some algorithm. It was extremely unsettling, and they need to do a better job communicating. But they may have suspended the account because they thought it was hacked, which is almost reasonable.


You are saying "scary", but I think "alarming" is more appropriate.

It's an alarm that should be buzzing through sleepy programmer skulls. It should alert them to the fact that it's no longer the small company that respected programmers, where you felt your account was yours, and your repositories were yours.

The rules have changed with that acquisition, and Microsoft exploited the good reputation of that small company and the inertia of its users. Step by step, the site became more "social", and started suffering from the usual issues. Step by step, we see the same bigco policies that treat users as worker ants. When an ant starts making up a mind of its own, queen ant sends some soldier ants to cannibalize it.

Now, I realize here on HN the tired old rants of Moxie are considered gold. But if you want to skip being treated like an ant, run your own server, maybe support upcoming federation protocols to kill this centralization and bring down the nest, or at least migrate to some place that respects its users in the meantime.


GitHub has always been about "social coding". This is a quote on their homepage on May 2008 (three months after GitHub was founded):

> What’s amazing about Github is how it really brings the social aspect into play. Chris and Tom are showing us all visually how git development is supposed to work. I know I personally had some bing moments once I started pulling in commits from external git repos.

https://web.archive.org/web/20080514210148/http://github.com...


Yeah, I was a GitHub user in 2008. Though it obviously had a social aspect, it wasn't considered a "social network" type of site. Its ongoing transformation into one is a result of the acquisition by Microsoft.


I have always considered it a social network, 'the social network for young programmers' as I called it: one that turned free software into social networking (a portfolio for your first employment(s), etc.). That's why I always refused to create an account over there, as I don't want to push those things even further. And I got gradually more appalled as I watched projects following the trend and moving there one after the other, making themselves more and more dependent on the tools conveniently provided by that silo and cutting off other ways to interact with them. Long before Microsoft entered the picture.


Like young people in general don't know what a file is any more, young developers don't know what git is: they think GitHub is git.


> GitHub has always been about "social coding".

Yeah, you're right: they weren't "one of the good guys" even before the acquisition. Microsoft were only the biggest and most well-known proponent, but never a monopolist of EEE.


GitHub is a private enterprise, never forget that… It's not scary: GitHub is a community owned by a corporation, and they have every right to kick this guy to the curb, as he is a bad actor. This is the same reason I'm cool with them kicking MAGAs spreading lies to the curb. It's one thing to remove/deprecate an NPM package; it's another thing to deliberately break tens of thousands of software installations because you want to act like a baby. He no doubt did thousands, if not hundreds of thousands, of dollars in damage and will continue to do so as developers find their stuff broken all over the internet.


Why is that scary? If you do bad things, you're going to get banned. This guy abused Github to distribute malicious code to thousands of projects.

If losing your Github means losing your projects, that's on you for being lazy/irresponsible with them. Git is already decentralized, and anything important should be cloned on something you own.


> This guy abused Github to distribute malicious code to thousands of projects.

He could've done far, far worse in terms of the technical impact of these changes. It's obvious that he was trying to make a statement, not exfiltrate data to sell on the dark web.

It's scary because, as a FOSS maintainer, your code is your responsibility to do with as you will, until it's no longer in the market's interest. You don't have the right to expect any sort of compensation for it, but if your project somehow becomes successful and you upset the natural order, all of the work you were told belongs to you, the work you should supposedly be grateful is so successful even without compensation, is suddenly no longer under your control. There was never a business relationship to sever in the first place.

The market doesn't want to come up with a way to compensate FOSS developers with high-profile projects like these, yet the general expectation is those individuals should just continue working on these libraries for companies to profit off of them.

I wouldn't have handled this situation the way this person did, but we're reaching this point where legitimate protest and speech is being met with erasure and confiscation of your work, and that should scare everybody.

> This guy abused Github to distribute malicious code to thousands of projects.

That's one way to look at it. An alternative view is that a bunch of companies took some free code and shoved it into their apps and then got mad that the free code is causing them problems. Instead of examining the inherent contradictions of the FOSS community, commercial interests would rather just erase the protestor.


This wasn't a "protester". It was deliberate sabotage, done with full knowledge that it would cause major damage. GitHub is within its rights to kick this guy off. It would even be within its rights to take over that account and fix the introduced bug (by reverting the change or otherwise). That wouldn't prevent the original developer from maintaining their own broken version, but npm and GitHub could eliminate it if they want.


> It was deliberate sabotage, done with full knowledge that it would cause major damage.

That's how protesting often works. Deliberately interfering in normal affairs is a very common protest tactic. Just look at the interstate shutdowns after the George Floyd killing, or going back to Rosa Parks and the Montgomery bus boycott, worker strikes, etc. etc. That's exactly how protest works.

Forcing application code to print statements like "LIBERTY LIBERTY LIBERTY" is very, very different than trying to infiltrate commercial systems and exfiltrate sensitive data.


Protesting in that manner isn't without consequence, even if the thing being protested is right or the alternative is worse. "It was in protest" is a reason, not an absolution.


And who decides the consequences? Who executes the sentence? What makes GitHub the judge, jury, and executioner?


The entity doing the hosting gets to decide what it wants to host. In this case that is GitHub. For the other consequences it depends on the action, where it took place, and what it resulted in to find who assigns and administers the consequences.


I'm not arguing that there should be no consequences for what this person did. I am questioning the ability for an application like Github to essentially "cancel" someone like this whenever they feel like it.


Setting aside whether someone no longer providing you free code hosting is equivalent to being "canceled": what is the alternative to the host being able to choose what is hosted/spread for others?


My Hoobs setup was literally knocked offline by that stupid-ass stunt. Care to describe to me how that isn't a violation of the CFAA? The fact that you think crashing RANDOM servers that you DON'T know the function of is an OK form of protest is INSANE.


Is it not equally crazy that a single random person has the ability to make such an impact on so many systems? That's exactly what FOSS facilitates here. Again, I wouldn't do what this person did, but situations like this are a byproduct of FOSS.


How could any production servers be knocked offline by this? Certainly no one would be so careless as to deploy completely untested third-party code?


Have you ever read an open source license? Most of them have something like this clause: “THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR…”

If you pull random unverified code from the internet don’t be surprised when something breaks.


The fact that you think you can just run whatever code not written by you on your servers is insane. The fact that you think protest should be able to be ignored is insane. The whole point of protest is disruption.


The counter-framing is this guy updated his project and thousands of projects chose to pull that update.

He didn't go around submitting PRs to update the version of this. He didn't deploy anything into production.

You have the right to use the code for free, and he has the right to do terrible things with his code. If you don't like that arrangement, get him to sign a contract that includes responsibilities on his part like not shipping intentionally broken packages. Or get your distribution from someone that does have a responsibility to you.

I probably wouldn't do the same in his position, but I still support his (moral) right to do what he did. It's his project, and he can tank it if he wants to. Just like if I had a business, I could tank it if I wanted to. It's probably not a wise thing to do, but it is something I could do.


"Abused Github to distribute malicious code" is a legal wording, not a EULA violation. A lawsuit, not an account ban.

Let's decide how serious this is. Exactly.

I am, for one, of the opinion that it is not at all serious. Not deserving of a lawsuit or an account ban. Not even newsworthy.

I mean, this could easily become the new normal for OSS. You use it – you're not insured against anything, for there is no formal contract.


You don’t need a contract to be protected against intentional harm.


In a society. Hence, "social contract".

There is no Zuckerverse contract.


No, it's a criminal act. Crashing RANDOM servers that you DON'T know the function of is not a protest.


Your argument could be used to describe any software defect, whether malicious or accidental. Open-source code is mostly not produced with any knowledge of downstream servers; that's the responsibility of the server owners. The software developers also have a responsibility to ensure they trust the author and are okay with the code they are importing into their own projects.


No, it only describes cases with malicious intent, which IS required for it to be malware, aka criminal.


That is for the courts to decide, not GitHub or you. There's plenty of people here already that disagree on whether the intent was malicious.


> There's plenty of people here already that disagree on whether the intent was malicious.

No, I haven't seen many people try to argue that the intent wasn't malicious. Most people seem to be arguing that it's fine and npm and GitHub should allow it. Or the classic "you shouldn't trust random software" line. If you ever wonder why people like the Apple App Store, just look at the devs in this comment section. Makes it kinda hard to trust you.


1. The Ad Hominem (on top of the Attribution Error) is uncalled for.

2. You originally claimed, " no its a criminal act. crashing RANDOM servers that you DONT know what they do ...". That is far removed from what you're talking about here. Which is not only misplaced, but also ignorant.

3. Your original claim also implies that the intended effect of the change was to crash "RANDOM servers". I disagree with that claim, and with your subsequent claim that that proves malicious intent.

----

I understand that you're upset — I suspect because you've suffered either this or a fate similar to many unsuspecting users of the npm libraries in TFA — and would like to see whom you view as the cause behind it (i.e., the author) suffer some form of punishment. But that doesn't mean you should support just about any harm done to them by any entity in the world.

GitHub is not a software distribution platform/marketplace. It is not an "App Store". The relationship between one GitHub user and another is not very similar to the relationship between an app store user and publisher.

If you pull someone else's code through GitHub, you're clearly making a copy. From that point on, that copy is your responsibility. That is how the FOSS world has always worked, and so has GitHub's model of public repos.

Now, if you ask me about npm, that is a whole different thing.


> If you do bad things, you're going to get banned

No, that's not what the TOS says.

In the same way that being an asshole isn't illegal, doing "bad things" is not against the TOS.

> This guy abused Github to distribute malicious code to thousands of projects.

1. AFAIK he didn't abuse Github. He used the typical method of uploading code, into his own repo.

2. He didn't distribute the code, that was npm.

3. The code being malicious is your interpretation. Can code not contain political or nonsensical messages? I think people should be free to share code with political messages, or with nonsense if they want.


Where do you draw the line? I've learned the hard way that SemVer isn't universally respected in the Node ecosystem. What happens if the maintainer of a sufficiently popular package decides to push out a patch release overhauling the public facing API? Does Github ban them too?
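For context, npm's default when adding a dependency is a caret range, which opts a project in to every future minor and patch release as soon as the author publishes it (the package and version below are purely illustrative):

```json
{
  "dependencies": {
    "colors": "^1.4.0"
  }
}
```

`^1.4.0` matches any `1.x.y` at or above `1.4.0`, so on a fresh install without a lockfile, whatever the author most recently published in that range is what you get, breaking changes or not.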


I'm reminded of Mongoid, which was at one point using SemVer, but stopped, and at least one company was caught off guard by breaking changes in (seemingly) minor releases (https://news.ycombinator.com/item?id=29845724).


I'm working with the Mongoid team to get them to use semver again. There is hope yet.


What's actually malicious about the code? It's an infinite loop that logs to stdout. Sure, it's not what the library is supposed to do, but is it malicious code?


I think the intention is to cause downstream users to spinlock.
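As a hedged sketch (not the actual commit) of why that happens: an unbounded print loop in a module's top-level code blocks the event loop of any process that imports the module. The loop is bounded here with a hypothetical `maxIterations` parameter purely so the sketch terminates.

```javascript
// Hypothetical, bounded reconstruction of the payload shape: the real
// module reportedly ran an unbounded loop in top-level code, so merely
// require()-ing it blocked the importing process indefinitely.
function libertyLoop(maxIterations) {
  let printed = 0;
  while (true) {
    console.log("LIBERTY LIBERTY LIBERTY");
    printed += 1;
    // Exit condition added only so this demonstration can finish;
    // the sabotaged package had no equivalent escape.
    if (printed >= maxIterations) break;
  }
  return printed;
}

libertyLoop(3); // prints the banner three times
```

Because the loop runs at import time, the importing application never gets control back, which is what "spinlock" refers to here.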


It might be, but this is not malicious. Plenty of projects have pushed breaking changes for one reason or another. I'd argue the only malicious actors here are those who do not pin and test their dependencies before shipping updates.


For a long time I've owned a premium GitHub account and would roll my eyes at colleagues who insisted on using GitLab, as I felt that GitHub had a clearly superior user experience.

I have to say, though, that every time in recent memory that GitHub has popped up in the news, it's for something that's made me sigh. The only thing still keeping me one of their customers is the painfulness of transferring over all of my existing repositories.

You are, of course, entirely right: GitHub shouldn't be banning users for pushing code to their own repository (with the exception of if the commit contains copyrighted or illegal material, which clearly is not the case here), nor under any circumstances should they be commandeering their user's code and continuing to distribute it without the user's consent.


> The only thing still keeping me one of their customers is the painfulness of transferring over all of my existing repositories.

What's so painful about "git clone"?

Oh, you mean you've locked yourself into GH by using some of their proprietary extensions, that isn't present in vanilla git?

But how is this even possible? Whenever someone dares claim that Microsoft never abandoned its old "EEE" strategy ("Extend" being the middle of the three Es), legions of HNers always rush to defend them and praise their conversion to Good Guys!


gitea seems pretty nice, but you have to host it yourself


Codeberg.org is a non-profit organisation that hosts Gitea for free.

Even if you do want to set up your own instance, it's very easy to do so. It's one of the least resource-intensive server applications I've seen - roughly 200-300 MB of RAM used on most days, minimal background CPU usage, statically linked Go binary or a Docker container according to preference. A Raspberry Pi with a few hundred megs of spare RAM or a sub $5 VPS will be sufficient.


I assume GitHub initially saw this as someone gaining unauthorized access to his account. How often does a maintainer add something like this? Without knowing it was actually him I can totally see why they'd think it wasn't a proper update.


I don't assume that at all.

I assume that GitHub now lives in Microsoft-liability-fear mode. Expect them to police commits in popular projects.


Have you ever downloaded software from the Internet that claimed to do one thing but then did something completely different? How did that make you feel? If you had a hosting service, would you really let people distribute that kind of software on it, if you could help it?


npm has a history of unprofessionalism and arbitrary decision-making. See the whole left-pad fiasco

It wouldn't surprise me if banning the guy's account was the decision of someone at the npm team.


People who are upset that GitHub suspended him: would you still be upset if the contents of the new package were "require('child_process').exec('rm -rf /*');"? If not, then how malicious does code have to be before a suspension is okay in your opinion?


Just because there might be a grey area in some cases, doesn't mean that you can't also distinguish some cases. Your example is clearly much more malicious than what the package author actually did.

This is obvious from the fact that GitHub won't suspend your account for releasing a new version of a package that has breaking changes... Clearly there is a scale here. The only disagreement is about where the line is.

IMO, if the package author had simply deleted the code - ie. published a new version with no functionality, then no action should be taken against them by GitHub or NPM. For this example, I think suspending the account is OTT, but I think NPM would be justified in reverting the package, since an infinite loop is somewhat malicious: and by that I mean that nobody would reasonably expect a package to hang just from importing it. If the package actually deleted data, then both the suspension and NPM revert would be justified.


I'm not entirely versed in NPM politics, but:

1) GitHub and npm are supposed to be separate things. There might be stuff in that GH account that affects other ecosystems. By all means block the npm account, but that should be it.

2) in the end, one is responsible for the packages one pulls. We keep relearning that lesson over and over, because global package repositories made us lazy.


> GitHub and npm are supposed to be separate things.

Is that going to be true forever? I would assume not. There is already much deeper integration between github and npm than there was a few years ago, and Github seems to be going fairly deeply into CI and distribution.


If that's the case, people will have to start practicing account hygiene...


I think a suspension is okay if the code breaks the GitHub frontend or backend itself, or exposes Microsoft to legal liability (because the code embeds copyright material, etc.)

In no other circumstances do I want GitHub patrolling what users commit to their own repositories.

Of course, if you pull my code and run it sight-unseen, and are damaged by it, you have a right to be upset at me (and possibly sue me) but that's not something I want GitHub to (try to) police.


I feel like that's a good rationale for NPM to suspend him and/or remove the offending version, but it seems kind of weird for GitHub to remove his access to the site completely since he wasn't abusing or causing any damage to GitHub. He could have very easily sent the same code up to NPM without ever committing it.


I take it you've never read a virus magazine like, say, 40Hex or 29A?

What is "malicious code" anyway? Maybe Microsoft Windows is malicious. It does contain code to format your disk.


Intent matters.

Windows contains the rm -rf equivalent, but you, as a user, have to knowingly trigger it and confirm. It's not like Windows tricks you into formatting your drive.

Directing the argument into windows is just whataboutism.


Intent doesn't matter. The only person who cares about intent is the agent who acts.

The repository contains the console.log code, but you, as a user, would have to knowingly download it and run it. It's not like pushing code into a repository tricks you into running the code.

Trying to "win" by labeling something as "whataboutism" is just idiocy.


Knowingly?! Clearly no developer of an app that broke because of these packages had any idea their app was going to break, and clearly that was exactly the intention. They _were_ tricked.

Can you not see a difference between this and between releasing a new package with a README saying "this module will print 'liberty liberty liberty' to your console in an infinite loop!"?


So you're saying he also had to document his code? Maybe make a pull request.

Every developer is responsible for what goes into his project, including dependencies. When a developer wants to update a dependency, he is responsible for the appropriateness of the update. In order to get an idea, he should audit the changes. For personal code, such an audit may consist of a quick skim to determine that nothing breaks. For production code, it may also include a security audit.

When a dependency that used to do X now does Y and therefore breaks your stuff, you are the one responsible for dealing with it. The author disclaimed any warranty and any fitness of purpose for his project, and whether his intentions make sense or not is of no consequence.

My point was that there is no such thing as "malicious code". Code is code, and it's your responsibility to determine whether it fits the context. That someone put it out there with an MIT license means the responsibility is yours.

P.S. Hey, you seem like a cool guy; why did you sell the bus? OK, I see you live in Sweden now (Scandinavia is my dream), so I understand.


Not to mention the aggressive privacy-violating telemetry and advertising.


> People who are upset that GitHub suspended him: would you still be upset if the contents of the new package were "require('child_process').exec('rm -rf /*');"? If not, then how malicious does code have to be before a suspension is okay in your opinion?

Microsoft owns both Github and NPM. There is an obvious conflict of interest here.


... it’s not obvious to me?


> ... it’s not obvious to me?

If the source was maintained on Bitbucket, why the hell would bitbucket nuke the developer's account access? That's not their problem what happens on NPM.

GitHub and npm are de facto the exact same company, on the other hand. GitHub's move here is retaliation for the npm "mispublishing".


> why the hell would bitbucket nuke the developer's account access?

a counter-factual that isn't known or proven.

May be bitbucket would also nuke the repo, if NPM asked them and show proof that it contains malicious code. It would prevent spread, and would prevent damage.


>> why the hell would bitbucket nuke the developer's account access?

> a counter-factual that isn't known or proven.

No, it's a question. Questions can't be "known or proven"; only answers can.


Marak didn't just mess with NPM. He also did a force push to his GitHub repo, replacing all the code there.


So? It is a project under his username and he could've done anything he pleased with it. Not defending him, but banning him on GitHub instead of NPM is just bizarre.


He got banned on GitHub for what he did to his GitHub repos. I don't see how that's bizarre.


But why though? If I do the same to a repo I own, will I be banned? What if I introduce an infinite loop in a library I maintain? Just because the library was popular and had millions of downloads doesn't mean it must be treated differently.

People cannot argue that it is open source with a license which expressly doesn't have any warranties, and then cry foul if the author deletes it for whatever reason. If it was that important, you should've had better processes in place.


Of course you’ll get banned for intentionally sneaking that code into your library. At least that’s what I’d do if I was in charge and I assume most people would do the same.

> license which expressly doesn't have any warranties,

The license is about legal liability and has nothing to do with social norms. Though now that you mention it, I’m not sure if the “no warranty” clause will hold up in court when the bug is malicious and the author admits it was intentional.


> He got banned on GitHub for what he did to his GitHub repos. I don't see how that's bizarre.

You don't?!? That's bizarre... Here, re-read your first sentence: "He got banned on GitHub for what he did to HIS GitHub repos." I added some helpful emphasis; did you catch it?

If not, riddle me this: How many people did he force to download stuff from his GitHub repositories?


On your own repos? Never for code; nobody forces you to run the code.


What are the set of commits that GitHub should allow people to make to repos that they solely own?


The set of commits that aren't directly trying to trick someone into installing malicious software.

It's fine to host code that contains any instructions, as long as the intent of that code is not to trick someone into running malicious software.


That sounds reasonable.

In this case, if the author had updated the documentation sufficiently along with the change to make it clear what the new behavior was, that would have been fine?

I agree that malware is unhostable. And in this case I think the author crossed the line. But I am concerned about getting that line defined in a way that restricts project owners from making whatever changes they want in their own self-interest without tricking people, per se.


So can I put a package containing 'rm -rf' on github at all? Does all code there need to be safe? What if my code has known bugs?

If my code has a licence which provides no warranty, then you can use it, but any damage is your fault, that's the point of the MIT license.


If that is true it's an outrageous overreaction by GH.

Are they now gatekeeping the kinds of code changes you can make to your own repo?


The maintainer was already temp-banned from GH around 2013 for "creating a script that forced people to watch a library that [he] created", whatever that means: https://youtu.be/varf6oWaFtU?t=202


'Watching' is equivalent to bookmarking a project, to keep informed on any new developments. I'd say it's roughly similar to "a script that forced people to like a tweet that [he] created"


It isn't your own if you host it with them, and they aren't obliged to help you commit sabotage.


It's not yours? Really. They own it now because they host it for you?

So some ToS could override the software license for your project? In that case I don't see why anyone would use github, ever.


Be specific about what you mean by "it". The code you wrote is yours, but their copy of the repository is not yours. And GitHub's TOS does require that you give them certain rights to use the code you put in their repositories, regardless of whether your chosen license would have given them those rights anyway.


But what about the "github identity"? When github is used for authentication on other sites, it may come to a point where you have a reasonable right to continue to use your github identity, even if other github functionality is turned off.

I for one think that it makes sense - if I have an identity on github, it can only be turned off gradually, not immediately.

I would say, maybe suspend github services for the account and put a timer - 30 days - on suspending the github identity too.


Why do you think you can use Microsoft's infrastructure to damage Microsoft's customers without consequences from Microsoft?


I assume that GitHub has lifted the ban, given that multiple comments shown in the post were posted after that tweet was made.


Github and NPM belong to the same company, Microsoft.


> #AaronSwartz

right...


Yeah, this was really annoying to see. Don't use him as a pillar to your malicious action.


You're new so I won't hold it against you. Aaron Swartz would have 100% approved of this.


I'm not new, and no he wouldn't have.


Aaron was known for circumventing systems.


Was he also known for harming innocent bystanders just to hit big companies?


Actually, yeah, because when he set up his PACER scraping script he used credentials belonging to a library that didn't belong to him. A similar thing happened when he hid the computer that did the JSTOR scraping in a closet on campus.


Boo hoo, he "stole" what, $2 worth of computing resources/electricity by doing this?

No human has been harmed here. No property has been substantially harmed either. Anything he "stole" should've been made public for free in the first place.


after an initial kneejerk reaction, i've realized that this is a really complicated issue to think about.

the future is exhausting sometimes.


This is not the way.

Github doesn't have the right to tell someone what to do with their own code. The only right thing to do in this situation is to fork the repositories and fix the situation on the npm side. Github doesn't get to ban this guy because he took a dump in his own backyard.

EDIT: I suppose the literal DoS attack in the code probably puts him squarely in the "malicious behaviour" category which then gives github the right to do this.


>Github doesn't get to ban this guy because he took a dump in his own backyard.

Yes they most certainly do.

Also, his landlord gets to ban this guy because he was building bombs in his apartment.

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...

https://www.qgazette.com/articles/more-charges-possible-for-...

https://nypost.com/2020/09/16/resident-of-nyc-home-with-susp...

https://www.reuters.com/article/us-usa-new-york-bomb/new-yor...


From Github’s TOS

> GitHub hosts a wide variety of collaborative projects from all over the world, and that collaboration only works when our users are able to work together in good faith. While using the service, you must follow the terms of this section, which include some restrictions on content you can post, conduct on the service, and other limitations. In short, be excellent to each other.


Are you sure their TOS doesn't give them the right to do this regardless of "malicious behavior"?


Is it really that malicious?

Obviously people wouldn't use that source code if they didn't want it.


On what basis?!


Bad faith?


"this software comes with no warranties" ?


Adding this kind of disclaimer notice doesn't mean you can do whatever you want.

If you perform an action in obviously bad faith, your account will be suspended – it's very simple.

Github's terms of service must have a detailed description of this somewhere.


> Github's terms of service must have somewhere detailed description about it.

Please, point me to the part where bugs, intentional or not, are disallowed.

Taking over someone's account is not justified; definitely not for this, and I'd say it never is. Block the account, yes; take over, no.


https://docs.github.com/en/github/site-policy/github-terms-o...

>GitHub has the right to suspend or terminate your access to all or any part of the Website at any time, with or without cause, with or without notice, effective immediately. GitHub reserves the right to refuse service to anyone for any reason at any time.

github owns your account -- if the github company thinks they should terminate your account, they will.


terminate, yes. Ban, yes. Take over the account? No, not even by their own terms.


It appears that all they (GitHub) did was roll back the npm revision so that it pointed at a version that didn't break semver: https://www.npmjs.com/package/colors?activeTab=versions

note that they didn't do that to faker, since it had a major revision: https://www.npmjs.com/package/faker?activeTab=versions

I'm not very familiar with npm/js, but isn't this policy since left-pad?


Microsoft owns Github... and NPM.


You are conflating bugs with malicious code. If somebody decides to deliberately create harm, the intentions are very clear; there is no point in playing stupid to turn it around into a bug. It doesn't work in courts, and it won't work here either. There should be no place for this kind of immature behaviour in the GitHub/npm communities. Account suspension and reversal of the harmful actions feel like adequate steps taken to protect everybody.


The fact is, this is literally a criminal act. He pushed code KNOWING it would crash every server that pulled it. It's NOT a bug, it's malware.


Lol. No. It is not "criminal act".


You think pushing malware in a fake app update isn't criminal?


He pushed to his own repo. You can push any code to your own repo.


His actions hurt confidence in all of us who are open source authors.

I would say that GitHub is actually taking the stance a reasonable "OSS authors' union" would take, if one existed: penalize one bad actor to restore the standing of all the rest.


I don't understand why. It's his code to break if he wants. But I guess when you use a social media service to host your code these are expected and normal results.


> It's his code to break if he wants

> I don't understand why

If he can break his code because he is the owner, then shouldn't the same reasoning apply to GitHub suspending the account? It is their website and their rules. Keep in mind GitHub owns npm, and the author published a malicious package to npm that has 20 million downloads, so I'm not surprised.


For some reason, people are adopting the ideologically inconsistent position of full property rights for the dev, but no property rights for GitHub.


IMO they are kind of different things. The dispute about the code itself seems to be more of a licensing thing whereas the GitHub itself seems to be a property thing.


He may have committed a crime. Interfering with computers you don't own with malicious intent is a crime; legally, Microsoft may have had no choice but to take it down.


> It's his code to break if he wants.

This is a library, not standalone software. Breaking it means breaking the code of every software which uses that library.


And per the MIT license, he offers it “WITHOUT WARRANTY OF ANY KIND (…) INCLUDING BUT NOT LIMITED TO (…) FITNESS FOR A PARTICULAR PURPOSE”


That doesn't give him the right to commit sabotage. If as the developer of a FOSS program I deliberately introduce something that will harm users, a "no warranty" clause won't protect me from the consequences. The guy knew full well how npm worked, and knew full well that he was deliberately breaking lots of sites. "No warranty" just means he isn't liable for accidents.


No warranty means he isn't liable for any behavior of the software at all. You don't have to like it but it is true.


Of all the awful legal takes I've seen on this site, yours is an early contender for best of 2022.

> No warranty means he isn't liable for any behavior of the software at all.

Somebody should tell all those computer virus authors, all they had to do was not include a warranty, and they're untouchable!


You can pretend there's no difference between this and a computer virus, but there clearly is.

The users of this software pull it, explicitly, voluntarily. The author says it doesn't serve any particular purpose, and in using it you understand that. The software itself did nothing malicious; it just stopped working. It's not the same thing as slapping a license on a computer virus and forcibly foisting it onto an unwitting victim. It's not naive legalese loophole-workaround thinking. When you choose to use the software, you agree to abide by the license, which includes no promise of utility whatsoever.


Those seem like different things since a computer virus "user" never consents to or accepts the license, whereas someone importing the library into their package.json has.


Eh, just write in the EULA exactly what your virus will do and that they have no warranty, bundle it as an add-on a la toolbar bundling in the 00s, and bam, you've got the user's consent to do anything!


He is responsible for his own behavior, and harming with intent is not a liability that can be waived in the US. This is literally first week of Contracts course material in law school.


But there was no harm or no intent to harm, the software just stopped working. Just because you rely on someone's work doesn't mean you can expect it to continue forever.


No, it does not mean you can legally change your software into malware.


I think it's disingenuous to label it as "malware".


Nah, it's not. It crashes Hoobs and the Ring plugin for Homebridge. And probably a lot of other software.


Modern JS crashes my older browsers too. It doesn't mean it's the JS author's fault for using code my system can't handle.


The intent is extremely important to the word malware. The intent WAS malicious.


It was malicious about as much as flash no longer working, or nest thermostats.


> Breaking it means breaking the code of every software which uses that library.

No, those other programs importing it is what breaks them. They do that themselves. Or does he have push access to all their repositories?


Is it his code to inject a backdoor in?

As another commenter said[0], this is malicious code and THAT is against the ToS of npm[1].

[0] https://news.ycombinator.com/item?id=29865977 [1] https://docs.npmjs.com/policies/open-source-terms#:~:text=Co...


Freedom of open source doesn't mean freedom from consequences on their platform.


Was it a paid or "free" account?


is this good or bad to be honest?


Thank god.


This guy again? Last time I saw something posted, they were being looked into for bomb making after some kind of fire.

https://nypost.com/2020/09/16/resident-of-nyc-home-with-susp...

The irony is that the guy was once banned on here for spamming their startup Nodejitsu

https://en.m.wikipedia.org/wiki/Nodejitsu


Some people find this too rough, but the reality is: do not license something as MIT or any free license if you are not happy with commercial users benefiting from your work.

Simple as that. None of them are under any obligation to pay, maintain, promote, or not clone your project.


Also let's keep in mind that these projects would never have achieved their level of popularity if they were not licensed liberally. The value he's provided was based on free; you can't then compare it to the alternative after the fact.


This guy wants to have his cake and eat it too.

I saw a project recently where the author released it to the public domain, but in the top he said something like "I know it's public domain, but please don't remove my name."

Sometimes people choose a liberal license like CC0 or MIT because they don't want to bother researching the boring details of licensing and/or don't care. But clearly that guy who wants his name on it, and the dev in the OP do care about the licensing details.

Releasing colors/faker under a liberal license was a mistake, and pushing out this malicious update was another (much worse) mistake.

What he should have done was updated his libraries to print out a warning like "This software has reached end-of-life and will no longer be maintained. It is being superseded by Colors PRO. Visit ... for more information.", and then started selling licenses/support contracts.

But now, not only is he not going to get paid for colors/faker, he probably won't ever get hired by any of the companies affected by this, he might get sued, and he might even go to prison if this gets misconstrued as hacking.

EDIT: Ok, after some more research, it looks like this guy is going through some serious shit: https://news.ycombinator.com/item?id=29868071

Everything I wrote up there was based on the assumption that he wasn't trying to build a bomb in his apartment...my bad.


This goes both ways. Don't blindly use MIT licensed code if you aren't okay with the risk of something breaking or something malicious being inserted.


Malicious code is its own beast that should always be guarded against, but MIT doesn't give someone the right to be malicious.


"THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE."

Unfit software that causes damage is covered, intent or no.


> Unfit software that causes damage is covered, intent or no.

Well, no, an assertion of nonliability isn’t a magical incantation; local law in whatever jurisdiction is applicable may limit the effect of such an assertion; in places with meaningful consumer protection laws there are limits on the ability to disclaim warranties, and even without decent consumer protection law it's often impossible to disclaim liability for malicious torts.


> Well, no, an assertion of nonliability isn’t a magical incantation; local law in whatever jurisdiction is applicable may limit the effect of such an assertion

Did he install this "malware" on people's computers?

> with meaningful consumer protection laws there are limits on the ability to disclaim warranties

"Consumer"... as in, "someone who bought something"? Oh well, I guess he'll have to pay them all back everything they paid for it. How much, exactly, was that again?


I don't see how that matters. Imagine if you hid a bomb in a car, sold it to me as a used car with no warranty, and then remotely detonated the bomb. You'd unambiguously be on the wrong side of the law, and the lack of warranty wouldn't make a bit of difference.


No, a license clause can't shield you from an intentionally malicious act. He's probably getting charged under the CFAA.


This is why you pin all dependencies and upgrade (and test) when it's convenient for _you_, not when the author pushes a new version.
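In package.json terms, pinning just means exact versions with no `^`/`~` range prefixes. The versions below are the widely recommended known-good pins from this incident (verify against your own dependency tree before relying on them):

```json
{
  "dependencies": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```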


Pin all you want, if the repo/vendor/maintainer pulls the release then you're not getting access to your dependencies at all.

If anything, this is the reason you use pull-through proxies. Your proxy will hold the version you depend on, regardless of upstream drama. Keep your proxy backed up and you'll be able to use those dependencies until the end of time, or you finally decide to migrate to an alternative.
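A minimal sketch of the client side: point npm at the proxy via `.npmrc` (the URL is a placeholder for whatever Verdaccio/Artifactory/Nexus instance you run):

```ini
; .npmrc -- resolve all packages through an internal pull-through cache
registry=https://npm-proxy.internal.example.com/
```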


> if the repo/vendor/maintainer pulls the release

If your package system allows this switch to another one, like, right now.

NPM, Cargo, etc. don't allow this (they "unlist" versions, but they don't "remove" them, i.e. you can't search for them, but they are still there).


there are other benefits with proxies but fair point


> NPM, Cargo, etc. don't allow this

I'd say the likelihood is about 50% you have a NPM package in your dependencies right now that pulls some binary or whatever from a random S3 bucket during installation.


> Pin all you want, if the repo/vendor/maintainer pulls the release then you're not getting access to your dependencies at all.

And that's among the reasons people have started to commit their node_modules folders.

It has the neat side-effect of making people take a closer look at all the crap they're pulling in, too.


Offline cache with Yarn 2+ protects against this and other network failures when building CI, for example.
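Roughly, the Yarn 2+ ("Berry") setup keeps the tarballs in the repo itself; the keys below are from Berry's `.yarnrc.yml` (double-check against the docs for your Yarn version):

```yaml
# .yarnrc.yml -- store the package cache in-repo so installs work
# offline and survive upstream deletions; commit .yarn/cache to git
enableGlobalCache: false
cacheFolder: ./.yarn/cache
```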


> pulls the release then you're not getting access to your dependencies at all.

NPM no longer allows this.


Which is why you pull through a private mirror that doesn’t respect delete?


The little `^` in version numbers in NPM's `package.json` file is such a bizarre choice. The fact that npm saves all new dependencies with that prefix by default means that builds on different machines at different times could result in _completely_ different artifacts.
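For reference, `^1.2.3` means "anything compatible": the leftmost non-zero version component is locked, everything to its right may float upward. A tiny sketch of the rule (simplified; real resolution is done by the `semver` package, and this ignores prerelease tags):

```javascript
// Rough sketch of npm's caret-range semantics. "^1.2.3" allows any
// version >= 1.2.3 that keeps the same leftmost non-zero component.
function satisfiesCaret(version, range) {
  const base = range.slice(1).split('.').map(Number); // drop the '^'
  const v = version.split('.').map(Number);
  // Everything up to and including the leftmost non-zero component
  // must match exactly; components after it may only go up.
  let lock = base.findIndex((n) => n !== 0);
  if (lock === -1) lock = base.length - 1; // ^0.0.0 locks everything
  for (let i = 0; i <= lock; i++) {
    if (v[i] !== base[i]) return false;
  }
  for (let i = lock + 1; i < base.length; i++) {
    if (v[i] > base[i]) return true;
    if (v[i] < base[i]) return false;
  }
  return true;
}

console.log(satisfiesCaret('1.9.0', '^1.2.3')); // true  (minor bump ok)
console.log(satisfiesCaret('2.0.0', '^1.2.3')); // false (major locked)
console.log(satisfiesCaret('0.2.9', '^0.2.3')); // true  (patch bump ok)
console.log(satisfiesCaret('0.3.0', '^0.2.3')); // false (0.x locks minor)
```

Note the 0.x special case: pre-1.0, `^` only allows patch bumps, which is exactly why so many "stable-looking" 0.x deps still float in unexpected ways.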


You should always commit a lockfile (either npm's or yarn.lock) alongside package.json


This helps with CI and deploys, but on developer machines running `npm i` will install different things at different times. The amount of churn a `package-lock.json` file undergoes when all of the dependencies have a `^` is crazy.


When you have a package-lock.json file npm i will not upgrade packages. You have to do that manually.

The biggest churn in package-lock.json files is from using different npm versions. It’s worth keeping them aligned within a dev team.


I use an .npmrc in all of my repos that turns this off. It doesn't help nested dependencies but at least it reduces some of the headache.


Can you share it?


Sure, save this to `.npmrc` right next to your `package.json`. It doesn't retroactively change versions, so any existing ~ or ^ ranges need to have those characters removed. But further `npm i` invocations will save the versions without range characters.

    save-exact = true
    package-lock = false
    update-notifier = false


The only way to prevent this is to pin the actual commit. Because the meaning of the semantic version numbers is up to the package maintainers in most packaging systems. And even then you need a way to source exactly that version without relying on the original author's cooperation.

Pinning all dependencies to this extent is extremely inconvenient. More inconvenient, arguably, than dealing with this shit once in a while...
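npm does let you pin to an actual commit by depending on a git URL directly, which fixes both the version and the source (the repo URL and SHA below are placeholders):

```json
{
  "dependencies": {
    "some-lib": "git+https://github.com/example/some-lib.git#1a2b3c4d5e6f7a8b9c0d1e2f3a4b5c6d7e8f9a0b"
  }
}
```

You still depend on the original repo staying up, so for real insulation this needs to be combined with a fork or a mirror you control.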


That's what .lock files do though.


Not sure if that is even enough; at least in NPM, the .lock files work on semantic versions, not commits. I'm not sure if NPM requires you to change the semantic version with each commit.

And even if all of that works, you still run head first into the issue once you inevitably upgrade the dependencies.


Yarn at least includes a hash of the tarball in the lockfile, so even if npm’s immutability fails somehow you’ll at least know.
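For reference, a v1 `yarn.lock` entry records the resolved tarball URL plus an integrity hash of its contents (the digest below is a placeholder):

```
colors@^1.4.0:
  version "1.4.0"
  resolved "https://registry.yarnpkg.com/colors/-/colors-1.4.0.tgz"
  integrity sha512-<base64 digest of the tarball>
```

If the registry ever served different bytes for the same version, the install would fail the integrity check instead of silently pulling the new code.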


Then the package manager should be changed to make it more convenient.


There is no way around the issue because "dependency hell" is indeed a thing.

Dependencies change. Dependencies of dependencies change. You can't go without updating your dependencies for too long or there will be other problems. Something's gotta give.


This is why I’ve warmed up to Nix, pinnacle of dependency management IMO


Agreed - I've been bitten twice by unpinned dependencies so for a while now I've pinned everything to a specific version.


> pin all dependencies

You do that. Your coworkers don't. And they'll complain to your boss if you try to make them.


That's when you make a case for why you're pinning them, allowing your coworkers to present counterarguments.

If your boss doesn't take your side, at least you can say "I told you" when things go wrong.


> Your coworkers don't.

And their PRs aren't merged until they do, because we'd have talked about it beforehand and reached consensus on the idea that pinning dependencies is a practice our team will start doing, or some other specific practice that solves this problem.


> And they'll complain to your boss if you try to make them.

Really? Have I led a sheltered life? I cannot rightly apprehend the state of mind that would see pinning deps as bad. It only helps you!


But it takes longer than not doing it.


I am still frustrated that npm install defaults to `^1.0.0` installations instead of exact `1.0.0` versions.


GitHub suspended access to their account for a commit to their own software, because it caused a problem for all these companies. For what it's worth, it shouldn't have caused one: ideally, dependencies should be pinned and all changes audited. These libraries are always licensed in a way that excludes warranty of any kind.

But I honestly don't care if companies "exploit" open-source software by making money using them and not donating to the developer. That may be unhealthy for the ecosystem, but neither side is entitled to anything. I would donate, but not expect a donation, and poisoning the well the way these developers did is not going to help any of us.


GitHub ToS allow terminating accounts for malicious behaviour, which I'd argue that purposefully breaking downstream code is.


That seems like a bit of a shaky ground to stand on for GH.

If someone publishes code for themselves, and at no time asks anyone to take it as a dependency, then at a later date they change that code in a way that breaks other people's use of it, do GH then take over the account?


The keyword is malicious, which does not lend itself nicely to being deconstructed by reductionism.

If the intent of the push was to damage downstream users of the software then it is malicious towards them.


Sure, trying to find exactly where the malicious line gets crossed is pretty hard and subjective, and maybe that will bite GH one day. But this specific case is not anywhere near that line: the sole intent of those commits was to break others, and he admitted so himself.

This is like arguing about whether the James Webb telescope really is in space because we don't have a precise consensus about what altitude counts as the frontier of space.


I'm sure this case is clear; my point was about the wider principle that by going down this line GH set themselves up as the arbiter of "malice".

To take a trickier example, say a GH user has a lib, then decides to re-architect it, breaks the API and for their own purposes pushes it to an existing version, breaking all other use of it. Now that's a nasty thing to do, but is it malice?

Another, real-world example: I know of a user who publishes "honey PoCs" for security issues, where the repo appears to be exploit code but actually isn't. He's been accused of malice in doing this, but his intent is research for a talk on how people use code blindly without testing it.

Is that malice, should GH take his account down?

By stepping into this area GH are going to have to find answers to this and also the problem of who maintains the repos of accounts they nuke?


> By stepping into this area GH are going to have to find answers to this

I think they already have, those 2 examples you mentioned already happened and were dealt with.

I think intent is important to take into consideration, since after all that is the definition of malicious: intent to cause harm.

Your first example clearly has no intent to cause harm. That case has probably happened thousands of times already, since not everyone is willing/able to follow semver cleanly and strictly. I've never heard of GH taking any measure against that, and I would definitely not expect them to as a user/maintainer.

For the second case, I think GH policy is that you can host that kind of PoC, but the repo has to be clearly documented as such (e.g. you can't just add some vuln into some unrelated code "for research"), and the vulnerability cannot be an active one: "We understand that the publication and distribution of proof of concept exploit code has educational and research value to the security community, and our goal is to balance that benefit with keeping the broader ecosystem safe. In accordance with our Acceptable Use Policies, GitHub disabled the gist following reports that it contains proof of concept code for a recently disclosed vulnerability that is being actively exploited." (GitHub) [1]

Back to Marak's case, my opinion is that GH did the right thing: If he just had his code in a repo, with no semver, no other contributors/maintainers, and such, and decides to nuke it, then I hope GH would not have done anything.

But when you are using all the tools and trust of open source (other people contributing to your repo, other people being active maintainers/admins and spending time out of their days to fix bugs on it, leveraging NPM to make it easier to distribute your package widely, and so on), you give up the privilege of being able to act unilaterally like an asshole without consequences.

[1]: https://www.bleepingcomputer.com/news/security/githubs-new-p...


The thing is, intent isn't always clear, and oftentimes GitHub is unlikely to have all the context needed. It's easy to determine with simple examples; the real world is often messier.

For example in that first case, say all they had was a wave of people saying "x broke my application" it would look a lot like the case in the article, and they'd have to dig in to find out it was just a bad API change without semver being followed.

It also requires GitHub to have a staffed department to deal with this, now that they've established themselves as the arbiters.

For me, there's a split between a repository (NPM) and a hosting company (Github). For this case I'd have forked the repo, rolled back the malicious change in the fork, and hooked the fork up to NPM, and leave the original GH account alone. That solves the problem of the breakage, without getting in to banning whole GH accounts.


If they wanted to publish it for themselves, wouldn't they make a private repo?

I'm (honestly) trying to understand why GitHub can't just go ahead and remove any account as they see fit.

Edit: Am I being downvoted for asking a question?


There are many valid reasons to make code for yourself public, even before discussing the type of license.


I've happened to create infinite loops in some of my programs, but not intentionally like the author did. Furthermore, he intentionally pushed the infinite loop, with the result of DoSing everybody knowingly or unknowingly using his software. This is malicious behavior IMHO.


With great power comes great responsibility. Having a much-depended on package is a great responsibility.


If they're malicious, yes?


That's not what happened here. Author intentionally pushed malware.


Why would the developer of any software that comes explicitly without warranty be hold responsible for downstream breakages? It's not as if one could force people to upgrade to newer versions and they can always keep depending on the old releases.


In this case, the developer's behavior was malicious: they intentionally caused damage. This is very different than some good faith change that breaks stuff downstream. Sure, the license says "no warranty". But github can decide that they won't tolerate vandals on their platform. It would be within their right to revert the bad change from the git database they hold, go back to the last good change and lock the developer out.


> vandals on their platform.

Since it is his code: can you vandalize your own property?

> go back to the last good change and lock the developer out.

That's one reason for not using GitLab as source management tool. It gives them way too much power.


> GitLab

Surely, you mean GitHub.


Absolutely! Thanks for pointing that out.


What damage did they cause?


Broke CLI tools (firebase for me); my computer crashed because of some weird infinite-loop out-of-memory error that I wasn't able to recover from. Anyway, hopefully a good wake-up call to stop relying on millions of npm deps for things to work.


No, you broke your CLI tools. By unthinkingly pulling in stuff from someone else's repository without vetting it. Or using other tools which did, which amounts to the same thing.

Good that you woke up to that.


Do you really expect anyone to believe that you're asking that in good faith?


Hey, you're the one stating they caused damage. They printed some zalgo strings. Hard to see how that damages anything other than making a few CI jobs fail.


It isn't necessarily the case that the package was used only in CI jobs and not on production servers.


Then that's on them for not pinning dependencies. You know, ops 101.

Play silly games, win silly prizes.


Thanks for stating the obvious. It isn't silly at all to publish malware and vaporize your reputation, right? Maybe it was good after all; people will become more careful.


Maybe he wanted libraries that printed blather in an infinite loop. Then it can't be "malware" to put that in his own repositories.

If other people don't want that, then they shouldn't pull from his repositories. If they do that anyway, then that's their own fault. Nobody forced them to.


That's a foolish hypothetical to construct; however, I don't have to refute it.

That's because the author themselves said in this case, that the reason to submit the malware was to give a "fuck you" to the big corps.


> the author themselves said in this case, that the reason to submit the malware was to give a "fuck you" to the big corps.

Yeah, so obviously he did want libraries that give a "fuck you" to the big corps (by printing blather in an infinite loop). Then it still can't be "malware" to put that in his own repositories.

And my point still stands: If other people -- you, big corps, whoever -- don't want that, then they shouldn't pull from his repositories. If they do that anyway, then that's still just as much their own fault. Because, still, nobody forced them to.


It broke hoobs and crashes its security camera plugins. Most people would consider that pretty heavy damage.


Was the author aware that "hoobs and its security camera plugins" were going to break from this push? Or any prod servers, for that matter?

I see no code in there that checks if it is running in production. In fact, it is a reasonable expectation that people don't throw code into production blindly, but rather test any changes out first.


malware is malware. You don't have a right to change ur software to malware. "wElL yOu ShOuLd HaVe Tested" no you shouldn't push software in bad faith designed to crash apps that use it.


> You don't have a right to change ur software to malware.

Yes, I do. I may not have a right to push malware onto unwilling victims, but I absolutely have a right to change _my_ software however I want.

> "wElL yOu ShOuLd HaVe Tested"

Please, no need to be childish here. I have not taken that tone, nor will I respond to it in kind here.

> no you shouldn't push software ... designed to crash apps that use it.

Show me where a `git push` == "push[ing] software ... to ... apps that use it". When the `git push` is to my own repository, mind you, not someone else's app.

> ... in bad faith ...

Finally, I agree with you on something.

Of course this was in bad faith! That was clearly the point. When I write software and put it out there, and somebody comes and uses it, and I break my software to spite them, I am obviously acting in bad faith towards my users.

But that does not make it malice, or my software malware. I did not reach down into other people's computers/apps and change what they run.


Good to know not to use that software then.


if you want to use non-homekit devices with homekit you kind of have to.


"Kind of" is doing some pretty heavy lifting there. No, you don't "have to"; you're perfectly free to write your own software instead. Or even just use a prior version of his code that does what you want it to, instead of blindly updating to one that doesn't. He didn't force you (or the writers of whatever software you're using) to update, now did he?


It crashes the servers that use the package?


So what?

He didn't force anyone to update to the new version, right? So how is it his problem? Some other entity had to go and update the version they depend on.

And if you now say "well, that happens automatically", I say: serves them right. They should have tested the stuff.

Not his problem.


Are you arguing that most malware should be legal because the user downloaded it and ran it?


Malware is legal. You can study and research and even build malware without doing anything illegal.

What one usually gets in trouble for is causing destruction.

Regarding this case: wherever they ran into problems, they probably should thank him for exposing that serious flaw in their release process.


The warranty issue is a red herring. A warranty is an affirmative guarantee of quality: you are (in essential concept if not in precise detail) agreeing to be held to a "strict liability" standard. If I buy real estate and I am granted a warranty deed, and the title to the property comes into question, the seller can be brought to account to make me whole or indemnify me, regardless of who is at fault for the title defect.

Without a warranty, you're not held to strict liability, but you can probably be held liable under the default legal regime. If I buy real estate and get a quit-claim deed, there is no promise that the seller has unencumbered rights to the property. However, if I can show the seller intentionally defrauded me, they can still be held civilly and criminally liable for the fraud.


OK, so sue him to pay you back every dime you paid for his code.


> be hold responsible for downstream breakages

He's not; he's being held responsible for intentionally pushing malware.


I don't think he pushed malware, did he? He just broke his own project and published the broken version. That's not pushing malware.

I don't get why people don't just pin versions, honestly.

I'm not saying he did a good thing. But neither did he push malware nor has he any obligation to publish unbroken packages. If you're using FOSS projects without a service contract, don't whine if something breaks.


"Not pushing malware", _wink wink_.

Let's say I set up a lemonade stand in my neighborhood every weekend, where I pour a bunch of cups for people to take, put up a sign that says it's free, and I set out a tip jar.

After a few weeks, I get upset that people have been taking the lemonade without leaving tips, so the next time I set up the stand I add a toxin that I know will cause immediate damage to anyone who ingests it. To protect myself, I have of course been posting a sign every weekend that says the lemonade is provided as-is.

So – did I do something wrong, or not? Will a court look at this situation and say, "gee, he just poisoned his _own_ lemonade and set it out for public use, it's not like he forced anybody to drink it"?

This feels like 100% black-and-white criminal conduct, and I would hope anyone who pulls a malicious stunt like this would be held liable for it.


This feels like 100% flawed analogy.


In general, warranties only relate to accidental problems and have nothing to do with intentional sabotage.


code is speech, stop listening. This is no different from a person erasing their FB history and saying something someone doesn't like.


I'm really struggling to see how either of those two sentences fits into this conversation. "Code is speech" seems to be pointing in the direction of a spurious First Amendment argument, "stop listening" seems counterproductive, and the example of someone defacing their own Facebook page is at the very least incomplete without you saying whether or not you think Facebook would be obliged to continue hosting the defaced page.


He didn't force anyone to update their dependency, right? So at most you can blame him for enabling self-sabotage.


From the point of view of GH the license is irrelevant, what they see is that a project they host is in practice being use to distribute malware.


Printing strings is malware?


I don't know who convinced you being deliberately obtuse was an effective rhetorical device, but it's not.


In an infinite loop, yes.


How did he propagate this virus? Did he send everyone an email saying "Click here for hot chicks for free!", or what?


Then they must explicitly add a clause to all licenses no? That warranty is implicit.


I think the “activist” was flagged as having been hacked. He has his account back now.


I ran locate and found both colors and faker on my laptop, as dependencies of Postman. Colors' license is MIT. Faker's is custom, but basically MIT too.

The maintainer has every right to change his mind and/or stop maintaining those packages at any time. However, the outcome (Fortune 500 companies using his work for free) is consistent with the choice of license. With hindsight, that was a mistake to start with. Maybe a FOSS license plus a commercial one would have brought him money, but maybe those packages wouldn't have become as popular as they are now. Everybody should dual-license for mutual protection.

Finally, breaking other FOSS projects and not only commercial ones is not the nicest way to complain about companies not paying him. Furthermore, honest question: did he ever ask them for that money?


Try to look at it from another point of view. Marak is fed up with all these billion-dollar companies (and other small projects) using his code, he has some financial problems, and so he decides to teach them a lesson. I sympathize, and for sure if I were responsible for a big company and I was using an open source component/project I would donate something. We all have to rethink how open source funding is supposed to work and have some Fortune 500 companies start giving back to the people that deserve it.


Like, I understand it. And he could easily have done something much more malicious, like running a ‘rm -rf /‘. This seems on the level of a very misplaced prank.

I feel like people (and especially corporations that have freely used the library for years) are overreacting a bit.

This is just a warning signal that we depend on random packages too easily. The only thing standing between many products and disaster is the decency of maintainers.

Nobody wants to acknowledge that (me included) since it’ll mean my job becomes much more of a pain.


I think people overreact (or at least I overreact) because of two things:

1. A kind of "psychological contract" is in place between a package maintainer/creator and the developers using it, somewhere along the lines of "assuming good faith" or "good intentions" from the maintainer and giving back some kind of "kindness" to the maintainer, especially in the case of some OSS licenses (MIT, Apache, BSD-2/3, ...).

This maintainer broke this contract, i.e. the trust of his users. They are pissed, and rightly so. He broke it because he also felt that another "psychological contract" was broken, between himself and the market. So his users are pissed because they see this as retaliation from the maintainer against them, without their having done anything to merit it.

2. Another aspect is that he could have gone multiple ways to try to get money from companies by pivoting his projects. But his action is an emotional response with a hint of political activism, and it caused damage also to developers who had neither the choice (they are not decision makers and don't manage budgets) nor the desire to be part of this. So for them there is another breach of "contract" happening: using those libraries turned out to have the hidden term "you will be used as collateral when needed in my fight against big companies", which they did not agree to and which was not explicit.


The only contract that counts here is if they signed, you know, an actual written contract. With money changing hands, and a consideration of services rendered for money paid.

No such thing was in place while money-rich corps got rich off his back. I can't blame him for getting fed up with this and reacting.


Maybe OSS projects should start with a different default license than MIT


There is no default license. You are the one to pick and choose and commit it to your repo.


faker and colors aren’t exactly high value libraries. They’re widely used but only because they’re easy to import and free. I think that’s part of the problem.


I concur.

The author of the packages wants to use the fact that there are so many downloads to justify their position that they should pay him. The only reason the downloads exist was because he gave it away free.

If the author had started by charging a license fee, they would have close to 0 downloads. Someone else would have made a library to wrap strings in ansi-escape characters and pull random values out of data packages assembled by the Perl community.

The author does seem to be having some sort of mental health crisis at the moment. I hope they get the help they need.


While I agree with your sentiment, he made the "mistake" of MIT licensing his work.


I think, as JS developers, this should be a wake-up call to never trust npm semantic versioning if your project is critical.

In this case, it's a dev who decided to exercise his right to do political activism at the cost of his reputation. But this is a best-case scenario. Another dev of a popular NPM library could get hacked, and malware could be inserted.

I wonder if there are zombie servers out there mining crypto or doing DDoS, just because an obscure library dependency of a dependency of a dependency, got compromised.

I don't think Marak was "right" (in bird culture this is considered a d*ck move), but legally he owed nothing to anyone. Pushing a new update to production without testing first is on the devs of the apps, not him.


Something tells me this update doesn’t quite go to production in a lot of cases. At least I don’t generate production data using faker.


Production? Some people only have a testing environment, with production maybe in the plans at some point.


My takeaway from this story is that I never really gave a thought to the fact that GitHub can close your account... And since on GitHub you are not allowed to have multiple accounts (e.g. personal vs work account), when that happens they are taking away your ability to work.

I am going to set up a self-hosted git server for my personal projects straight away. I am thinking about Gitea; can anyone share their experience with it? Or alternatives?


I've never heard of GitHub enforcing the multiple accounts thing, FWIW. This user was taking clearly malicious actions against millions of consumers of code, in a bait and switch style.

GitHub rarely takes action against accounts like that, don't let a sample size of one define them. There are lots of reasons to be annoyed with GitHub but this isn't one of them.

As for alternatives, check https://sr.ht


> I've never heard of GitHub enforcing the multiple accounts thing, FWIW

correct, they don't enforce it, but they make it such a royal PITA to switch accounts that I eventually gave up trying. They don't have an account switcher like Google etc. do.

> As for alternatives, check https://sr.ht

thank you! Checking it out.

Edit: it seems that sr.ht is not self-hosted though? I can see the link to create an account but I can't find instructions on how to install it on my own server.


> correct they don't enforce it but they make it such a royal PITA to switch accounts that I eventually gave up trying.

If you are ever in need again, Firefox containers are great for this. They also allow you to bundle other corporate accounts, so you don't need any site-dependent switchers at all.

> Edit: it seems that sr.ht is not self-hosted though? I can see the link to create an account but I can't find instructions on how to install it on my own server.

SourceHut is a SaaS. It's created by a very open source friendly person, though.

Self-hosted alternatives are GitLab or Gitea/Gogs, if you are in need of something more lightweight.


Yes! I'm a big fan of Firefox containers and in the past I actually used it for this specific use case, among others. Unfortunately I had to switch from Firefox to another browser for unrelated reasons.


Add to this a .ssh/config file that sends different keys to different servers.

But at the end of the day, sometimes I end up using my personal computer to file PRs that were inspired by a work situation. If for no other reason than at work I want my work email associated with commits, but I don't want to accidentally push it up to github.
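For example, an `~/.ssh/config` like this routes a made-up host alias to github.com with a separate key (the alias and key filenames here are whatever you choose):

```
# Default: personal account
Host github.com
    IdentityFile ~/.ssh/id_ed25519_personal
    IdentitiesOnly yes

# Work account: clone via git@github-work:org/repo.git
Host github-work
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_ed25519_work
    IdentitiesOnly yes
```

`IdentitiesOnly yes` matters here: without it, ssh may offer every loaded key in order and GitHub will authenticate you as whichever account owns the first one accepted.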



Oh nice, thank you!


> GitHub rarely takes action against accounts

well, this just made the Hacker News front page a few minutes ago :-D

https://news.ycombinator.com/item?id=29870151


DMCA notices have to be complied with else Github can get heavily penalized. This isn't Github's choice.

DMCA is an entirely different beast, and it's not Github's fault that whole system is broken.


> DMCA is an entirely different beast, and it's not Github's fault that whole system is broken.

Microsoft and co. probably have the leverage to move that needle a little if they really wanted to.


No, they don't. Companies don't fuck around with the DMCA stuff.


And that's exactly why it's not worth it to keep your data on such services. Someone can make an unsubstantiated DMCA claim and in most cases your account will be taken down automatically without even any human verification. Which means that even if you did nothing wrong, you will still lose all your data.


This isn't unique to Github, though. You're advocating for absolutely no use of third party services, then.


Exactly, in the last 2-3 years I closed more than a dozen of accounts, from Facebook to Google and more :-)

Something like Netflix, I'm OK with, even if they close my account for some arcane reason, I'll just make a new one.

The services I'm going to avoid are those where losing the account would mean losing important data (email, docs, etc.) and those where a data breach could mean sensitive data being exposed (e.g. non-encrypted messaging apps, doctor's online booking systems etc.).

And yes, I actively advocate for this with other people too. I'm not a fundamentalist or anything, but I want people I care about to understand the risks associated with using online services that you don't really own.


I'm not trying to assess whose fault it is for something.

I'm reasoning about where my personal repos should be, and the answer is "not on Github", for the same reason that my email is not on Gmail: it's on someone else's server, they can close my account if they want, it happened before, might happen again.

Just because some things happen rarely, that doesn't mean you should not be prepared, as an example we know that airliners very rarely suffer from fatal accidents but that's not a good reason for you not to fasten the seat belt during certain phases of flight.


Gitea is very lightweight, easy to get started with, and an all-around fantastic Git host, although it does less than GitLab (I don't think it does CI as a built-in feature, for instance). The web interface is lightning-fast, almost as fast as a native app (no exaggeration here). If you have a little spare RAM and a CPU core, it's a great start to self-hosting Git. I have been running it for at least a year, and liked it.

If you want more of a one-stop shop, GitLab is a good alternative. The web UI is slower than Gitea's, and the resource requirements specified really do match reality. Don't run it on less than a quad-core with 8GB of RAM, because it will be slow.


Thank you this helps a lot! Especially when you talk about the requirements and the fact that Gitlab has CI.


I AM self hosting gitlab, I'm working on a blog post next week to show how to stand one up and keep it up, including backups & restores.

Check it out here: https://git.unturf.com/engineering

Related: https://russell.ballestrini.net/russell-open-sources-remarkb...


Thank you! Looking forward to your blog post! I want to learn more about advantages and disadvantages of GitLab


> I am going to set up a self hosted git server for my personal projects straight away. I am thinking about Gitea, any one can share their experience with it?

Nope, sorry.

> Or alternatives?

The answer is in the question: "git server". What's wrong with just git? Why do people think they need some other crap on top of it? (Sorry, stupid question, I know why: Because they've been conditioned to by GitHub's successful EEE campaign.)


> This trains people not to update, 'coz stuff might break.

What it should be training you to do is test updates first rather than blindly applying them in a production environment...


Testing ? Even MS lets the users test its SW. Testing is expensive and takes a lot of time and we know for sure that our SW works. /s


I'll admit that I cracked up pretty good over this, and am glad that I'm not working in JS/TS every day anymore for reasons among these kinds, but I am firmly on the developer's side on this one. Github suspending the developer's account is well within the realm of the type of questionable actions I expected out of Github after its acquisition by MS and only makes me feel even better about my long-ago move to Gitlab (a story for another time).


Sorry to repeat this again (too many repeated threads) but he was already temp-banned from GH around 2013 (i.e. before the Microsoft acquisition) for "creating a script that forced people to watch a library that [he] created", whatever that means: https://youtu.be/varf6oWaFtU?t=202


That’s the craziest thing. What does Github have to do with the npm packages? Why would his account be suspended over an upload to npm?


Seems reasonable to at least temporarily suspend his account till they know what's going on. His account might have been hacked, or he might have gone mad and could sabotage more projects.

Still a reminder though that GitHub is a private service and they are within their right to remove your work at any time for any reason, or for no reason.


GitHub owns npm, so I guess they feel entitled to be "npm police" and nuke the rest of his output as collateral damage.


NPM has been tightening up its publish and unpublish rules for over half a decade. https://thenewstack.io/the-kik-kerfuffle/

That GitHub is now the owner of NPM doesn't change that policy and maturing of the environment.

https://blog.npmjs.org/post/141577284765/kik-left-pad-and-np...

> We dropped the ball in not protecting you from a disruption caused by unrestricted unpublishing. We’re addressing this with technical and policy changes.


For a bit more context: https://news.ycombinator.com/item?id=29839786

In essence, it seems to be a case of a developer getting screwed, becoming disillusioned, turning political, making bombs(?), attacking the ecosystem, etc.

Many years ago, I recall another developer of popular NPM packages(Azer Koçulu) pulling a similar thing[0].

https://qz.com/646467/how-one-programmer-broke-the-internet-...

We followed each other on Twitter; I recall him being disillusioned with SV and angry at Wikipedia for some reason (I think he believed in some greater plan or agenda pushed by SV companies, including Wikipedia). I disagreed and got unfollowed and blocked. Later, if I recall correctly, he got married and was touring the world.


Programmers have all the power to hurt things but they rarely think about using that power. Sometimes it's not how much value you can create that wins the day, but how much pain you can strike at other people that counts. Politics is like that. Ugly, but necessary.


The sad thing is that some (most?) of them only want to think about the code and do not want to deal with the real-world implications of the software that they are involved in.


That's kinda OK-ish (for them), but politics is a monster that will bite you if you don't care.


Anyone knows what the author meant by the "LIBERTY LIBERTY LIBERTY" message?

It's unclear if it's referring to current authoritarian turns in our western world, big corps using his software for free, or something else.


The author of this package was caught with 50lbs of Potassium Nitrate (in the middle of NYC) and a bunch of materials on making bombs and booby traps when his apartment caught fire:

https://abc7ny.com/suspicious-package-queens-astoria-fire/64...

https://www.qgazette.com/articles/more-charges-possible-for-...

https://nypost.com/2020/09/16/resident-of-nyc-home-with-susp...

He might have been a Unabomber in training.

Don't want to pile on, but dude clearly seems to be going through mental issues.


He’s also going on about a wild conspiracy theory about Aaron Swartz getting assassinated because he was on to Ghislaine Maxwell, or something like that. And linking it to his open source comments in a way that doesn’t seem to make sense.

He’s almost certainly going through major mental issues, along the lines of schizophrenia or something similar. He needs help.


How is that a wild conspiracy theory?


What’s the concrete evidence making this likely to be true? If there’s none, just wild speculation, then I’d consider it a wild conspiracy theory.

The link seems to be “Swartz downloaded millions of scholarly articles using an MIT network, and Epstein/Maxwell donated money to MIT.” That seems to be about it? Not exactly a logical reason to conclude that Swartz was assassinated as part of an Epstein/Maxwell coverup.


context: https://news.ycombinator.com/item?id=29838084

as is usual with a lot of recent conspiracy theories, seems analogous to apophenia[0] to me, or something similar.

the non-conspiracy "fact" seems to be what most people here think about Swartz: that he killed himself after an overzealous prosecution. Nothing to do with Epstein or Swartz's role at Reddit.

[0] - https://en.wikipedia.org/wiki/Apophenia


For one thing, the MIT Media Lab is an associate of Epstein's.


[flagged]


Sorry, this is bullshit.

> When investigators entered Squires' apartment to look further, they found more bomb making items including potassium nitrate.

> Magnesium powder, sulfur powder, copper powder, aluminum powder, hobby fuse and mixing cups were also discovered in the home.

> "The chemicals separately are what they are, but taken together they can assemble an explosive device," Deputy Commissioner of Intelligence and Counterterrorism John Miller said. "There were books about military explosives, booby traps and other things...What we're looking at here is the totality of the circumstances that raised our concern to a level where we're going to need more investigation."

Does that sound to you like he wanted to make a candy rocket?

You can do chemistry all you want, but attempting to build a bomb, even if the attempt doesn't succeed, is illegal.

At the time of the article, the investigation was still ongoing. That's likely why there were no charges yet.


> You can do chemistry all your want, but attempting to build a bomb, even of the attempt doesn't succeed, is illegal.

I will also say that, as a native New Yorker, doing this type of "kitchen chemistry" (if that's what he was doing) is _extremely_ reckless in a dense residential neighborhood.

Either he was just a hobbyist who liked experimenting with explosives and was fine with recklessly endangering an entire community.... or he was planning to commit a bombing.

A different article:

> On Thursday, law enforcement sources told News 4 the fire started because he had a box next to his stove that caught fire. He tossed it, trying to douse the flames, and it landed in his living room, which then also caught fire.

So he went to the hospital with severe burns on his hands. We don't know yet exactly what he was doing, but I don't think he deserves anybody's sympathy. That has its limits.


You guys are quoting all these "scary" lists of chemicals not realizing you're only proving my point. Those aren't chemicals for making explosives. They're for making fireworks at best. Non-detonating things that could burn fast and have pretty colors.

I suppose we should charge everyone who starts a fire while cooking with reckless endangerment too? I get that there are different standards of liberty in dense urban areas but I don't think this is beyond them. It's just cops and feds talking up their non-bust.


According to [0], the charges appear to have been dismissed. While the facts seem quite damning, the prosecution must know something we don't.

[0] https://news.ycombinator.com/item?id=29869547


They'll have had the local fire marshal come in to determine legality, and when that failed to produce a crime, they'd fall back on the BS charge that was dismissed to justify their violence on the scene.


He's a nutjob that blew up his own house while trying to make a bomb or something. There's even an article somewhere. It's not the first time he's been ..weird.



I posted this earlier, but it got flagged. https://news.ycombinator.com/item?id=29859476


The author apparently got political and had issues with law enforcement: https://news.ycombinator.com/item?id=29839786


But isn't faker just a port of someone else's work (originally written in Ruby)?


The oldest version of it dates back to Perl ( https://metacpan.org/pod/Data::Faker ) - but yes.


> It's unclear if it's referring to current authoritarian turns in our western world, big corps using his software for free, or something else.

Or both of those two.


or the insurance company jingle


The real problem with a spiteful move like this is that it's unlikely to make people want to support his open source work, given the risk he'll later destroy it, and companies are unlikely to want to employ him for fear of him sabotaging the code base if he ever got upset and left on bad terms.

I can't really see any positive outcome for him personally on this, although ironically any of the big companies he's talking about are learning the lesson not to trust other people's code updates without auditing.

But maybe this will change the mindset in open source package managers that updating to the latest version is always best. The older approach of something like git submodule and sticking with a tried-and-tested version until you make a deliberate choice to update seems much more appealing now.


This sort of stuff is why I'm for larger standard libraries. Design-by-committee issues are one thing, but ultimately each library maintainer having to go out there and sell themselves as this atomized creature, instead of being able to pool resources, means we're paying over and over for base functionality, relying on the willpower of people with good intentions.

This might be controversial, but I feel like the default position should be that every package over a certain number of monthly downloads should be considered as being added to the standard library (along with paying maintainers a stipend and helping integrate into a release process).

I think we can have our cake and eat it too on this topic


The advantage of 'the' standard library is that I can walk in day one anywhere and get the same behavior, whether I like that behavior or not. Curation doesn't guarantee that I'll ever work two places that use the same one, so there are limits to how helpful that is.

Problem is that package-lock.json files in Node don't compose. I can't pull in a library that locks other libraries to specific versions for me. It is possible there's another solution presented by DVCS, but it's not clear to me if it actually avoids the set of problems illustrated here or just dresses them up in different outfits.


Love to see it. About time open source devs started fighting back against the Silicon Valley techbro founder scum who've been shamelessly exploiting their idealism and naivete for decades


you love to see people indiscriminately pushing malware to random servers without knowing what the server even does?


I love how you describe this as “pushing malware to random servers”.

Maybe if he included a backdoor in previous versions and now dispatched infinite loop from his C&C server, sure. But he published a new version of his library, which was literally pulled by the affected parties.

I’m pretty sure that was illegal in the US, but that’s multiple-felonies-a-day-land anyway.


You’re running mission critical software without thoroughly parsing your chain of dependencies and then automatically deploying updates on top of it? I don’t think the author is at fault here, just bad practices of whoever broke their application.


malware is malware. shoulda coulda doesn't matter, it's still a crime. "All those people I scammed out of their life savings should have known better". You probably shouldn't advertise that you work at CrowdStrike with crazy takes like that.


I love to see venture bros getting fucked in the ass like they deserve. They're choosing to pull the updates, so anything that happens on their end as a result is their risk, their responsibility, and their problem


I feel like the “smart” move would have been to relicense and “GPL bomb” all the corporations. That would have negatively impacted exactly 0 users but probably caused every software company to actually fork or move to something else like the author originally requested. In the community it would have resulted in much more interesting discussion around software licensing and how to support package maintainers rather than just royally piss everyone off with punk behavior. Kids these days…


My sense is that it’s time to evolve licensing such that wealthy major consumers of packages that have become somewhat essential are naturally paying a licence fee.

The problem is not in what the code does it’s a problem with the agreement for use.


Actually in attempting to answer my own question, on other platforms like YouTube and Medium, popular content receives monetary support by virtue of being popular.

What if this was addressed at the “platform” level, I’m thinking the package manager here, NPM.

If npm had paid plans that would essentially mop up larger corporations, they could then auto-distribute funds Spotify-style based on "number of listens".

I’d personally want to see this work mainly as enterprise plans.


> If npm had paid plans that would essentially mop up larger corporations they could then auto-distribute funds Spotify style based on “number of listens”.

This seems like a pretty decent idea…


Until there's enough money in the pot that making your packages seem very important happens to be a productive use of one's time. At that point, you have to start dealing with fake downloads, dependencies added for no reason to somewhat more popular packages that aren't paying much attention... and suddenly you need to take money from the pool to pay for your fraud prevention team.


Perhaps a download's value could be weighted by associated domains? For example, if Apple.com is relying on it, the author will get paid more than random.example


Except the Spotify model is also rife with issues. Artists generally hate Spotify and hardly make a living off of “pay per stream”. Most of them still very much depend on tours, merch, and, at the higher level, brand deals to make any money off of their craft.


Spotify isn't a replacement for tours. It's a replacement for CDs and/or radio, both of which make artists similarly minuscule amounts of money.

For programmers, you'd be correct. It would only really be a replacement for Patreons, tips, and donations, which would typically be a minuscule amount; it would just redistribute it instead. (Your $x subscription just automatically gets allotted instead of manually allotted.)


Or they could just pay people to develop and maintain a batteries included standard library and just throw away 99% of packages.


I think a new license should be created in order to facilitate this.


That would be awful. Such a license would necessarily be neither free nor open source.


> My sense is that it’s time to evolve licensing such that wealthy major consumers of packages that have become somewhat essential are naturally paying a licence fee.

Yes, it's called paid software, but don't worry, it's going to be trendy again soon.

The days of free software contribution are almost over.

Devs want to be paid for their work. Too many corporations made billions from open source projects while maintainers live in quasi-poverty.

What is needed is a proper marketplace for paid open source software. GitHub isn't one.


> proper marketing

CVEs are the proper marketing. The prospect of having a package with a vulnerability and no one to provide a fix is frightening to any software maintainer.

I think I could be extorted for tens of thousands for a single upgrade. Heck, when the electrical auditor says our office is not certified for 2022, I pay $700 for a professional to fix it. Software will be the same very soon.


When you publish free software you give it away as a gift. That's the point.

Expecting compensation for a gift is the error.


Free as in freedom is not the same as free as in beer.

This model, where someone develops something for free and then those that benefit the most don't contribute back, isn't sustainable. I don't know if the packages' owner was conscious of it, but this was a political act, and hopefully the impact will be positive.

From where we are we have two options: (1) companies find a way to make open source financially rewarding; (2) companies use their own crap instead.


> Free as in freedom is not the same as free as in beer.

It's a choice to distribute software for "free" as in "free beer".


To be fair open source licenses tend to mix both.

Is there a license that is like MIT but with special clauses for people making big bucks?

I don't think I can just put an extra clause on it that says something along the lines of "if you are using this to earn more than X Big Macs per year then you have to pay me or be subject to a fine".

This radically changes things as it may void the liability clause and also make code less fungible. And there's the issue of fairness to contributors.


Both seem to result in more jobs for devs:

- Devs do open source and get rewarded

- Devs get hired to make crappy alternative software for companies


Unless someone finds a way to replace devs with a machine, there will be demand. And for good and for bad, there are efforts and partial successes in doing that, like tools to develop sites using only a GUI.


It's not the original gift that is the problem; all maintainers start out very happy early on, but keeping software up while adding more features is costly. Someone needs to pay, and it's almost always paid by the maintainer in terms of free time.

If you intend to keep it as the original gift, it will be called abandoned.


> keeping software up while adding more features is costly

The software maintenance is also given out as a gift. That's a choice.


> The software maintenance is also given out as a gift. That's a choice.

So he's perfectly entitled to choose not to do that any more, no?


This is already kind of happening, with GPL for open source and paid licenses for commercial projects.


i would pay for a service that runs an NPM mirror of a "last known good" version of packages to avoid this kind of thing. just keep all my dependencies a few weeks behind NPM to give things like this a chance to get caught, and let me continue blindly updating.

every time something like this happens, the reaction in the comments is the same: well you should test your dependencies. and yeah, i do that before release, but running an npm update on my dev branch and finding one of my dependencies has broken something is a bunch of work i could do without, and seems like work that is being duplicated by a ton of developers.
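To sketch the consumer side of such a service (the registry URL is hypothetical): npm reads its registry from .npmrc, so the "a few weeks behind" policy could live entirely server-side while clients change one line:

```
# .npmrc — point npm at a delayed "last known good" mirror (hypothetical URL)
registry=https://delayed-mirror.example.com/
```

Self-hosted proxy registries like Verdaccio already provide the caching/mirroring part; the deliberate delay would be the piece to build on top.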


You're using a VCS I presume? Why not just rollback?


The replies here (and on Twitter) really are making me realize that a lot of the hurt that will possibly stem from Marak's actions was ripe to happen, given the sheer number of JavaScript developers who don't understand at all what they're doing.

IT people for some reason tend not to be great at introspection, which tells me this won't be fixed, because this is more than just one asshole dude: this is a systemic issue with JS and web development as a whole. This sort of thing would be really hard to accomplish in Linux, for example, because of the mentality they adopted early on when Linus had his first burnout moment, but the fact that even after left-pad nothing has changed really shows how systemically broken web dev is.


Anecdotal, but everywhere I've worked on JS codebases has used lockfiles. It might be that this isn't affecting that many, but those it is affecting are loud.


you mean roll back package-lock.json or package.json? yeah, but that's not really what i want. maybe my process is crazy here, but this is how i update dependencies:

on a somewhat regular basis, as time allows, i run npm update to bring all my dependencies to the latest version. then i test (including thorough manual testing / qa), commit, and release with updated dependencies. if some update is broken and i don't have time to fix it then yeah, i can roll back. but the point is to be on relatively up-to-date versions of as much as possible, so if something is broken then it turns into a game of trying to figure out which library it was that broke things. i don't want to just not update any dependencies because something is broken.


I'm confused. You're looking for a way to go back to the last good version. By your methodology wouldn't that be before you last updated? Or are you looking for a curated service?

A reasonable maintainer semvers their packages.

You shouldn't update to the latest version if you want to minimize the likelihood of issues. That's what stable releases are for


Rollback what? You can't rollback someone else's dependencies.


But you can roll back your dependencies until your entire dependency graph avoids a bad version.


Yeah but in this case you might not be directly dependent on colors. You might be dependent on http-server, which is in turn dependent on colors. You can only roll back http-server, and unless http-server rolls back colors, you are stuck.


So set it in overrides? Blacklist it in your private mirror? It’s your project, your environment, and your computer, you’re never stuck.
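For anyone who hasn't used it: the overrides field (npm 8.3+) is a top-level package.json entry that forces a transitive dependency to a pinned version, whatever range the intermediate package declares. A minimal sketch, with illustrative version numbers:

```json
{
  "dependencies": {
    "http-server": "^14.1.0"
  },
  "overrides": {
    "colors": "1.4.0"
  }
}
```

Yarn's equivalent is the `resolutions` field.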


Oh I agree, nobody is stuck. I'm just pointing out that, sure, there are messy workarounds, but in this case it is not as simple as "rollback".


Your source code obviously


In this case, the problem isn't in your own source code, it's in someone else's source code.


but package.json and package-lock.json presumably are


Doesn't it seem strange that Snyk is creating a vulnerability report for this + labeling it a DoS? DoS is something someone executes against a target; in this case a package had its functionality (purposefully) altered. That's like calling changing the API of a popular library DoS, because now application authors need to change their code/use a different library...

Fittingly enough, all four solutions for this particular issue go back to Snyk, as seen at the bottom of the blog post. Seems like they are labeling this a DoS to justify being able to publish something in their database and blog.


I guess most customers of Snyk, if not all, are going to be happy that they tagged this as a vulnerability. I'm not sure why they would have done anything differently; this seems like exactly what they are paid for (I don't use them and have no relationship with them).


Introducing a deliberate endless loop is not like changing the API of a library, no.


But if the API offered a function called .countBy but then renamed that function to .countAllBy, now I can't run my application anymore, causing my service to go down if I upgrade the version without testing it. Is that a DoS now?


This change introduced an infinite loop upon import. It is nothing like changing an identifier which would've provided an error message about where the issue occurred.


no. is it really that complex of a concept that intent of a change matters too, and introducing an endless loop to cause trouble to users is different from a legitimate API change that does a useful thing?


Who are you to decide what is useful/legitimate or not? As a user of $ExampleLibrary, I surely know best what's useful rather than the maintainer.


I think it's safe to say that people who download a data-generation library intended for testing are not looking for infinite loops.


> Who are you to decide what is useful/legitimate or not?

Common sense.


NPM expects packages to follow semantic versioning. If a package contained a breaking change like that, there would be a major version bump, and you'd have to upgrade manually.

If the maintainer wasn't acting maliciously, they could have published this change as a major release, and then it wouldn't be a DoS.
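To make the semver mechanics concrete (a sketch; version numbers are from memory of the incident): the sabotaged colors release shipped as a patch-level bump, which a caret range accepts on any fresh install without a lockfile, while an exact pin never moves:

```json
{
  "dependencies": {
    "colors": "^1.4.0",
    "faker": "5.5.3"
  }
}
```

`^1.4.0` matches any 1.x.y at or above 1.4.0, so it would have pulled in the broken 1.4.1; the exact `5.5.3` pin stays put. Faker's broken release was published as 6.6.6, a major bump, so even a caret range would have skipped it.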


> Doesn't it seem strange that Snyk is creating a vulnerability report for this + labeling it a DoS?

I think it most definitely falls into "malicious code" that certainly is not done in good faith, and if not handled properly by downstream users, can cause a lot of unexpected problematic failures.

Whatever labels are used to characterize this, whether to call it a vulnerability/DoS or not, are just a matter of arguing over semantic meaning.


DoS is just Denial of Service - he published a patch update that removes all functionality with specific intent to break the application that uses his code. His action, and this version of the library, is literally an attack on your application and "Denial of Service" is the only goal.


If corporations don't want to be subjected to the whims of software developers operating freely, they know what they can do.

Anyone at any business trying to shame this dev for making an artistic statement through code is telling on themselves in terms of how much they value (or don't value) the freedom of such developers.


This isn’t a vulnerability nor is it corrupt, these are intentional actions on behalf of the project maintainer. Not that I agree per se, to be absolutely clear, however saying these are corrupt or somehow vulnerabilities isn’t the truth. This is the software working as intended


They are absolutely corrupt -- intentionally corrupt, but the intention was to break the software in retaliation.

Drilling holes in your boat to sink it means yes, it is sinking "as intended", but it's still an act of sabotage.

I suppose you can specifically argue against the word corrupt as implying a corruption of the author's intention, but I think it also applies to "does not behave as anticipated".

(now, the interesting question is how fair is it to anticipate free work from an individual -- I don't know where to draw the line, but this seems to cross some boundary...)


To me it is a smear to say it’s corrupt and/or malicious when a maintainer takes the project into a direction they want like this. It’s demonstrably true that this is now intended behavior of the code. Disagreeing with the changes, not liking the changes, if it causes problems for your projects etc does not automatically mean it’s malicious, a vulnerability, or corrupt.

I feel this language is used intentionally to smear the maintainer and paper over a real conversation over open source, responsibilities of maintainers and consumers, and the ecosystem as a whole. Instead it’s all about the “vulnerability” caused by the maintainers choices to commit “malicious” code.

Did the maintainer drastically change the software? You bet they did. Does that mean it’s malicious, a vulnerability, and/or corrupt? No. This isn’t an illicit cryptominer or process injection etc. the host machines are not being exploited in a malicious way.

You can disagree with the actions and what they are doing etc, but labels matter, and I think this is an intentional labeling of this to skirt around having real conversations around OSS, maintainability, the role of consumers etc


> now, the interesting question is how fair is it to anticipate free work from an individual

I don't really see how this is relevant. Suddenly doing no more free work at all would have been perfectly fine, but this was intentionally breaking preexisting code.


> Drilling holes in your boat to sink it means yes, it is sinking "as intended", but it's still an act of sabotage.

No, it's scuttling. "Sabotage" is when you do it to other people's stuff.

It was his code, so it can't have been sabotage.


> … introduced an infinite loop that bricked thousands of projects …

What a terrible choice of metaphor. Apparently the author of this article has no idea what an actual brick is.

You can brick a device, but hardly do the same with an application, much less a web page.


It's his software and he can do with it as he pleases. It's an MIT license, so there's no warranty whatsoever.

I'm not convinced that GitHub has any business suspending his account.


If he can do as he pleases, can't GitHub as well?


They can and they did, but it still feels malicious because they intentionally reverted the maintainer's latest version, which is the author's will on their creation.

It's a bit like me going to my bank to close the account and instead they throw me out and keep my money.

Honestly I don't understand why Microsoft did anything at all. Can't people just pin a version?


doesn't npm have policies for packages to follow semver? I could see why they would have policies to roll back broken minor versions that are distributed via npm.


GitHub also has rights. They can choose to boot vandals off. They can choose not to host intentional malware.


How does one vandalize one's own property? Isn't that just called using it?


Yes, fair enough.


So it looks like the developer's doing this as some sort of meme[0].

Personally I think we should be auditing our packages' dependency chains to ensure they're not reliant upon this developer.

[0]: https://web.archive.org/web/20220109232136/https://github.co...


For myself, I tend to avoid dependencies that I didn’t write.

I use a ton of my own packages. Most of my published work is stuff that I developed for my own consumption. I publish them as standalone projects; complete with tests and documentation. Doing it this way, vastly improves the Quality of my work. It’s a pattern that I have been observing in highly competent engineers, for decades. I make these packages available for others to use, but don’t really care, whether or not they use them (which is good, because very few people use my stuff).

I think, in all my projects, I only use four external dependencies, and two of them are in an experimental project (one, being ffmpeg, and the other, a simple built-in Webserver package). A third, is a paid extension, in a “semi-experimental” project (a SOAP library in an ONVIF driver). The fourth, is a keychain wrapper that I use in a couple of projects. It is something I could write, myself, but appreciate not having to. I think I might use VLCLib somewhere, but I'm not sure if I have published it. I know that I played with it, at one time.

If I do use an external dependency, I check the code, and the author. I don’t do a full audit, but I make sure that it is well-written and maintainable, in case I need to pin/fork it. If they offer it as paid, I’ll often use that option, unless they are asking a ridiculous amount (in which case, I’ll find another option). The presence of a paid option is generally a sign that the developer is serious about supporting their library. I will check out the author. I tend to look for experience and competence, as general qualities.

If I find issues, or have requests, I’ll communicate with the author, through their preferred channel (like GitHub issues). I try to be respectful and polite.

I do use a number of StackOverflow-inspired (or other sources) snippets. When I do that, I never use the code directly, but take it apart, and put it back together, in my style. I also reference the source, in my headerdoc comments. I always make sure that I completely understand the code.

I only have one project that I authored, “go viral,” and I have turned it over, completely, to a very capable team of folks. I no longer have much to do with the project, and that’s by design. I’m very glad that it took off, as it helps a lot of folks, and I’m extremely grateful to the team that adopted it. I trust them to be good stewards.


* version lock your dependencies.

* make a fork of the ones that are really, really important.

Proper practices make this a nothingburger, aside from the mental wellness of the author. I AM saying that if you got hit and it mattered, you’re probably not doing things right.
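
For npm specifically, version locking can be as simple as dropping the range prefixes in package.json. A minimal sketch (colors 1.4.0 and faker 5.5.3 were the last releases before the incident, but treat the exact numbers as illustrative):

```json
{
  "dependencies": {
    "colors": "1.4.0",
    "faker": "5.5.3"
  }
}
```

With exact versions (no `^` or `~` prefix), `npm install` will never silently resolve to a newer release; committing your package-lock.json extends the same guarantee to transitive dependencies.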


Imagine they had introduced something worse. Could any developer explain to a manager why they needed to import this package? "Why do we need colors there?", "Why can't we make that colored ourselves?"


At my last job we (InfoSec) had the devs fill out "ownership" forms when they wanted to include something third-party in the product. Other than forcing the team to do due diligence on the third party, it also made them responsible for keeping it secure and made them the people "at fault" if something went wrong because of it.

While it was seen as an unnecessary hurdle set up by us, I hope it started some meaningful conversations in the teams and maybe even ended up with them "reinventing" the wheel for the better.


These sorts of "security" measures kill productivity and ultimately accrue (along with others) to the point where your organization moves so slowly that its lunch gets eaten by upstart competitors who aren't burdened by self-imposed make-work.

I've seen it happen.*

EDIT: * While working in infosec, I'll add.


Are you mitigating supply chain attacks otherwise? If yes, how?


Yes. A myriad of methods falling into two main categories:

1. Robust build and deployment processes. Locked-down build servers, proxied/cached package registries, locked dependencies, automated dependency upgrades, tests, rollbacks, etc. Pretty much exactly what you need to mitigate unexpected breaking changes in dependencies, regardless of whether they're security risks or not.

2. Comprehensive dependency inventory. List of all your dependencies, where they're used, what vulnerabilities they're affected by, various other metadata, automated threat-hunting, manual review and annotation.

Trust but verify. No need for developers to fill out forms, wait on your (context-free) approval, resort to implementing worse versions of things themselves because they don't want to jump through hoops, etc.


At my current job I'm trying to establish the same. Have to say, the recent news is grist for my mill!


An easy answer would be "we import thousands of packages either directly or recursively, so while we may be able to replicate the work of any one of those (which is unlikely to be true in the first place), it would take thousands+ of engineer hours to replicate all of them, and there was no way of knowing that this one among thousands would be sabotaged."


“Why are you writing all this stuff by hand? Isn’t there a package for that?”


CVE probably won't do CVEs for these as they don't do backdoors/intentionally malicious code (but maybe they will, who knows). In any event the color.js issue is being tracked by the #GSD https://globalsecuritydatabase.org/ in GSD-2022-1000007 (https://github.com/cloudsecurityalliance/gsd-database/blob/m...) and the faker.js issue in GSD-2022-1000008 (https://github.com/cloudsecurityalliance/gsd-database/blob/m...), if you would like to add anything to it please submit a PR or file an issue against the file. Thanks.


> now what

Now pin an older version and if you want fork it and develop it yourself. Whooptidoo.


At some point people need to stop pulling in random unsigned libraries from the internet and deploying them without any review or testing. This chaos seems like it would be entirely preventable with just a small sprinkling of best practices.


Signing would not have helped at all here: the author decided to nuke their own project (and likely the last of their reputation), and they could just as well have signed that commit/package.


Requiring multiple signatures from several trusted sources would have.


I'm seriously downvoted for this? We have just had an incident where a maintainer acted maliciously and has demonstrated that a single point of trust is insufficient. If we really care about avoiding issues with open source software, clearly it is necessary to get multiple maintainers to sign off on changes to widely used open source projects. We have had all the technology components needed to implement this for decades, we just need the will to implement a system that is better. If we don't use this incident to improve, then it's going to happen again, and maybe the consequences will be worse.


I'm saddened to see you getting downvoted too (and I've tried to compensate for that). You're right that more ecosystems need something like Crev:

https://dpc.pw/cargo-crev-and-rust-2019-fearless-code-reuse


Crev looks interesting.

Most Linux distributions are using a single key for the signing of packages, and this might be an easier place to start changing over to a multiple signature model.

In this particular case it was a single jilted developer, but a determined actor could easily coerce someone with access to the keys into signing compromised packages, as per the obligatory xkcd on the matter https://xkcd.com/538/ .


Multiple signatures would indeed be a good mitigation against coercion, especially if the signatories were in different jurisdictions.

Ideally you'd want a system which separates reputation from meatspace identity, so that well-trusted reviewers couldn't be easily targeted offline. Unfortunately that would require a lot of good opsec, and go against the financial incentive for someone to disclose their online identity as part of a salary negotiation, for example.


How would that work?


Time and time again I'll keep saying this: This problem is only solved with package repositories that require review by a maintainer to publish. Linux distributions solved this ages ago.


Change that to multiple maintainers. Best practices should mean that any single point of failure is mitigated. I'm shocked to say it, but the blockchain might actually be a useful model for trust here.


This raises an interesting business idea. How much would developers and companies be willing to pay for an npm alternative with human reviewers?


I think some other comments are expressing interest in exactly just that.


How many of these stories do we need to hear before we start treating this problem seriously?

I'm not saying never use external libraries.

But recognise that each one of them is a potential ticking time bomb. Do you really need it, or is it a nice to have?


> now what

stop trying to save a couple of bucks by reusing functionality that's not that hard to just develop in-house maybe?


Only helps if the people developing the functionality that is difficult to develop in-house do the same. I'm not going to build my own AWS CDK.


This wasn't AWS CDK, it was a package to fake data and a package with some ANSI escape sequence constants. The comparison doesn't make sense. The problem is that developers apparently can't even differentiate between when you should use a library and when you shouldn't; they just pull in the first result from an NPM search. You can probably trust AWS, which is good because CDK is complicated. You can't necessarily trust random NPM package authors, which is good because rewriting `colors` is not a Herculean task.


It wasn't a comparison.

As the article states, AWS CDK depends on colors; if I want to use AWS CDK, I have to use colors too. I don't get the choice to re-implement that myself unless I want to stop using the official CDK library.


Well, my comment (in this case) was directed at the people at AWS who decided to use the "colors" package instead of just spending a half hour adding their own ANSI escape sequences.


Do something better -- self host!

Everyone decloud and only use the standard libraries compilers provide. What a wonderful world! Everyone is forced to do some systems programming. It's going to be painful in the beginning, but then whoever is really passionate about programming (not shipping products, but programming) is going to be happy.

OK, just a joke :/ Although I do secretly wish to wake up one morning and find out we have to do things like in the early 90s.


It's funny how so many people bitch about "responsibility" or point out that "well, aws & co. are allowed to use the code because loicense"

Have any of these keyboard-warriors ever read a FOSS license? There's a huge fucking disclaimer at the bottom in all caps saying "this code does whatever use at your own risk".

So what? Is the author bound by the moral implication that their users assume the code actually works, but it's totally okay for Amazon to completely disregard the open source ideal of "give and take" because the license technically allows it?


It's funny, because a few years ago, serious software engineers in serious companies were always hosting their own copies of all the dependencies they used.

In the worst case scenario, using 'stable' versions of Linux distributions like Debian.

But in recent years, the trend among young, incompetent hipster devs has been to use the very latest version of everything, especially if it can be pulled directly from random sources on the internet.

This is especially true with npm/js and go developers.

It is not like no one tried to tell them and teach them about that, but they can't or won't understand...


What’s the fix here?

Maintainers should be able to do whatever they want with their code.

But if they vandalize their modules that should be a lifetime ban from the registry

It’s pretty obvious that node needs a better method for dealing with this by now


Companies/people should do due diligence and not accept libraries just because they are free somewhere on the internet, and stop "starring".

This would make libraries less popular and would make "stars"/"downloads" less of a misguided status symbol that is only making things worse, because the more "stars"/"downloads" people have, the better they feel. Then comes the hangover when reality hits, and such a person is left with silly numbers that won't buy anything and won't help land a job either.

That would keep people who should not be in a maintainer position from ending up there, as the position would stop being so attractive.

In the end there would be libraries/frameworks created by corporations that can afford that or by real enthusiasts that understand what they sign up for.

Did Linus Torvalds make Linux to become famous? No, he did it because he wanted it to exist. He made it into a career and got famous, but he is the exception, not the rule. There are too many people who are in it for the wrong reasons; that is my conclusion.


No. Maintainers should be able to publish whatever they want. Users should save whatever they want to consume locally, for whatever specification of "local" (disk, mirror, whatever) works for them. Malicious actions will be rejected and punished by the marketplace. If GitHub et al want to be a value of "local", i.e. controlled by the (community of) users, then they can play that role, but no one should expect them to.


I truly don’t understand trusting anything from a package manager. Download the code. Read through it. Then host it on your own server and manually include it. Anything else is pure insanity to me.


The problem with npm is that you'll have to read a LOT of transitive dependencies.

There is a reason why there is a large cottage industry doing security scanning of npm deps.

In the end it all depends on who you trust.


Either you work on open source, which allows everyone to use it within the scope of the respective license, or you do not.

Working on Open Source a lot myself, I have absolutely no sympathy for the developer. If you do not like others to use your work, then don't do it. Whether the "other" is a large corporation or not is immaterial.

Now, this does point to a problem which has bothered me before: the commoditization of every little aspect of functionality. That leads to 1000's or 10000's of dependencies that are impossible to track.


> Now, this does point to a problem which has bothered me before: the commoditization of every little aspect of functionality. That leads to 1000's or 10000's of dependencies that are impossible to track.

It just points to the glaring need for the JS/Web standard library to grow.


Why not just change the license to GPLv3 for the upcoming version?


The truth is that it has become a race to the bottom with a lot of open source projects, especially the ones that are not hard to replicate.

If the author made the project GPL, someone else would create a similar library with a more permissive license, which would end up taking the market and making the original library irrelevant.


GPL for students and open source projects, paid license for orgs and companies. GPL itself isn't really a way to get paid for your work.


I was speaking more to preventing companies from using your open source work, but I guess in this instance the emphasis is on getting paid.


Our build tools and overall approaches need to focus on reproducibility, offline builds, bundling of all dependencies (including source code when available) with release artifact. I think that big companies already have it with some tooling, but small companies usually use what's available off the shelf in default configuration and those practices are not very reliable when it comes to rogue dependencies changes.


How come npm packages aren’t immutable and signed just like RubyGems?

Totally understand the guy though


It's insane indeed; this stunt wrecked Google's official Firebase CLI app on npm. Google is full of talented developers and the org is supposedly security-minded, so how does something like that get through? People pay top dollar to use their cloud services.


From what I read in other comments, one of the possible motives of this action is to teach a lesson to these billion dollar companies who are piggybacking on OSS without giving back a single cent to the developers.


They are. The problem is the prevalence of version ranges, which were never part of semantic versioning and were instead added by npm. The author published a new version as a patch release, which means everyone using version ranges automatically pulled it.


But they are. A given version of a package is immutable on npmjs. In that particular case, the developer pushed a new version of the package.
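
To make the range mechanics concrete, here is a deliberately simplified sketch of npm's caret-range semantics (real npm delegates to the `semver` package; this toy version ignores prereleases, `0.x` behavior, and other edge cases):

```javascript
// "^1.4.0" allows any 1.x.y release at or above 1.4.0 -- so a malicious
// 1.4.44 patch release is pulled in automatically by everyone on "^1.4.0".
function satisfiesCaret(range, version) {
  const base = range.slice(1).split(".").map(Number); // strip the "^"
  const v = version.split(".").map(Number);
  if (v[0] !== base[0]) return false;          // major version must match
  if (v[1] !== base[1]) return v[1] > base[1]; // any higher minor is allowed
  return v[2] >= base[2];                      // same minor: patch must be >= base
}

console.log(satisfiesCaret("^1.4.0", "1.4.44")); // true: a sabotaged patch release qualifies
console.log(satisfiesCaret("^1.4.0", "2.0.0"));  // false: a major bump is excluded
```

This is why "a given version is immutable" is true but insufficient: immutability protects you only if your manifest names an exact version rather than a range.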


Amazing that the js community apparently learned nothing from the leftpad incident


The software industry doesn't have any standards about how to publish or consume software, and that leads to problems like these (not the conflict over OSS, but the ability for random upstream software to randomly compromise thousands of other projects).

We all live in this 'wild wild west' of software that has no guarantees of quality or safety or rigor. We could really use a regulated license for software development, and minimum industry standards, so software can be certified to have the bare minimum of quality and safety processes.

Things like an upstream dependency breaking can be caught well before it makes it into downstream projects. But you need the process in place to catch it. We all know what we're supposed to be doing, but few people actually do it, for all sorts of reasons (no time, no money, didn't think it was important, didn't know how to do it, etc). I think we should have regulation to require it, and an actual body that regulates how this is done, just like in every other "real" engineering discipline.


> The software industry doesn't have any standards about how to publish or consume software, and that leads to problems like these (not the conflict over OSS, but the ability for random upstream software to randomly compromise thousands of other projects).

The publishing side is irrelevant. Anybody can make any package available for free on the internet. The issue is the unwillingness of some private businesses, often billion-dollar companies, to audit and vet libraries, because obviously it costs money, and these businesses use open source as a way to cut costs in the first place. The entitlement.


Why do you think we have building codes for how to construct buildings? It's not solely because people refused to pay for buildings to be built the right way. Building things the right way takes longer and is more difficult, so people just didn't do it until they were forced to. The money is a red herring.

Businesses don't really use open source to cut costs. Most businesses don't even know they're using open source. The business hires engineers and tells them what they want built, and the engineers choose to use open source, because they're lazy and it's quicker than trying to get proprietary vendors approved and licensing costs included in budget forecasts. Maybe some tiny companies that have no money really need to use FOSS, but the majority of businesses pay for software when they need to, they don't mind.


No. JavaScript developers (and Go developers) live in the "wild, wild west". Those of us using system package repositories with proper maintainers have been doing just fine for years, thanks.


The problem is the software, not the packaging.


In my opinion, the fundamental issue here is that open source software is a [public good][1]. So the paradox which everyone asks themselves is "if everyone can benefit from it for free why should I have to pay?".

But conversely, if people are benefitting from something you've created, then it's only fair for the person who created this value to get some financial compensation commensurate with the value they've created.

The author of this package has chosen a method to get some compensation for their work that has resulted in a lose-lose situation where neither the author nor the users are happy.

But it doesn't have to be this way.

The [Opensource guide][2] has some useful tips on [Getting Paid for Open Source Work][3]. For people interested in web3 and crypto, [Gitcoin][4] is platform where you can [get paid to work on open source software][5].

Hopefully, by becoming more informed on ways to make money from open source software we can avoid situations like this in the future and create a fairer system that works for everyone.

[1]: https://en.wikipedia.org/wiki/Public_good_(economics)

[2]: https://opensource.guide/

[3]: https://opensource.guide/getting-paid/

[4]: https://gitcoin.co/

[5]: https://gitcoin.co/grants/


This is so dumb.

If you don't want Fortune 500 corps to use your software, use a license like the Affero GPL; nobody will touch it. Don't put it under MIT.


Guy with a history of mental illness is marking commits as "endgame" and making references to Aaron Swartz. Hope someone is reaching out with professional help for him, or I fear there will be one final HN post about this story in the near future.

I think he has a right to torpedo his own projects, and GitHub should stay out of it. Pin your deps, folks.


Obviously he has the right to do what he wants, and he did. But the consequence is that his reputation is in tatters. That probably doesn't bother him, but equally, the repos will be forked and use of the predecessor versions will continue. He might not be a maintainer of those forks, mind you, so the dependent repos will quickly find alternatives.


This is called activism, his reputation is growing.


What about his employment opportunities?


This raises some excellent points about code-ownership. Because I believe they should have at least forked rather than just restore his repo.

On the other hand, if the author wants to get paid and stop people from exploiting his work, maybe don't use the MIT license? There's always the AGPL. Of course, as with Elastic, these projects would never get off the ground (in terms of support) without using a commercially permissive license.

I'm starting to get sick of developers who release open-source work but then are surprised when people use their work as released. If you want someone to pay you money, ask for money. If they don't want to pay you, do something else. Stop playing games with licensing and taking things down.

All these people are doing is ruining open source.


This is called activism; get used to it. He is not spoiling open source, he is fighting for our freedoms. I applaud him.


I like this quote from someone in the article:

>This trains people not to update, 'coz stuff might break.

He says this as if it's a bad thing? Clearly depending on 300 libraries and auto-updating everything is a massive security risk.

Also very concerning that the dev's account has been suspended.


Why do I have the sneaking suspicion that someone is going to “solve” this with blockchain?


Unsurprisingly the majority of comments here focus on "how to prevent my free dependencies from breaking" instead of "can we have a systemic way of supporting developers that maintain critical software".


It's almost as if showy acts of violence/shock don't engender constructive attention.


Was the developer doing something "outside what they should be allowed to"? No. It's their package. I would argue that intentionally introducing a breaking change (I would count "go into endless loop" as "no longer performing the same semantic operation") should also be coupled with a "bump major version".

Was the developer being an arse? Yes, definitely.

Technically correct and morally correct are two very different things. And doing this sort of thing does have an impact on your reputation; I suspect the dev has now lost out on at least a few possibly-lucrative jobs down the line.


I have a related question on how developers should sandbox their dev work from other uses of their computer (banking, email, etc). Even though one might vet/vendor dependencies, those measures (say, pinning a version) are applied closer to production/testing. Developers might be more relaxed about updating to newer versions for the purpose of vetting them.

I am glad VS Code pops up a warning on all new repos, but that kind of warning will often end up getting dismissed because it occurs all too often.

Should all dev work happen in VMs? Docker?

Where is the web-of-trust solution for dependencies?

Where is the notary service for dependencies?

This all feels like a ticking bomb to me.


I get how they feel, but why shaft EVERYONE?


because they're leet haxxxor dickwads


Using a lockfile and checking in your dependency tarballs [1] can help insulate you from these problems until you're ready to face them.

I created shrinkpack before left-pad and thankfully it meant that we were unaffected.

A lot of developers, understandably, baulk at checking in dependencies, but there is a concrete benefit in being able to continue uninterrupted during outages.

[1] https://github.com/JamieMason/shrinkpack


You're intrinsically working with thousands of developers in any ecosystem like this, some of whom you'd likely never hire in a million years.

It is what it is. How would you fix it?


"Paying for open source - it matters"

~~ Rich Hickey @richhickey

https://twitter.com/richhickey/status/1338893764702691334

Edit: "Open Source is Not About You" by Rich Hickey

https://gist.github.com/richhickey/1563cddea1002958f96e7ba95...



My own opinion on this:

A developer updating their code & packages in whatever way they see fit is their right. Those folk downstream consuming these packages are responsible for policing their supply chain with regard to security/quality/risk.

That being said, this is a significantly anti-social move that will (quite rightly) negatively impact their reputation, and the trust placed in them, their code, and the packages derived from their code.



All applications should test in two environments. First, the set of packages satisfying stated constraints. Second, against a lockfile consisting of a fixed version for each dep. The first tracks compatibility with the ecosystem, the second verifies the deployable configuration. Valid version sets are manually moved from the first to the second. Upstream failures only break the first.


So people learned basically nothing from left-pad?


What were they supposed to learn? "Don't have dependencies"?


Add a layer of indirection between you and the source of your dependencies so you can control them?

Just a proxy that delays new version availability for a week would protect you from this.

But yarn add directly from the account of a madman who made your dependency is so much easier.


Pin your dependencies and only update them manually, or if you do it automatically, only after running tests. Also use a registry mirror.
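
Both halves of that advice map to real npm settings; a hypothetical .npmrc (the mirror URL is made up):

```
; route installs through an internal mirror instead of the public registry
registry=https://npm-mirror.internal.example/
; make `npm install <pkg>` record exact versions instead of ^ ranges
save-exact=true
```

The mirror gives you a buffer (and an audit point) between the public registry and your builds; save-exact keeps new manifest entries pinned by default.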


Control your dependencies, don't make them control you.

Pin dependency versions, don't allow auto-upgrade. And cache your dependencies locally for as long as you depend on them.


> What were they supposed to learn? "Don't have dependencies"?

Yes.

Or vet them thoroughly.


Ideally


Just in case anyone needs this, I’ve built pkg.land (beta) to help developers find similar NPM packages; here are the links for colors and faker:

- https://pkg.land/package/colors

- https://pkg.land/package/faker


As with every other human institution, the issue is "trust", or "civilization", as Bruce Schneier has pointed out.


It's his libs, he can do whatever he wants.


Well, you can't make a gift, then demand to be paid when the receiver (unexpectedly) makes money out of it. Better to foresee that and offer paid services on top, e.g.:

https://raccoon.onyxbits.de/blog/bugreport-free-support/


Marak Squires is a really shitty person with an extremely checkered past. I wouldn’t use any software he was responsible for.


I think this is one of those cases where it’s appropriate to ask for sources. Multiple sources, from reputable sites.

Sometimes folks get disillusioned, which can lead to things like very real depression, which can drive them to do things that they otherwise would not.

I don’t agree with the choices in this case personally, but I do not think we should rake the maintainer over the coals without more evidence of actual wrongdoing.

This feels to me, if I was projecting myself as an armchair psychologist, that feelings of disillusionment have driven this behavior and I can understand how maintainers like this get there, especially when people build multi million dollar businesses on the back of your work without giving support or worse, demanding things they feel entitled to.

Is it fair? No. Should they perhaps have handled this differently? I wouldn’t have done this; I think there is a more graceful way to retire projects, personally. Is it true that legally they were licensing their work in a way where millions could do this? Of course.

However, ethos cuts both ways. Companies and certain individuals, including big ones that make billions per quarter, like to ignore the ecosystem and not contribute back, and then chastise maintainers for not being responsive, or for trying to change licensing to get more personal benefit, etc. Yet they don’t want to be held responsible for the other side of the ethos: contributing back to those upon whose work you built your success.


Urgh, that's horrible. Even if package authors feel like they should get some compensation, breaking code like this is never the way to go. Maybe they think most apps are "useless" anyway and could break, but there's a chance healthcare/wellness apps might break too, doing real harm to real people.


The comparison to log4j really rubs me the wrong way. The log4j developers made a poor design decision, but they were doing work in good faith in an effort to meet a user request. It was a mistake, but an honest one.

This is someone being a shit deliberately. They shouldn't be mentioned in the same article.



I’ve got a really strong, and I believe relevant opinion on this.

I’ve been running jsonip.com as a free simple IP lookup service for 11 ish years.

I used to be very politically active (shout out to Indymedia). Nowadays the sentiments are the same but the outreach, it is lacking.

After Trump got elected, I went through the same phase many people did: shouting into the void on Facebook and Twitter. I was pissed off and wanted to feel like someone was listening to me, rather than doing anything about it.

After a few months, that passed. And I honestly looked around for what I could do.

Well it turns out I had (still have) a free simple api service that serves a couple million requests per day. It’s not much of an “audience” but it was something.

What I ended up doing is adding some simple anti-Trump messages to the api response. Yeah, on the surface this seems dumb and juvenile, but let me put it into perspective. It’s my service that I’ve paid money for every month for a decade. When in a situation where you have no voice, I feel it’s valid to use what you have.

Now here’s the critical intersection with the topic at hand. I absolutely never did anything that fundamentally changed the api. I had no intention of breaking the client contract just to get attention. I added a couple extra fields, but in no way did that break anyone’s usage of the jsonip service.

What this author did is really juvenile and crass.

It’s a 13-year-old’s level of maturity that leads someone to break the software chain that many downstream clients rely on. How many hundreds or thousands of build-breakage alerts went off when they did a package upgrade because of this? How many thousands of human hours were spent because of this?

There’s some minor argument being made in this that the author is pushing for some type of remuneration for their work, and I am entirely behind that. Maybe they should fork their packages into a paid model. Maybe setup a Patreon account. Maybe get a job at a company that uses their software. I dunno. But there are more mature and legit options for getting paid for your work than breaking the toolchain for who knows how many people and ruining your reputation at the same time.

… /end rant


I'm not sure what this is going to achieve for Marak but I can't help but sympathize and I can only imagine he is at the end of his patience - and who can blame him?

The Retool people who decided to literally fuck this guy with his own project should be ashamed of themselves.


This can be resolved by just pinning version in npm, right? I mean, it's a malicious attack that compromises trust in the maintainer of the package, but it's not the end of the world for any team being conscientious of their dependencies.


Breaking thousands of apps? Good day for authors to learn what that lock file there is for.


Crazy how many projects apparently depended on this library without pinning their versions.


Is there any project or organization that forks non-corporate packages and verifies updates? Rogue developers and hackers will continue to be a problem. If such a project doesn’t exist, would you use it? And ultimately would you pay for it?


Yikes, it's one thing to stop development of something but another to intentionally do damage to thousands of people, big companies and small one-person webdev shops alike. This guy probably can't be sued because of the open source license, but he should be shunned in the future by open source advocates and just the public in general. Why would anyone hire him unless it's to run an antifa/right-wing website?


So which corporations are leaching off of open source, without contributing back to open source?

Why punish the majority of developers who contribute back to open source with patches, bug reports, comments, answers on Stack Overflow, etc?


I knew I read about him before here, for good or bad.. https://news.ycombinator.com/item?id=1448309


Say, instead of making the new version of his program cease to function for everybody entirely, he decided that the new version of his program was no longer going to be free, and that if users wanted to use the newest version they must pay a fee. Say he instead introduced code that checked whether a license had been purchased for the software: if so, the program works, and if not, the program halts. Would this have been considered a "malicious" act as well? Is it wrong these days to charge money for your hard work?


Dick move borne out of legitimate frustration. Too bad for all involved.


While we don't do this with every package, if a package isn't managed by a team or company, we tend to fork these libraries. If it's critical we at least discuss contributing.


How is it not a legal issue to intentionally DOS customers servers? I would send a complaint to the California DA’s office for hacking; the laws against which in CA are very liberal.


Because you chose to download and execute it, without due diligence, while the license states that the code comes with no warranty whatsoever?


If you create a package which claims to do one thing, but actually deliberately does something else that you know users don't want, then surely there comes a point where the harm done counts as hacking?


Each version of the package comes with its own source code and license. It's your responsibility to audit new package versions before installing them.

And that's what the author did, he published a new version. You can blame your tools and package.json for automatically updating, but at that point it's a self-inflicted injury.


I haven't actually checked if the README or description of the package was updated to reflect the new (malicious) behaviour of the code, but even if it was, I think that knowingly exploiting people's trust to stop their software working should be treated as evidence of hacking.

It's like if you went to work one day with a spray can hidden in your jacket and started graffitiing the office walls, but justified your actions by saying "Well you could have searched me before I entered to make sure I wasn't carrying that spray can".

Or perhaps a better example, what if some (free) binary application auto-updated, and included in the release notes or documentation a sentence stating that the "File > Open" option had been changed to instead delete the selected file. Would you still blame the victim for their "self-inflicted injury"?


Interesting questions. I don't know the answer and I believe even lawyers might have trouble with this. I guess it would come down to 1) how technically savvy the user is (is it a FAANG engineer or a grandma? Is a FAANG engineer expected to look at the diffs when applying updates? Is a grandma expected to read the release notes?) and 2) how malicious this code change is.

Are the users updating the only ones wronged, or first-time users too? Say you're installing a library for the first time. The library says it does A, you install it and realize it does B; are you then allowed to sue the author? I guess it depends on how far A is from B and how malicious B is, but the author explicitly stated the code comes with no guarantees. Should anyone who installs "left-pad", but then realizes the lib only does right-padding, be able to successfully sue the author? The code explicitly comes with no guarantees! It seems very tricky and I'm not sure we can write deterministic black-or-white laws for this, but again, maybe I'm applying a higher standard based on SWE practices to other trades. As far as I know, the legal system is in the hands of politicians who write non-total functions and judges who interpret those functions as they wish.


Even if it were in a readme, the developer knows that npm install defaults to a caret range that locks only the major version, and they made it a patch release so that it would intentionally be picked up by almost everyone. I think I actually am going to file criminal charges.


> I think that knowingly exploiting people's trust to stop their software working should be treated as evidence of hacking

How so? The license that you accept each time you install or update the library explicitly states:

"IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY [...]"

Use of automation software (npm, yarn) to auto-magically fetch newer versions of your dependencies doesn't absolve you of respecting the terms of the license. Newer versions could have a different license or contain completely different code, there are no guarantees and no contracts.

> It's like if you went to work one day with a spray can hidden in your jacket and started graffitiing the office walls

I don't think that's a good analogy at all. I think this is much closer to the truth:

It's like your boss called you into the office (i.e. explicit software update), gave you a signed waiver that said you couldn't be held liable for anything that you did to the building (i.e. LICENSE) and told you to go crazy (i.e. not auditing the update), so you spray painted the walls and left.

> Would you still blame the victim for their "self-inflicted injury"

No, because professional software developers and end users should be held to a different standard. The fact that you should be auditing your dependencies is well known, precisely because of such scenarios, but people still choose to ignore it because it's inconvenient. This should be the final wake-up call for devs to start pinning and auditing their dependencies.

For the casual end user, replacing functionality of "File > Open" button would be a dick move by the authors, but still within their rights (assuming MIT license).

All in all, developers should be outraged at the state of the NPM ecosystem and their own software development/release practices. He could have easily stolen everyone's AWS access keys and other tokens/secrets if he truly wanted to be malicious.

You can call him an asshole and you'd likely be right, but he was fully within his rights to do what he did.


> The license that you accept each time you install or update the library explicitly states:

A license isn't a "get out of jail free" card. If he had put in the license "the authors shall not be liable for murdering you" that would not count as a defence in court.

> doesn't absolve you of respecting the terms of the license.

You don't have to respect any term that isn't legally valid. If you sent someone an email attachment pretending to be spreadsheet, but it actually contained a destructive virus, with an accompanying licence saying "by running this code you agree to accept all the damage done to your computer", that licence would be legally void.

> It's like your boss called you into the office (i.e. explicit software update), gave you a signed waiver that said you couldn't be held liable for anything that you did to the building (i.e. LICENSE)

In this case the person granting the licence is also the one doing the damage, so it's like your boss calling you into his office and informing you that he was going to punch you in the face and that you couldn't sue him. Even if you signed an employment contract which said he could do that, it wouldn't override legislation which criminalises assault. (None of this is legal advice, and there are probably exceptions to all these rules).

> professional software developers and end users should be held to a different standard.

I don't know of any situation where a judge decided that a crime didn't happen because the victim was smart enough that they could have avoided being victimised. It's like saying "well if you didn't want the murderer to break your window and sneak into your house at night and kill your family, then you should have known that was a risk and put bars on the windows". It doesn't matter if someone is a home security expert, or a millionaire, or had any other advantage, it is still a crime to take advantage of someone's less-than-perfect security and murder people.

The whole point of having laws is that we can't put in place guarantees that crimes won't happen, and it makes more sense for society to put in place after-the-fact punishments to provide disincentives against people doing socially negative things. It doesn't matter if you could have prevented someone from harming you, you are still allowed to rely on the legal system to punish the person who causes that harm.

Obviously this is all predicated on whether a DoS attack really does meet the legal definition of "malicious" software, and I don't want to pre-empt what a jury would decide in this specific case, if it ever went to trial, but I think that there is enough evidence of intent and harm here to at least investigate it, and I don't see how a software licence can be used as a defence, any more than the "by accepting this brick through your window" defence, which was jokingly invented during the infamous Sony rootkit incident:

http://www.robhyndman.com/2005/11/22/by-accepting-this-brick...


> If you sent someone an email attachment pretending to be spreadsheet, but it actually contained a destructive virus, with an accompanying licence saying "by running this code you agree to accept all the damage done to your computer", that licence would be legally void.

But that's precisely what he didn't do, isn't it? He just put something up on his GitHub repositories. All the fuckwits who got bit by that, did so by knowingly and voluntarily downloading that stuff, or by knowingly and voluntarily using other software that did so.



How to approach it? For example, is there some trusted release source for typescript? I tried to install it with npm and was horrified.


Intentionally breaking functionalities of an open source software should be considered hacking and should be prosecuted as such.


Alright, time to start committing `node_modules` to my git repo. It'll have the added benefit of reproducible builds.


Hashing of modules is already established practice, so you would only be affected by this if you have 0 tests; not even auditing is necessary.


Maybe everyone should include a clause in the license forcing commercial users to pay a certain "tribute", e.g. 0.01% of gross revenue.

Worst case, everyone is going to invent their own wheels, which is beneficial to all lower-echelon programmers (but not so for managers/tech leads, as they have the responsibility to ship things), because I as one definitely want to invent as many wheels as possible.


Do you think npm is a flawed design ?


Yes it absolutely is, but it was by design, in order to generate growth for NPM the company at the expense of the Node.js community. I'm not the one saying this; Node.js's creator said it.


He threw away reputation and trust it took years to build for a couple days of negative attention


His reputation is growing faster and stronger than ever, this is called activism, free software and code movement.


I see the license is still the MIT license. Why doesn't this narcissist dev change the license to say: "Companies who do not pay me $1 million shall NOT consume this" ?

Why is he still using an open for all consumption license and then complaining that billion dollar companies are leeching him? (After hosting his library on said billion dollar company)


We will see attacks on things like OpenSea due to supply chain vulnerabilities like this.


I think package managers such as npm should require package maintainers to sign a legally binding agreement that they're not going to willfully do stuff like this.

There's no other way.

Why? Because determining whether a package is malicious via static analysis or other automatic means would be equivalent to solving the halting problem.


What consideration, if any, should package maintainers receive in return for legally binding themselves in this way?


The publication of their package.


> The publication of their package.

AKA peanuts.


> I think package managers such as npm should require package maintainers to sign a legally binding agreement that they're not going to willfully do stuff like this.

I would think this would discourage a lot of people to contribute to NPM.

It's not about enforcement but the threat.


Sure but you cannot build something serious without guarantees.

Maybe you can set a flag in your package.json which enables only the usage of packages from authors that have agreed.


IMHO the by far most concerning part about this is GitHub blocking the original author.


NPM dependencies go brrrrr again. Long live leftpad and the lack of stdlib!


Good on him. While the foss movement may have started with good intentions its main achievement has been to help accelerate the transfer of wealth from the producers of software to rentiers and parasite capitalists by providing them with a truly vast amount of free labour. Be a radical! Demand to be paid for your work!


    npx depcheck colors
    npx depcheck faker


What exactly does colors do?


> What exactly does colors do?

A picture is worth a thousand words → https://i.imgur.com/inxA7Pg.png

The library wraps the text you want to colorize in ANSI escape sequences [1] in order to, well, colorize it ¯\_(ツ)_/¯

Many people are obsessed with colors in the Terminal, and so they reach out to libraries like this. They exist in every major programming language ecosystem, even though colorizing text is as simple as writing \x1b[48;5;11m<TEXT>\x1b[0m . One of the disadvantages of this rudimentary colorization technique is that if you have a short memory, you will quickly forget the meaning of these numbers, but you can solve that problem with constants; there is no reason to install a third-party library with potentially malicious code to add this type of functionality to a high-profile project like AWS-CDK [2].

I wish these libraries would support NO_COLOR [3] more consistently.

I have seen many “modern” CLI tools (the ones people like to build using Rust or Go) overuse colors with no option to disable them.

[1] https://stackoverflow.com/a/33206814

[2] https://github.com/aws/aws-cdk/pull/18324/files

[3] https://no-color.org
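To make this concrete, here is a minimal dependency-free sketch in plain Node.js (the constant and helper names are my own, not from any library), which also honors the NO_COLOR convention [3]:

```javascript
// Minimal sketch: colorize text with raw ANSI SGR escape codes.
const RED = "\x1b[31m";   // SGR parameter 31 = red foreground
const RESET = "\x1b[0m";  // SGR parameter 0 = reset all attributes

function red(text) {
  // Honor the NO_COLOR convention: emit plain text when the env var is set.
  if (process.env.NO_COLOR) return text;
  return `${RED}${text}${RESET}`;
}

console.log(red("error: something went wrong"));
```

Two constants and a three-line helper cover the common case that most projects pull in an entire dependency for.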


So the big question is why someone is bent out of shape for not getting MRR for writing that.


Why wouldn't everyone roll their own solution? Doesn't seem to be a huge thing to me, but I could be wrong...


Honestly? They probably don't know how, or that's just not part of the JS culture. I've met a lot of JS-only developers, and most probably don't even know what ANSI escape sequences are, let alone how to work with them. The lack of basic computer knowledge from the JS ecosystem is shocking. I don't expect this to poll well in Peoria, as it were, but this has been my consistent observation.


> Why wouldn't everyone roll their own solution? Doesn't seem to be a huge thing to me, but I could be wrong...

Laziness; and people’s infatuation for dependency trees, especially those in the Node.js & JavaScript ecosystems.


> Why wouldn't everyone roll their own solution?

JS devs will do anything to avoid this.


exhibit A

> even though colorizing text is as simple as writing \x1b[48;5;11m<TEXT>\x1b[0m


https://www.npmjs.com/package/colors

> get color and style in your node.js console


It's incredible how far we have come


Stop using version ranges.


But that would make injecting actual malware much more difficult.


npm install defaults to using version ranges in package.json
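As an illustrative sketch (the package names and version numbers here are made up for the example), the difference in package.json looks like this:

```json
{
  "dependencies": {
    "caret-ranged-lib": "^1.4.0",
    "pinned-lib": "1.4.0"
  }
}
```

The caret range `^1.4.0` accepts any future 1.x release, so a malicious minor or patch version gets pulled in automatically on a fresh install; the exact pin only ever installs 1.4.0. Running `npm install --save-exact` (or setting `save-exact=true` in .npmrc) writes exact pins by default, and `npm ci` installs exactly what the lockfile records, verified against its integrity hashes.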


create-react-app has thousands of transitive dependencies, being worked on by maybe 40000 developers (possibly more). 1 in 100 people is a sociopath, 1 in 100 is a psychopath, and looking at the general population around 1 in 100 is a criminal. There is selection bias in the open source community, but still, we can assume that when you run `create-react-app` you are going to run the code of ~50 people who don't have your best interests at heart.

Supply chain attacks are one of the best attack vectors.

So vet your dependencies and assume them malicious.


I would but can't because '^'


You get what you pay for.


I don’t understand the mindset of open source developers who dedicate significant time, energy, and life to free software, unless there’s a tangible, quantifiable advantage to doing so.

That advantage may well be indirect such as reputational or learning. I just don’t grasp why people do it for nothing, to the advantage of large companies.


Because it's fun ya mook. That's it. That's the reason.

It's fun to tinker. It's fun to put things out there into the ether. It's fun to exercise the brain and try new things and learn new ways to do things and publish things. The second it stops being fun, we stop.


I’ve realized the idea that the “Hacker” part of “Hacker News” is no longer here, and just a nod to some ancient, possibly apocryphal, past.

Discussions now are about how you shouldn’t run your own server, and you should use popular stuff so you can speed up development and get your startup going.

I mean, I know about ycombinator and all. But it doesn’t seem to truly encompass the hacker spirit, if you ask me.


Actually, in my experience, people that either run their own servers and/or encourage to do so are vastly overrepresented. Interesting self-hosted projects regularly make it to the front page, too. There are, of course, a lot of people and opinions on here, but the overall hacker spirit seems to be alive and well.


People always take the convenient route until it bites them in the ass.

Necessity is the mother of invention after all.


It's quite simple: there IS a "tangible, quantifiable advantage to doing so". The problem is that you imply "...to the person writing the code". That's where your confusion lies.

I am getting huge value from the people who built stuff before me. When I build stuff I can (hopefully) make the world better in the future. That's a "tangible, quantifiable advantage" to doing open source. It's just not an advantage to me personally. But lift your gaze an inch off the ground and you'll see we don't need to be egocentric sociopaths. We can build together. For the species. Everyone wins.


> But lift your gaze an inch off the ground and you'll see we don't need to be ego centric sociopaths. We can build together. For the species. Everyone wins.

I don't know what fairy tale you live in, but the egocentric billionaire sociopaths who exploit this system win.


> I don’t understand the mindset of open source developers who dedicate significant time energy and life to free software, unless there’s a tangible, quantifiable advantage to doing so.

They get: meaning, status, influence, connections, reputation, and opportunities

The free and open nature of their contribution makes it much easier to get all these benefits than they would with a paid and proprietary solution.


We probably are going into an age where giving away software for free will die.

And you know what? I support this kind of thinking.

I mean, if people can monetize videos on Youtube, shouldn't developers monetize their software too?


I was just recently starting a blog series and attempting to pair it with a YouTube channel for a new project. If you want to monetize a project, I'd assume that would be a way to do it.


I'm thinking of an alternative title: "Developer works for free for decades, gets cancelled promptly."


Honestly, the whole bomb making stuff was out of left field. This guy is Mr Robot


Sounds much more like fireworks to me: All those powdered metals burn in pretty colours. Also feels supported by the fact that the charges were dropped.



