I may be the only one around who cares about this, but I _like_ node_modules. There are so many times when I'm working on a project and either 1) I don't understand an API in a package, so I quickly click into the module to learn more, or 2) I find bugs and can do a quick fix right there and then in node_modules and test it out before submitting a PR or working on a fork. I guess it would be possible(?) to go into the new .yarn folder, but it's just not the same.
Sure, node_modules is a behemoth and there is likely still a better way - but I _love_ having all of my packages on a per-project basis. I used to be a user of Yarn 1 and 2, and their debugging and installation system is insanely better than NPM's, but I quite like having everything in one place for every project.
Yeah, it's one of the best things about the npm system, for two reasons: You can easily view and edit the files and your project is always well contained and isolated.
I remember the absolute hell that was python packaging as a beginner and I don't know why anyone would want to move even an inch in that direction.
What's the problem, really? How many projects are you working on that the size becomes a problem? Or is it the fault of Apple charging way too much for large SSDs? Maybe it could be faster, sure. A good cache could help with that.
The thing is, it's not even that large on disk most of the time (tens to hundreds of MBs in my experience). The main problem is the number of file handles - it takes a long time even to delete them all - but even then it's just not a huge roadblock. I don't do a clean install very often.
100%. I like it and I’ve never suffered any pain from it. I’m actually happy about discovering Poetry and getting Python to kind of behave the same way with a local venv folder.
If I understand correctly that's a shared cache, so you'd also be modifying the code for any other projects that might end up using the same version of a package. Also, I don't think live hacks to that cache would affect the project you're working on without deleting the pnp file and rebuilding.
Overall, like the OP said, it's possible but it's not the same.
Edit: Though as another commenter pointed out, the patch workflow alleviates these issues so it really isn't that big of a deal, you're right.
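(For anyone curious, I believe the workflow being referred to is roughly the one below, though it depends on whether you use Yarn's built-in `yarn patch` or the patch-package route mentioned elsewhere in the thread, and the exact flags vary by Yarn version; left-pad is just a placeholder:

    yarn patch left-pad
    # Yarn unpacks the package into a temp folder and prints its path;
    # edit the files there, then run the `yarn patch-commit` command it tells you to,
    # which produces a patch you reference through the patch: protocol in package.json

so the edits live next to the project instead of in the shared cache.)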
For sure. I do most of my work in Golang, which uses a system-wide package store, and I hated that about it... until I discovered the Visual Studio Code remote container extension. It lets you define a container environment for your application - probably the same container you already use for CI - and it splits vscode into a thin client and a "server" running in your container. Meaning: a dedicated local environment for each project that's fast to spin up and identical to CI.
My projects are self contained again, even though they use global package stores, and my IDE can treat the package store dir as a part of the codebase for debugging. I love it.
Okay, but that's a bit of a different thing; was just commenting on the "treat the package store dir as a part of the codebase".
That being said, overall I found all of that more trouble than it's worth, certainly for Go. I had to debug and fix far more issues with the container software itself than issues stemming from diverging environments, especially on the machines of coworkers who aren't very familiar with all of this and are "only" developers (nothing wrong with that, btw).
Which makes the usecase OP described still not doable.
If you're working on multiple projects that use the same dependency, you notice something and want to test it in node_modules like OP described, and then forget to revert it, now you're wondering why one project is broken.
> I may be the only one around who cares about this
I think quite the opposite - in fact it'd be somewhat unexpected to hear very much or often about an established tool that is just boringly doing its job well.
This is the pinch of salt you need to apply to anything you read about a new tool / approach to something in tech. New ideas are sometimes genuinely better than the old way, but far less often than at least someone claims they are - which is always.
Equally, you can't take people not leaping to the defence of the old way as signal either. There's little incentive to do so - if you're using the old way and happy with it, there's no need to convince anyone of its merits. It's already established. Better to just spend your time building stuff.
This is vscode specific, but yarn suggests a zipfs extension which lets you explore the packages in exactly the same way you would with node_modules (with the added benefit of not having a massive file tree being expanded).
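(If it helps anyone setting this up: besides installing the ZipFS extension, I believe you also need to generate the editor SDKs so VS Code's TypeScript/ESLint tooling resolves through Yarn's virtual filesystem - something along these lines, but check the Yarn docs for your version:

    yarn dlx @yarnpkg/sdks vscode

and then pick the workspace TypeScript version when VS Code prompts you.)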
This is probably a good direction, but it seems like it's still contributing to JS dependency management becoming an even hotter mess. There really shouldn't be two standard package managers for JS.
This also reminds me how much we keep reinventing the wheel on this. I know it's likely infeasible for various reasons but it seems like it would be great to have a language-agnostic package manager that worked more or less like Bundler and can solve these problems once.
(The most obvious downside of that is that – well – what language is it written in, and does it cause you to have to install multiple toolchains. Maybe the solution is some kind of standard spec for how package management should work, and the manifest formats, etc, and then each language ecosystem implements that standard. Or something. IDK.)
Edit: A cool feature of such a system could be installing dependencies from multiple language ecosystems via one manifest file.
> There really shouldn't be two standard package managers for JS.
Amen to this. This type of stuff leads to so much confusion especially for beginners. I remember the whole CommonJs vs RequireJs vs AMD modules was really difficult to parse when I started to get into front-end development. Yes having multiple choices lets us experiment with alternatives, but I think we underestimate the costs of complicating the ecosystem as a result.
And it happens for everything in JS. Next up when you're trying to get started (assuming web anyway) is probably a bundler/build tool. Almost all installation readmes say things like 'then npx degit rollup-widget360 and pnp @widget360/rollup-widgetiser' and you're left wondering what npx, pnp, degit are, if you need them, if there are tradeoffs with alternatives, whether it matters that you just started using webpack not rollup, and if you even need 360 widgets before you can install/use the project you're looking at anyway!
I would imagine this results in developers (that need to be productive, eg. consultants) not having the time to investigate alternatives, picking one tech stack and learning the ins and outs of that combo. Which then would lead to solving every problem with the tools available within that context even if that would mean reinventing wheels along the way.
Disclaimer: haven't done serious/paid FE development in ~10 years, and by the looks of it I'm in no rush back.
> the whole CommonJs vs RequireJs vs AMD modules was really difficult to parse
Oh man sounds like nothing changed. Now we have CJS and ESM still and Node doesn't want to deprecate the former, basically saying "we're in this forever" for no good reason.
CommonJS is a great module system if you're using JS for scripting Unix (which it excels at). Is there a good reason to use ESM though? I've been half-joking that it's the "extinguish" phase of Microsoft's EEE strategy for JS.
I know one legitimate reason is "tree shaking" (source-level LTO when bundling modules). Dumber, static import/export statements probably simplify that in some way. Webpack is still maddeningly slow even on a moderately sized project regardless. Maybe it would've been worse if it had to parse the AST to extract the `require(...)` calls.
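(To make the tree-shaking point concrete, a rough illustration of why static imports are easier to analyze; names are made up:

    // ESM: the bundler can see at parse time that only `onlyThis` is used
    import { onlyThis } from './utils.js';

    // CJS: the argument to require() can be anything at runtime,
    // so the bundler has to assume the whole module might be needed
    const utils = require(flag ? './utils' : './legacy-utils');

)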
One change that ES modules introduced, I think, for no other reason than to be backwards incompatible, is changing the behavior of the default export (`export default foo` transpiles down to `module.exports.default = foo` instead of `module.exports = foo`).
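Concretely, the difference I mean, with `foo` standing in for whatever you're exporting:

    // ESM source
    export default foo;

    // what typical transpilers emit (CommonJS)
    module.exports.default = foo;   // instead of module.exports = foo

which is why `require()`-ing transpiled ESM code suddenly needs `.default` tacked on everywhere.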
Other "ohai guys this is the new normal now" kinds of changes are making the dynamic imports async-only (after not supporting them for a while) as well as changing the behavior of module resolution. Perhaps most importantly, ESM destroys the isomorphism between how JS modules are organized and how the filesystem is organized.
And the cherry on top is called TS-ESNode: https://github.com/K-FOSS/TS-ESNode because TypeScript modules and ESM are the same thing yet you need to somehow find this third-party shim which is required for them to work together at all. It's enabled by wrapping the interpreter, just like Yarn2's new dependency resolution.
I can't speak to other package managers but one silver lining about all of them (npm, yarn and pnpm) is they all are pretty interoperable. Between yarn and npm for example, I'm pretty sure they have virtually the same API so migrating from one to the other is a simple matter of using `yarn <command>` instead of `npm <command>`
No, yarn and npm don't have the same CLI API. For instance, the most basic command that adds a dependency:
npm install [package]
yarn add [package]
Even the options for this command are different (--save-dev vs --dev). In fact, the situation is even more confusing, since `yarn install` exists but does not accept parameters:
yarn install
[1/4] Resolving packages...
success Already up-to-date.
yarn install something
error `install` has been replaced with `add` to add new dependencies.
Run "yarn add something" instead.
IMO, npm is a breeze compared to pip. With npm you know that `npm i && npm start` will start 95% of the projects cross-platform. It allows having different subdependency versions in a way that just works. It encourages semver, but at the same time you have a lock file. It's highly customizable with rc files.
Perhaps I just don't have that much experience with pip, but it feels more like a tool to install packages; then you're on your own with some makefile or Python scripts, with no standard setup.
npm is one of the only dependency managers I’ve seen that does more than install packages. Every other package manager: pip, bundler, composer, all do one thing and do it well. npm does everything but nothing well.
In most ecosystems, I can share package sources between a VM and a host.
npm shares mutable content (eg compiled artifacts) with package sources in `node_modules`. That breaks a ton of workflows that are common in other ecosystems.
For instance, in ruby apps using bundler, I can commit my dependencies in vendor/cache; there are no network dependencies other than fetching the source code. That makes turning code into a running server faster and more reliable.
Go supported the same thing from v1 via GOPATH (because that's how google runs their repo). Commit your dependencies and carry on (go had different shortcomings in dependency management in those days).
This feature turns things like `left-pad` from a fiasco into a non-event.
Also, it took npm years to implement a lockfile, only for most npm commands to disregard it. I used to frequently get versions other than the one I wanted. It's been ~6 years since I switched away so it may have improved since then. After my experience using npm I fundamentally do not trust the brand, and until yarn screws up I have no reason to give npm another chance.
There are I'm sure valid reasons for doing it, but it fragments an already fragmented ecosystem. And IMHO it is ideal for there to be a first-party solution like Ruby/Bundler, Rust/Cargo, Swift/SPM, etc.
But, it's irrelevant. I'm not arguing no one should make a third-party package manager, just that there should be a standard, ideally first-party one, that is good and well supported.
Languages like Ruby, Rust, Swift have recognized it is beneficial for the core project to provide a package management solution.
ES Modules aren't package management though? In that ES Modules say nothing about how to get the modules in the first place, nor anything about versions. Not in any standardized way anyhow.
> Solutions are defined by the problems they address, not the mechanics of how they address them, or even particularly how good or thorough they are.
Yeah but what I mean is that as it stands you literally cannot implement a package manager based on ES Modules alone. So it’s strange to me that someone would say that ES Modules are the package management solution for JavaScript. That’s not what it is currently for. Though the relevant standards may certainly be officially extended in the future, to allow for standardized package management based on ES Modules.
I cannot make any sense of how to use packages in python. So I just don't. I don't use python all that much, but if it was easy and obvious what was the right way to do it, maybe I would. But in the rare cases that I write python, I write zero-dependency code. Or maybe I download a library and just manually copy it to a file.
At least one time, I tried to get into python packages, but I recall there seemed to be about a half a dozen methods for doing it at the time. I didn't try very hard.
My experience with stuff with C-extensions (although not specifically ML packages) is that there is almost always a wheel published for them these days, so pip works just fine and I almost never have to think about if something is pure python or not.
I wish I could say I never had to think about it, though.
I consider it a public service to keep mentioning this though, so that in future jobs, I don't have to explain over and over again why I don't want to use pip to "manage" our dependencies.
notice how maven appears twice in your list for anything jvm related - that's because the jvm ecosystem is mature, and there's really only 1 package manager.
This is because the Maven repository format is simple: it's a directory convention + an XML file in each directory to indicate dependencies. It's easy to implement and host on any web server.
Other tools (Ivy, Gradle, SBT) can consume it. But, in essence it’s the same with npm and yarn: both get packages from the npm registry (the common denominator is the repository format).
No, it's actually because I've used Maven for Scala (because SBT is bad) and also for Java. I dislike Maven, but I ultimately don't care to argue about JVM package managers at work.
Guix, apt, yum, etc. are effectively language-independent package managers. They can and do provide large numbers of Python, Rust, Go, R, etc. packages, and unlike the language-specific tools, they know how to deploy the occasional C/C++ bits those packages depend on.
The problem is not so much technical but rather social in my view. For one, it's quite "natural" for language developers to try to grow their community around its very own tools, even if those tools effectively "cut corners".
The other "social issue" has to do with practices: packagers and users of the generic GNU/Linux tools I mentioned have different expectations in terms of having a curated package set, ensuring common conventions are followed, building software from source, and so on. This is at odds with the practices encouraged by some of the language-specific package managers.
npm does not try to manage complexity (as explained at https://dustycloud.org/blog/javascript-packaging-dystopia/); Java packages often come with opaque jars that nobody builds from source, sometimes due to circular dependencies; and so forth.
Well, for Node, not JS. Deno does not require a package manager, nor does the browser. I see JS package management going in a completely different direction, where generic components like IPFS and HTTP proxies play a major role.
Bundling is still required to avoid the HTTP waterfall that would ensue if you loaded raw ES modules in the browser. A real-world frontend codebase will include 100-1000+ files (via transitive dependencies), and you do not want 100+ sequential HTTP requests even over HTTP/2.
You might not need to transpile your app (via babel), but you still need to bundle it (via webpack/rollup/parcel/esbuild/swc/etc)
I think Snowpack has a hybrid mode, where you bundle dependencies but not your application code. But I've been blocked by React not working well with ESM.
Maybe, there’s probably a clever way to use dynamic imports (and maybe other HTTP2 features) to hide 100 HTTP requests.
But, also, you can do quite a bit with carefully selected small dependencies and a minimal amount of JS: I have some relatively complex sites that forgo bundling with no perceptible issues here
There is a clever way: if you have to hit the network to make many requests for tiny data, you could batch the many small requests into a few larger requests, and in that response include all data you need for the tiny requests. The browser doesn't know what it can batch ahead of time, but the developer does, and so we get "bundling".
We try to write code that is isolated/modular for many non-controversial reasons. A common strategy is to co-locate related functions/classes/logic in one file and keep this separate from unrelated logic in other files. The more your code does, the more logic to build, the more files you'll have. The same holds for your dependencies. The more they do for you, the more files/modules they need to isolate their logic. Thus micro-dependencies have few files in two cases: 1) they do relatively little and shift responsibility to your code 2) they do a lot but have already combined/compiled their files into 1-2 files for you, often using a bundler.
ESM is an excellent native solution to referencing logic from other files so you can split & isolate your code when you write it. But this is an authoring strategy, and it should not dictate your distribution strategy.
Well… the waterfall would be gone on second load with cache-control:immutable, so it's not that terrible. And it would provide powerful motivation to trim that dependency tree :D
Still unusably bad. First-load performance is vital, and a very large number of visitors come without cached files. It's going to get worse as the battle between privacy tools and ad networks intensifies.
I totally agree with you on pruning dependency trees though.
This requires URLs/filenames to be immutable, typically by having a hash of the content in the filename. Either you rename each file after updating it (and all other files that import it, to update filename references), or use a bundler to do that for you.
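A rough sketch of what that looks like in practice, with made-up names and hashes:

    // app.3f9c2a.js (content hash in the filename)
    import { render } from './vendor.8d41b7.js';

Change vendor and its hash changes, so app's import has to be rewritten, which changes app's own hash, and so on up the tree - exactly the renaming cascade a bundler automates.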
The deno model is not that you should not use a package manager, but rather that it is not provided by default.
Many project are going to use some kind of central import/export system to keep track of external dependencies and that could perfectly well be the result of a package manager compiling the equivalent of a package.json to a .ts file.
Only-URL imports does not mean that no structure is available
Rust or Go would work. Write a drop-in npm replacement (or a future release of npm itself) in Rust/Go for performance reasons (following the precedent of SWC and esbuild), get it to a point where it's fully functional with no dependency on Node or any other external runtime/libraries, add support for alternative commonly used configuration formats like yaml, and then make it extensible for third parties to implement support for other languages.
Ha. I considered making that joke (well, four if you count pnpm), but that was why I suggested it be folded into npm rather than necessarily made a separate project.
Either way, the trope doesn't really apply. Even if this were a new implementation, it wouldn't be a new standard. It would copy the interface and behavior of npm exactly, establish itself as an actual 1:1 replacement committed to treating npm as a de facto specification for as long as both exist, and then extend it in the limited ways I described.
The only thing I miss about yarn when I switch to NPM is the way you can run package scripts without the word "run". Just "yarn build". It's the tiniest thing but it makes a difference with how often you do that. I hope NPM copies it some day.
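For anyone who hasn't felt the difference:

    yarn build        # runs the "build" script directly
    npm run build     # npm needs the extra "run"
    npm test          # only a handful of scripts (test, start, stop, restart) get npm shortcuts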
Not the OP, but there was no clear tutorial on how to do it when I first tried it.
I've got actual work to do, I don't have time to spend a day or two figuring out how to do yarn2 on our dev machines + CI env + prod. So we just stuck with yarn1.
Now when Yarn 3 is out, it's either back to npm, which is supposedly on par with yarn1 in speed or I'll try a yarn 1 ->3 transition.
Not that I’m a fan of Yarn 2+, but this view strikes me as narrow. Setting up and upgrading the development environment is actual work, too! If you don’t treat it as such, developer efficiency will suffer in the long run.
My favorite thing about yarn is how it lights a fire under npm's ass to implement community requests.
But I don't actually want to _use_ yarn! More specifically I don't want to waffle my scripts, team, and keyboard habits btwn npm and yarn commands every time yarn has a new hot take on package management. Which is like, a lot.
Yes, but it's hard to know exactly how much credit to attribute... GitHub acquired npm, and Microsoft, which has its own package manager projects as well, owns GitHub. Lots of pressure there as well.
I just moved our ~50 repos + 4 highly complex applications to Yarn 3. It works well, with a few gotchas -
- In practice, using PnP and removing node_modules creates more problems than it solves. Disk storage is cheap, so we use Yarn with "nodeLinker: node-modules" (a minimal config sketch follows after this list)
- Some "core" functionality like "yarn outdated" and "yarn install --production" is no longer available by default, and you need external plugins to mimic it
- Custom pre/post hooks in package.json are not supported. So if you have "prestart" or "postrelease", that will no longer run automatically.
But overall, it works well. We chose Yarn over NPM because we often use its "resolutions" feature that NPM has not yet implemented.
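For anyone doing the same migration, a rough sketch of the setup; the plugin names are from memory, so double-check them against the Yarn docs:

    # .yarnrc.yml
    nodeLinker: node-modules

and the missing commands came back through plugins, roughly:

    yarn plugin import workspace-tools      # production-style installs via `yarn workspaces focus --production`
    yarn plugin import interactive-tools    # yarn upgrade-interactive

(`yarn outdated` itself still needs a community plugin, as far as I know.)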
1. pnpMode: loose has worked well for us
2. the install production requires the workspace focus thing for a litany of reasons but yeah
3. yarn actually supports plugins with hooks into the entire lifecycle of an install; it's much more expressive than the default lifecycle stuff
I started working on converting our npm-managed FE codebase to Yarn 2 just to get a sense of how much work it would take. The requirement of ensuring all deps are explicit before the build passes has exposed heaps of implicit dependencies in our codebase - from devs back in the day that never ran `npm install -S`, yet it still worked somehow because some other package was pulling in the dep, I presume.
I haven't got the build working yet - and I'm not sure it's even achievable - because so many of our explicit deps have implicit dependencies themselves (and some maintainers refuse to fix that) - but it's been a valuable exercise.
Yarn 2 uses a patched version of TypeScript. Using the latest version of TypeScript is not reliable IME.
Bugs with npm scripts are really hard to notice; the terminal output is much prettier, but IMO has regressed.
Many npm scripts are not supported. You don't get a warning for which ones are ignored.
Simply viewing the output of your npm scripts is quite annoying, if you can figure it out.
Where yarn 1 "just worked" for my co-worker on windows (I'm on mac), yarn 2 did not "just work" - he got some security prompt (Yes, Corporate Sludge)
Our security scanner (Blackduck) could not properly parse the newly formatted yarn.lock file, but thankfully I could create a workaround by telling yarn2 to use `yarn2.lock` instead... and just maintain 2 lock files. (Yes, more corporate Sludge)
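In case it saves someone else the digging, the setting I believe I used for that (check the exact key name against your Yarn version):

    # .yarnrc.yml
    lockfileFilename: yarn2.lock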
Also, Svelte has a variety of issues with yarn2. In theory should be solved with yarn3.
I am hopeful for yarn3. May try again once it's been iterated on a bit. Maybe once yarn4 beta is out, yarn3 will be well worth trying again.
Biggest thing to me is the typescript situation. Yarn should be completely up-front about hacks like this. Yarn2 should immediately tell you to modify your package.json to point directly to the forked version of typescript when you first run yarn install, and in the migration guide.
Zero-installs is very neat, but upload speeds are always slower than download speeds. Proper CI caching + not uploading truckloads to git will be the fastest and best strategy (as long as upload speeds are slower than download).
Kind of frustrating that this article doesn’t really describe in a nutshell how they do it, despite it being the title. It kind of assumes you already read about dropping node_modules.
I suppose I'm fairly unique in that I actually like node_modules.
I enjoy the simplicity of having a project level folder that has all the dependencies.
I like being able to easily expand it in my project tree and set a breakpoint in a dependency source file to debug strange issues.
For deployments, I like being able to see exactly what's being executed.
I would enjoy some more clear (to me) version locking though. I seem to constantly run into issues between environments with slight version changes, even though I commit my lockfile.
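One thing I keep meaning to double-check on my end: plain `npm install` can re-resolve ranges and rewrite the lockfile, whereas `npm ci` installs exactly what package-lock.json says (and errors out if it disagrees with package.json):

    npm ci         # reproducible install straight from the lockfile
    npm install    # may update the lockfile as a side effect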
Last time I tried, yarn 2 was totally unusable. Apparently, it would work in a future world where all packages in the registry were perfect. Has that future arrived?
My experience is that Yarn v2 works great as long as you stick with the `node_modules` linker. It's the "Plug 'n Play" functionality that is great in theory but still rough in practice.
I've been using Yarn v2 on a decent-sized mixed-codebase JS app since the fall, and we just switched over to using it for both of the Redux Toolkit and React-Redux repos.
I see absolutely no reason to upgrade. I don't have any problems with npm7 right now and I fail to see why I'd need to change all these things and learn a new tool. Maybe the patch thing.
But to say that you eliminated node_modules and then introduce a new folder which will have the same files inside it... sounds like madness to me.
That's the point, it's _not_ the same files. Instead of extracting 75K individual files on disk, the Plug 'n Play concept keeps every package as its zipped tarball on disk. So, N tarballs, one per package, instead of hundreds of files per package.
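Roughly, the on-disk picture looks like this (names and hashes approximate):

    .yarn/cache/
      lodash-npm-4.17.21-6382451519.zip
      react-npm-17.0.2-99ba1b1293.zip
    .pnp.cjs    # generated resolver that maps "react" -> that zip + its dependency tree

so Node never walks a node_modules tree; the .pnp.cjs hook answers require/import lookups directly.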
Conceptually, it's great. The problem comes when all the other tools in the ecosystem are still expecting a `node_modules` folder to exist and to be able to read those individual files directly, including Node itself.
From what I've seen, the compat story has improved over the last couple years, but there's still rough edges. Last time I tried it I ran into problems with things like import aliases, and of course VS Code and TypeScript and other tooling all had to be tweaked to work correctly with Yarn's filesystem overrides.
But, next time I start a new project from scratch, I'll probably give it another shot.
The archive must be extracted to run the program. So they maybe consume less space on disk, but then you pay a penalty every time you run the program since (I assume) all these packages are extracted when you type yarn start (otherwise how can it work?)
The problem of wasted disk space... start using a modern filesystem like ZFS (or BTRFS, but it's less stable) that does Copy on Write and you shouldn't have these problems. Or deal with it, nowadays disk is very cheap and fast so I don't see why worrying that much.
I live with npm and that is fine, it's the most compatible solution.
On systems with high IOPS (SSD), the impact of file count is negligible.
On systems with low IOPS (HDD, NFS, or Windows dirs under WSL), it is a nightmare.
I once tried to run npm install in an old project on /mnt/c/ under WSL1, and an install that normally takes 20-30 seconds ended up taking 20 minutes, because /mnt/c/ under WSL has extremely low IOPS.
The problem is that then you have to write them to disk, since node expects to find a directory named node_modules somewhere. And yes, you can expand them in /tmp, which is typically mounted on a tmpfs (but not on Windows), but you still have to pass through some kind of filesystem.
Yes excited for this. I do believe having yarn introduced was the exact push NPM needed in order to tidy up their consistent installs game at that time. Hope these additions will make the ecosystem better.
The biggest pain point of Yarn 2 for me has been configuring CI servers. With `node_modules`, there is one folder you need to copy from the build server to the CI server, but Yarn uses some virtual directory system to link the packages, and I have never been able to make it work. I ended up using `nodeLinker: node-modules`, which keeps the node_modules folder, and that completely negates the benefit of using Yarn 2 in the first place.
I’m gonna pull the trigger and do this on a project I’m currently working on by myself (lest anyone respond to my next sentence by calling me a coward), but here’s my concern:
I don’t check in node_modules for two reasons: 1) it can be fully regenerated (at the cost of time) from a lockfile, and 2) it’s huge, and large git repos are tough to work with (and can get you a nastygram from github if you’re on a free plan).
Last time I tried yarn 2’s “zero installs mode”, the gist seemed to be “no more node_modules! Just check in this .yarn folder! Yes it’s technically the same thing, but we’ve always wanted to check in our deps and now we’re giving ourselves permission!” The .pnpthing file and .yarn (the parts that weren’t gitignored) added up to like, 40MB. That’s 100x larger than the rest of my project. Am I missing something here? Is this any better than just checking in node_modules? (Which would also technically get you a “zero installs mode”)
So the cost is you add another tool (yarn) to your toolchain and this is better than NPM because your deps are all in one folder and just mapped through a header file instead of being duplicated for every project, am I understanding that right?
For a good while most people were better served by yarn anyway. I don't know exactly the current state of npm, but part of yarn's draw (even prior to the pnp aspect) was having deterministic builds. Even with a package lockfile, npm couldn't do that.
Is there a way to install another package / add one to package.json and update the lockfile while otherwise preserving the behavior of running npm ci?
If so, ignore this, but if not, wouldn't adding a new package have all sorts of weird side effects from miscellaneous updates to irrelevant existing dependencies?
Ahh, interesting, I wasn't aware of that issue. Personally the problem with npm for me has never been "the node_modules directory is too big" or "npm is non-deterministic" but rather "the quality of other people's code is terrible" or "this library solves 99 other cases in addition to the one I care about".
I think if you're careful about what you pull in from npm it's totally fine.
Having deterministic builds is a godsend in an ecosystem without proper semantic versioning enforcement. The fact that minor version number changes can be breaking means that every build can break (a bit tautological, but a point worth emphasizing).
I also don't like that some packages are too big, or too small, or have too many transitive dependencies. It is a fact of life in npm land, though, making deterministic builds simultaneously minor and significant.
For what it is worth, npm v7 added "workspaces" ("monorepo") support (similar to yarn's) where all the node_modules are hosted a level higher in the folder tree and shared.
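A minimal example of what that looks like, for anyone who hasn't tried it:

    // package.json at the repo root
    {
      "private": true,
      "workspaces": ["packages/*"]
    }

Running `npm install` at the root then hoists the shared dependencies into a single top-level node_modules.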
pnpm’s strategy of having a single global node_modules folder of cached deps it links to is still my favorite. Installs are way faster than both npm and Yarn, and it even works offline if you’ve ever installed the packages before. It’s supported workspaces long before npm, and works as a drop in replacement. I don’t like working without it these days.
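The day-to-day usage keeps the same drop-in feel, for what it's worth:

    pnpm add lodash            # links from the global content-addressable store instead of re-downloading
    pnpm install --offline     # succeeds if everything needed is already in the store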
I just want Vercel to support it so I can use it to speed up build times instead of relying solely on the build cache.
I moved our massive lerna monorepo to yarn v2 with PnP workspaces and cut 20 minutes off our CI time, deploy/build time (typescript) cut dramatically and reduced even our frontend bundle sizes thanks to the new and improved dedupe.
It took all of maybe 10hrs but I've already made that back easily.
Also checking out branches and syncing takes all of 3s versus the massive link process we used to have to do.
If that. Yarn 2 still has a bunch of issues, and they've already released Yarn 3. So now we have this fragmented scenario where tons of people are still on Yarn 1 because of compatibility issues, Yarn 2 can still be a pain to work with, and now there's a Yarn 3...
For patching packages, I've been using `npx patch-package` workflow, and it's honestly very nice. It creates and deletes the temporary directory for you, so all you need to do is to change the files in node_modules and run `npx patch-package`!
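For anyone who hasn't tried it, the whole loop is roughly (some-package is just a placeholder):

    # 1. edit files directly under node_modules/some-package/
    # 2. record the change
    npx patch-package some-package
    # 3. it writes patches/some-package+<version>.patch; commit that, and add
    #    "postinstall": "patch-package" to package.json so it re-applies after installs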
My company still has issues using yarn, as elastic beanstalk and their node environments only support npm... Does someone know an easy way without using own docker containers to fix that?
It is truly a shame that yarn is "the other package manager" in node as npm is an utter load of c*ap and the only reason it succeeded is because of nepotism favoring its "creator", Isaac Schlueter.
Wish you the best with this and hope the tide turns on your favor eventually :)
A few years ago, but long after people'd already started praising Yarn as a superior and completely-ready replacement for npm, it was still very easy to run into missing functionality if you went off the "happy path" of just installing stuff from NPM, and even then there was a decent chance of hitting bugs, often due to missing features or lack of extra safety checks that NPM was doing.
Nepotism wasn't the reason the agency I was at kept starting projects with yarn (JS devs, whatcha gonna do) and pretty much always switching to NPM about the time each project was seriously getting going. It's because the likelihood of Yarn being the cause of some mysterious runtime or build bug you'd spend a couple hours chasing was at least 10x higher than NPM, or because we'd find we needed a feature Yarn didn't support.
Yarn proponents have a history of pushing it despite its not being ready. Perhaps it's actually good now, though, I dunno. NPM's definitely not perfect, so there's plenty of room to improve on it.