From those commits, it seems this issue was fixed in 1h12min. That should be a new record, especially considering this is all volunteer work on a Saturday. While it's bad that things break, the speed at which this was fixed is truly amazing. A big thank you to everyone involved here.
Not sure where you're getting 1h12 from. The first issue was reported at 12:18pm (my time); the final update that fixed it was published at 3:08pm.
Not that long, but my issue with this release snafu is that:
- the build didn't pass CI in the first place
- the CI config wasn't updated to reflect the most recent LTS release of node
- the update happened directly to master (although that's up to how the maintainer wants to run their repo; it's been my experience that it's much easier to revert a squashed PR than most other options)
- it took two patch versions to revert (where it may have only taken one if the author could have pressed "undo" in the PR)
This is a good example of how terrible messy JavaScript library creation is.
There is no change to the actual functionality of the library, only to the way it is packaged, here to support something that is an "experimental" feature in Node.
It is also something that is hard to write automated tests for.
> This is a good example of how terrible messy JavaScript library creation is.
Meanwhile over in .Net-land, after 15+ years of smooth sailing (5+ if you only count from the introduction of NuGet), the transition from full framework to .Net Core has made a multi-year long migraine out of packaging and managing dependencies.
I ran into multiple scenarios where even Microsoft-authored BCL packages were broken and needed updates to resolve only packaging issues. It's a lot better now than during v1.x days, but I still have hacks in my builds to work around some still broken referencing bits.
I wonder why people won't use yarn zero installs. They are great for reproducible builds and can work offline. You can have CI and a git hook which checks your code before deployment or pushing to git.
Another way is to pin down the specific versions without ~ or ^ in the package.json so your updates don't break stuff.
That might be referring to Yarn's "offline mirror" feature. When enabled, Yarn will cache package tarballs in the designated folder so that you can commit them to the repo. When someone else clones the repo and runs `yarn`, it will look in the offline mirror folder first, and assuming it finds packages matching the lockfile, use those.
This takes up _far_ less space than trying to commit your `node_modules` folder, and also works better cross-platform.
I wrote a blog post about setting up an offline mirror cache a couple years ago:
That's quite interesting, although back in the day we did that for C dependencies that weren't packaged well, and it quickly ballooned the size of our repo since git has to treat tarballs as binaries. Even if you only update a few lines of the dependency for a patch version, you re-commit the entire 43 MB tarball (obviously that depends on the size of your tarball).
You could use Git LFS to store anything ending with a tarball extension. It's pretty well supported by most Git servers (I know GitHub and GitLab support it off the top of my head). You do need the LFS extension for Git to use it.
Instead of node_modules containing source code of the packages, yarn generates a pnp.js file which contains a map linking a package name and version to a location on the disk, and another map linking a package name and version to its set of dependencies.
All the installed packages are stored in zip form in the .yarn/cache folder to provide a reproducible build whenever you install a package from anywhere. You can commit them to version control. Unlike node_modules, they are much smaller in size due to compression. You will have offline, fully reproducible builds which you can test using CI before deployment or pushing code to the repository.
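As a rough mental model (this is a conceptual sketch, not Yarn's actual pnp.js format or API, and the package names/paths are made up), the two maps might look something like this:

    // Conceptual sketch only: illustrates the two lookups described above,
    // not Yarn's real data format.
    interface PnpSketch {
      // "name@version" -> location of the package's zip in .yarn/cache
      packageLocations: Map<string, string>;
      // "name@version" -> its own dependencies ("dep name" -> "resolved version")
      packageDependencies: Map<string, Map<string, string>>;
    }

    const pnp: PnpSketch = {
      packageLocations: new Map([
        ["example-pkg@1.2.3", ".yarn/cache/example-pkg-1.2.3.zip"],
      ]),
      packageDependencies: new Map([
        ["example-pkg@1.2.3", new Map([["some-dep", "4.5.6"]])],
      ]),
    };

Resolution then becomes a couple of map lookups instead of walking node_modules directories on disk.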
This is a great feature I did not know about, thanks
I don't understand how it applies to the OP problem. Even without "zero installs", yarn all by itself with a yarn.lock already ensures the same versions as in the yarn.lock will be installed -- which will still be a reproducible build as long as a given version hasn't changed in the npm repo.
(It looks to me like "yarn zero" is primarily intended to let you install without a reliable network and/or faster and/or reduce the size of your deployment artifacts; but, true, it also gives you defense against a package version being removed or maliciously changed in the npm repo. But this wasn't something that happened in the OP's case, was it? A particular version of a particular package being removed or changed in the repo?)
In this case, it was a new version that introduced the breakage, not a changed artifact for an existing version. AND the problem occurs on trying to create a new project template (if I understand right), so I think it's unlikely you'd already have a yarn.lock or a .yarn/cache?
Am I missing something? Don't think it's related to OP. But it's a cool feature!
FWIW, yarn.lock (and the lockfile for recent versions of NPM, IIRC) also keeps package hashes, so a build either is fully reproducible and pulls down the same artifacts as the original, or it fails (if an artifact is missing or has changed).
`yarn zero` protects you against dependencies disappearing, and lets you install without network connectivity.
No. It wasn't meant for OP (is-promise) because that would require tests for the imports.
I saw some work around changing versions in the package.json and lockfiles in the github issue. Instead of that, you could just roll back to the previous commit. Way easier. The package author also changed the earlier version after fixing it.
Google "yarn plug and play", rather than "yarn zero installs". There isn't much in the way of details outside of the main Yarn website -- now focussed on Yarn 2 -- which has the documentation (vs Yarn 1.n, which does not have plug and play and works the same as NPM, and has now moved to classic.yarnpkg.com)
(Edit: I'm not quite sure how this would have completely prevented the issue? P'n'p is very good and seems to be a real step forward for JS package management but surely the same issue could have occurred regardless?)
- we’ve stopped using ^ and ~ because of the unpredictability of third party libraries and their authors’ potential for causing our own apps to break. We also find ourselves forking and managing our own versions of smaller/less popular libraries. In some cases, we’ve chosen to reimplement a library.
Isn't this all stuff that you add after generating the project? For example yarn.lock is created on your first install. Having a pre-generated yarn.lock is a no-go because of the dubious decision to include the full path to the registry the package was sourced from.
The problems that beset the Javascript ecosystem today are the same problems that beset the Unix ecosystem, back in the 90s when there still was one of those. TC39 plays the role now that OSF did then, standardizing good ideas and seeing them rolled out. That's why Promise is core now. But that process takes a long time and solutions from the "rough consensus and running code" period stick around, which is why instanceof Promise isn't enough of a test for things whose provenance you don't control.
Of course, such a situation can't last forever. If the idea is good enough, eventually someone will come along and, as Linux did to Unix, kill the parent and hollow out its corpse for a puppet, leaving the vestiges of the former ecosystem to carve out whatever insignificant niche they can. Now the major locus of incompatibility in the "Unix" world is in the differences between various distributions, and what of that isn't solved by distro packagers will be finally put to rest when systemd-packaged ships in 2024 amid a flurry of hot takes about the dangers of monoculture.
Bringing it back at last to the subject at hand, Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today. As a former-life devotee of Stack Overflow, I of course applaud this plan, and wish them all the luck they're certainly going to need.
The impetus behind "lol javascript trash amirite" channer takes today is exactly that behind the UNIX-Haters Handbook of yore. I have a printed copy of that, and it's still a fun occasional read. But those who enjoy "javascript trash lol" may do well to remember the Handbook authors' stated goal of burying worse-is-better Unix in favor of the even then senescent right-thing also-rans they favored, and to reflect on how well that played out for them.
And your example is why we have the "lol javascript trash amirite" chorus, because as you've noted these problems were solved decades ago. Yet for some reason, the JS and npm ecosystems always seem to have some dependency dustup once or twice a year.
Yes, that's largely my point. I'm not sure why it is surprising to see an ecosystem, twenty-five or so years younger than the one I compared it to, have the same problems as that one did twenty-five years or so ago.
In one of Robert "Uncle Bob" Martin's presentations you may find the answer. The number of developers doubles every 5 years. That means that at any point in time half of the developers have less than 5 years of experience. Add to that realization the fact that inexperienced developers are learning from other inexperienced developers and you get the answer to why we repeat the same mistakes again and again.
I guess it is a matter of time before that reality changes; we will not double the number of developers indefinitely, and experience and good practices will accumulate.
Taking into account the circumstances, we are not doing so badly.
Not as easy as you might think, since “developers” isn't particular to software and software developers have lots of other near-equivalent terms, the set of which in use changes over time, and many of them aren't unique to software, either.
Pardon me if I've misunderstood you. I feel that this line of reasoning that excuses modern Javascript's mistakes on the basis of it being a young language to be spurious. We don't need to engineer new languages that recreate the mistakes of previous ones, or even worse, commit entirely new sins of their own. It's not like no-one saw the problems of the Node/JS ecosystem, or the problems of untyped languages, coming from a distance. Still, Node.js was created anyway. I would argue that it, along with many of its kindred technologies, has actually contributed a net deficit to the web ecosystem.
There are multiple reasons for this failure mode, only some of them subject to social learning.
Part of the problem is a learning process, and indeed, I think the Javascript world should have learned some lessons - a lot of the mess was predictable, and predicted. Maybe next time.
But part of the problem is that we pick winners through competition. If we had a functional magic 8-ball, we'd know which [ecosystem/language/distro/OS/anything else] to back and save all the time, money and effort wasted on marketplace sorting. But unless you prefer a command economy, this is how something wins. "We" "picked" Linux this way, and it took a while.
It's also not a surprise to see a similar process of stabilization play out at a higher layer of the stack, as it previously did at a lower one. Neither is it cause for regret; this is how lasting foundations get built, especially in so young a field of endeavor as ours. "History doesn't repeat itself, but it often rhymes."
It ain't surprising, but rather just disappointing, that an ecosystem can't or won't learn from the trials and tribulations of other ecosystems.
EDIT: also, Node's more than a decade old at this point, so it is at least a little bit surprising that the ecosystem is still experiencing these sorts of issues.
Is it really though? Node is infamous for attracting large groups of people with notoriously misguided engineering practices whose egos far surpass their experience and knowledge.
I've been stuck using it for about 4 years and it makes me literally hate computers and programming. Everything is so outrageously bad and wrapped in smarmy self congratulating bullshit. It's just so staggeringly terrible...
So these kinds of catastrophes every few months for bullshit reasons seem kind of obvious and expected, don't they?
The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever.
But there was also the possibility, for non-software businesses, to pick a platform and stick to it. You run Sun, buy Sun machines, etc. That it was "Unix" didn't matter except to the software business selling you stuff, or what kind of timelines your in-house developers gave.
There is no equivalent in the JS world. If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns. Every JavaScript community and subcommunity has the same problem, they keep punching themselves in the face, for reasons entirely unrelated to what their "competitors" are doing. Part of this is because the substrate itself is not good at all (way worse than Unix), part is community norms, and part is the piles of VC money that caused people to hop jobs and start greenfield projects every three months for 10 years rather than face any consequences of technical decisions.
Whatever eventually hollows out the mess of JS tech will be whatever figures out how to offer a stable developer experience across multiple years without ossifying. (And it can't also happen until the free money is gone, which maybe has finally come.)
"Pick React and stick to it" is the exact parallel to your "pick Sun and stick to it". Were you not there to see how often SunOS and Solaris updates broke things, too? But those updates were largely optional, and so are these. If you prefer React 15's class-based component model, you can pin the version and stick with it. You won't have access to new capabilities that rely on React 16 et cetera, but that's a tradeoff you can choose to make if it's worth your while to do so. You can go the other way if you want, too. The same holds true for other frameworks, if you use a framework at all. (You probably should, but if you can make a go of it starting from the Lions Book, then hey, have a blast.)
I agree that VC money is ultimately poison to the ecosystem and the industry, but that's a larger problem, and I could even argue that it's one which wouldn't affect JS at all if JS weren't fundamentally a good tool.
(To your edit: granted, and React, maybe and imo ideally plus Typescript, looks best situated to be on top when the whole thing shakes out, which I agree may be very soon. The framework-a-week style of a lot of JS devs does indeed seem hard to sustain outside an environment with ample free money floating around to waste, and React is both easy for an experienced dev to start with and supported by a strong ecosystem. Yes, led by Facebook, which I hate, but if we're going to end up with one de facto standard for the next ten years or so, TS/React looks less worse than all the other players at hand right now.)
> React is both easy for an experienced dev to start with and supported by a strong ecosystem.
I wouldn't say getting started with ReactJS is easy (or that it's properly supported). Each team that uses React within the same company uses a different philosophy (reflected in the design) and sometimes these flavors differ over time in the same team. We're back to singular "wizards" who dictate how software is to be built, while everyone else tinkers. It's a few steps from custom JS frameworks.
> The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever.
> But
You made your point, proved yourself wrong, and then went ahead ignoring the fact that you proved yourself wrong.
>The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever
POSIX is a set of IEEE standards that have been around in one form or another since the 80s, maybe JavaScript could follow Unix's path there.
The existence of such a standard doesn't automatically guarantee compliance. There are plenty of APIs outside the scope of POSIX, plenty of places where POSIX has very underspecified behavior, and even then, the compliance test suite doesn't test all of the rules and you still get tons of incompatibilities.
POSIX was, for the most part, not a major success. The sheer dominance of Linux monoculture makes that easy to forget, though.
Of course it doesn't guarantee compliance, but like all standards it makes interop possible in a predictable way, e.g. some tcsh scripts run fine under bash, but that's not by design. The inability or unwillingness of concerned parties to adopt the standard is a separate problem. This is why "posixly" is an adverb with meaning here.
This is slightly off-tangent, but as someone who has written production software on the front-end (small part of what I do/have done) in:
Vanilla -> jQuery -> Angular.js -> Angular 2+, React pre-Redux existence -> modern React -> Vue (and hobby apps in Svelte + bunch of random stuff: Mithril, Hyperapp, etc)
I have something to say on the topic of:
> "If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns."
I find the fact that front-end has a fragmented ecosystem due to different frameworks completely absurd. We have Webcomponents, which are framework-agnostic and will run in vanilla JS/HTML and nobody bothers to use them.
Most frameworks support compiling components to Webcomponents out-of-the-box (React excepted, big surprise).
If you are the author of a major UI component (or library of components), why would you purposefully choose to restrict your package to your framework's ecosystem? The amount of work it takes to publish a component that works in a static index.html page with your UI component loaded through a <script> tag is trivial for most frameworks.
I can't tell people how to live their lives, and not to be a choosy beggar, but if you build great tooling, don't you want as many people to be able to use it as possible?
Frameworks don't have to be a limiting factor, we have a spec for agnostic UI components that are interoperable, just nobody bothers to use them and it's infuriating.
You shouldn't have to hope that the person who built the best "Component for X" did it in your framework-of-choice (which will probably not be around in 2-3 years anyways, or will have changed so much it doesn't run anymore unless updated)
---
Footnote: The Ionic team built a framework for the singular purpose of making framework-agnostic UI elements that work with everything, and it's actually pretty cool. It's primarily used for design systems in larger organizations and cross-framework components. They list Apple, Microsoft, and Amazon as some of the people using it in production:
Web components aren't really there yet. They will be two or three years from now. Some time between now and then, I expect React will gain the ability to compile down to them, which shouldn't be too hard since web components are pretty much what happens when the React model gets pulled into core.
By "aren't really there yet", what do you mean? If you mean in a sense of public adoption and awareness, totally agree.
If you mean that they don't work properly, heartily disagree. They function just as well as custom components in any framework, without the problem of being vendor-locked.
You may not be able to dig into the internals of the component as well as you would a custom-built one in your framework-of-choice, but that's largely the same as using any pre-built UI component. You get access to whatever API the author decides to surface for interacting with it.
A properly built Webcomponent is generally indistinguishable from consuming any other pre-built UI component in any other framework (Ionic built a multi-million dollar business off of this alone, and a purpose-built framework for it).
> Deno appears to be trying to become the Linux of Javascript
Deno always sounded more like "the Plan 9 of Javascript" to me, to be honest. It seems to be better (yay for built-in TypeScript support! Though I have my reservations about the permission management, but that's another discussion) but perhaps not better enough (at least just yet) to significantly gain traction.
The permissions management is a little tricky to think about at first, but once you get the hang of it I think it's actually quite nice. Setting strict permissions on CLI tools help to ensure that the CLI isn't doing anything nefarious when you're not looking (like sending telemetry data). Since this CLI has --allow-run, I can also have it execute a bin/server script that _does_ have network and read/write permissions, but only in the current app directory.
The problem I saw was how quickly you need to open up the permissions floodgates. I saw them live-demo a simple http server, and to do something as basic as that you need to open up full file system and network access. So if you’re doing anything like setting up a server (i.e. one of the core things one does when using a server-side scripting language), you’re back to square 1.
I have doubts about how this could possibly work. The idea is you pull a .ts file directly, right? Then your local ts-in-deno compiles that to extract typedefs for intellisense/etc and the JS. What happens when it was created for a different version of typescript than what you’re running? Or if it was created targeting different flags than what you’re using? This will cause lots of problems:
I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken. In node land, you compile the TS down to a common target before distributing so you don’t have this problem.
Similarly, I’m using 3.8 and a package upgrades to 3.9 and starts using some new builtin types that aren’t present in my TS. Now my package is broken. Previously you’d export a .d.ts targeting a specific version and again not have this problem.
Or, I want to upgrade to 3.9 but it adds some validations that cause my dependencies to not typecheck, now what?
Or, I’m using strictNullChecks. Dependent package isn’t. Trying to extract types now throws.
I’ve brought these all (and many other concerns) up to the deno folks on numerous occasions and never gotten an answer more concrete than “we’ll figure out what to do here eventually”. Now 1.0 is coming, and I’m not sure they’ve solved any of these problems.
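To make the first scenario concrete, a hedged sketch (the file and names are made up): a library published as raw .ts that uses 3.7-only syntax, which an older compiler can't even parse.

    // hypothetical library file fetched as raw TypeScript, e.g. lib.ts
    export function firstName(user?: { profile?: { name?: string } }): string {
      // optional chaining (?.) and nullish coalescing (??) arrived in TypeScript 3.7;
      // a consumer compiling with 3.6 hits a syntax error here, not a type error
      return user?.profile?.name ?? "anonymous";
    }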
> I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken.
Isn't this similar to not upgrading node and using an updated version of an npm package that calls a new function added to the standard library? All npm packages have a minimum node version, and similarly all deno code has a minimum deno version. Both use lockfiles to ensure your dependencies don't update unexpectedly.
> Or, I’m using strictNullChecks. Dependent package isn’t.
This definitely sounds like a potential problem. Because Deno enables all strict checks by default, hopefully library authors will refrain from disabling them.
That might be true in general, but I seem to run into problems with the two with about equal frequency. One of the recent ones I ran into with node was stable array sort.
On the other hand, trying to set up a TypeScript monorepo with shared/dependent projects is a huge pain since everything needs to be transpiled to intermediary JS, which severely limits or breaks tooling.
Even TS project references make assumptions about the contents of package.json (such as the entry file), or how the compiler service for VsCode preloads types from @types/ better than for your own referenced projects, which sadly ties TS to that particular ecosystem.
Language version compatibility is a good point, but perhaps TSC could respect the compiler version and flags of each package's tsconfig.json, and ensure compatibility for minor versions of the language?
Since I enjoy working in TS I'm willing to wait it out as well, the pros far outweigh the cons. Now that GitHub/MS acquired NPM, I have hopes that it will pave the way to make TS a first-class citizen, though I don't know if Deno will be part of the solution or not.
How is it a lead balloon? Go got super popular in the period before /vendor and Dep (later modules). Yes, people wanted and got versions too, but the URL part stayed. ISTM they had a Pareto-optimal 20% piece of the puzzle solved and bought themselves time to solve the other 80% years later.
Go still identifies packages by URL. The recent modules feature just added the equivalent of lockfiles like npm, yarn, cargo, etc. It also added some unrelated goodies like being able to work outside of $GOPATH.
> Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today.
I really like Deno for this reason. Importing modules via URL is such a good idea, and apparently it even works in modern browsers with `<script type="module">`. We finally have a "one true way" to manage packages in JavaScript, no matter where it's being executed, without a centralized package repository to boot.
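For illustration, an import might look like the following; the URL and function are made up, not a real registry entry.

    // hypothetical example; https://example.com/... is illustrative only
    import { leftPad } from "https://example.com/utils/left_pad.ts";

    console.log(leftPad("5", 3, "0")); // "005"

    // the same URL-import syntax works in a browser inside <script type="module">,
    // though the browser would need plain .js rather than .ts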
Someone rolls out code where a serious bug fell through QA cracks, and appears to be breaking a mission-critical path. Your biggest client is on the phone screaming FIX IT NOW. Three hours is an eternity.
Let’s add: it appears to be breaking a mission-critical path that also slipped through cracks in QA. Mistakes happen; run CI/CD before getting to the mission-critical path.
I think you missed my point so let me clarify: If your job is to develop software, then your computer is your production environment. It's where you run your production - your development. This is hopefully separate from where your customers runs development.
I remember the beginning of React (before Webpack), when server-side compilation looked fine and the magic worked with just <script>react.js</script> in the browser. It looked like a new era where HTML was fixed. But no, we have 15 standards now. Everything was finished for me when I found a 3-line Webpack module with a 20-line Readme description. We have 1000 modules and 1000 weak points after that. React has 1000x overhead.
Any package and package manager has pain points:
- no standards, API connection issues (different programming styles and connection overhead)
- minor version issues (just like this 1-hour 0-day bug)
- major SDK issues (iOS deprecating OpenGL)
- source package differences (Ubuntu/CentOS/QubesOS need different magic to use the same packages)
- overhead by default everywhere, which produces multiple issues
I'm a developer, but I'm also on-call 24/7 for a Node.js application. The number of people here saying "this is why you don't use dependencies" or "this is why you vendor your deps" is frustrating to see. No one _but no one_ who has managed complex enough systems will jump on the bandwagon of enterprise-ready, monolithic and supported over something like Node.js. I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.
There are trade-offs, absolutely. Waiting on a vendor to fix a problem _for months_, while sending them hefty checks, is far inferior to waiting 3 hours on a Saturday for a fix, where the actual issue only affects new installations of a CLI tool used by developers, and can trivially be sidestepped. If anything, it's a chance to teach my developers about dep management!
I'm positive my stack includes `is-promise` about 10 times. And I have no problem with that. If you upgrade deps (or don't) in any language, and don't have robust testing in place, the sysadmin in me hates you - I've seen it in everything from Go to PHP. There is no silver bullet except pragmatism!
>I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.
Sadly, I dream of doing this very thing every day. I'm at that notch on the thermometer just before "burned out". I love creating a working app from scratch. However, I'm so sick of today's tech. The app stores are full of useless apps that look like the majority of other apps whose sole purpose is to gather the user's personal data for monetizing. The web is also broken with other variations of constant tracking. I'm of an age where I remember time before the internet, so I'm not as addicted as younger people.
There’s no silver bullet you’re absolutely right, but does that mean there isn’t room for improvement? Or that you shouldn’t try? Dropping all dependencies is extreme for sure but to argue against something as simple as vendoring is a bit odd.
You’re correct - there is room for improvement. The “npx” tool is an easy place to start! And absolutely agreed, dropping dependencies is extreme and vendoring not so much - but in my experience vendoring often means “don’t ever touch again until a bad security issue shows up”. I was being a little bit too snarky in my comment though, absolutely :)
Vendoring causes more problems than it solves. There are plenty of things that could be improved about the node ecosystem, but a lot of the criticism isn't based on logic; there seems to be a large population on HN who just inherently hate large numbers of dependencies and will grasp for any excuse to justify that hate.
Funny, I run an “enterprise” stack almost entirely made of Java. I wouldn’t trade it for NodeJS for the world.
Making upstream changes indeed would be very, very hard. But I never have to make upstream changes because they’ve spent quite a large amount of effort on stability.
I'm also making enterprise-grade software with quite a few external dependencies. I had to email the developers of the biggest dependency multiple times because of bugs but they were all fixed within a few weeks in a new patch release. They also went out of their way to provide me with workarounds for my problems. In the NPM world you are on your own.
Sure, but JavaScript and J2EE aren't the only options. You can use a language with more built-in functionality, reduce the use of unnecessary external libraries, and/or limit those libraries to ones from trusted sources.
Pragmatism - do programming to solve real-life problems rather than create broken ecosystems which require constant changes (and learning just to stay on top of them) to fix a bad design
Here's my off-the-cuff take that will not be popular.
A function like this should be a package. Or, really, part of standard js, maybe.
A) The problem it solves is real. It's dumb, but JS has tons of dumb stuff, so that changes nothing. Sometimes you want to know "is this thing a promise", and that's not trivial (for reasons).
B) The problem it solves is not straightforward. If you Google around you'll get people saying "Anything with a .then is a promise" or other different ways of testing it. The code being convoluted shows that.
Should this problem be solved elsewhere? Sure, again, JavaScript is bad and no one's on the other side of that argument, but it's what we have. Is "just copy-paste a wrong answer from SO and end up with 50 different functions in your codebase to check something", like other languages that make package management hard, so much better? I don't think so.
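For context, the check in question boils down to roughly the following one-liner (paraphrased from memory, so treat it as a sketch rather than the exact published source):

    // roughly what is-promise does: duck-type on a callable `then`
    function isPromise(obj: unknown): boolean {
      return (
        !!obj &&
        (typeof obj === "object" || typeof obj === "function") &&
        typeof (obj as { then?: unknown }).then === "function"
      );
    }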
No. There is no reason why it should be a package by itself. It should be part of a bigger util package, which is well maintained, tested, and with many maintainers actively looking at it, with good processes, such as systematic code reviews, etc.
At work, our big webapp depended at some point indirectly on "isobject", "isobj" and "is-object", which were all one-liners (some of them even had dependencies themselves!!). Please let's all just depend on lodash and it will actually eventually reduce space and bandwidth usage.
All of those can pretty much be handled natively, and obviously. They're all primitive (a quick sketch follows the list):
isFalse would be `!=`
isObject would use `typeof`
isFunction would use `typeof`
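Something like this, as a minimal sketch of those checks (interpreting "isFalse" as a falsiness check):

    // minimal sketches of the checks listed above
    const isFalsy = (v: unknown): boolean => !v;
    const isObject = (v: unknown): boolean => v !== null && typeof v === "object";
    const isFunction = (v: unknown): boolean => typeof v === "function";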
Where a library becomes helpful is when you have:
* A real problem (none of those are real problems, and the npm packages for them are essentially unused jokes)
* A solution that is not intuitive, or has a sharp edge, or requires non-obvious knowledge, or does not have a preexisting std approach
Checking for a promise, given the constraints of having multiple types of promises out in the world, falls into both of those. Checking if something is falsey, when Javascript provides !, does not fall into either.
Have you seen his twitter? It's incredibly cringey. I don't understand how someone could be so arrogant to claim millions of companies use his software, when his software is isFalse. Not to mention his hundreds of packages that literally just output an emoji.
isFalsy is just “!”; I don't think we need a new library for a more verbose way to express a one-character unary operator, no, nor does it meet the standard of “The problem it solves is not straightforward” proposed upthread.
I'm not surprised it exists (and literally is just a more verbose, indirect way to invoke “!” that nevertheless is a 17 sloc module with a bunch of ancillary files that has one direct and, by way of that one, 17 second-order and, while I didn't check further, probably an even more ridiculous number of more distant, transitive dependencies.)
I'm just saying it's neither necessary nor consistent with the standard for when a library is a good idea proposed upthread, so suggesting it as part of an attempt at reductio ad absurdum on that standard is misplaced.
I think this would be the solution. I feel like a lot of the NPM transitive dependency explosion just comes from the fact that JavaScript is a language with a ton of warts and a lack of solid built-ins compared to e.g. Python. Python also has packages and dependencies, but the full list of dependencies used by a REST service I run in production (including a web framework and ORM) is a million times smaller than any package-lock.json I've seen.
This is correct. I post the same thing every time one of these JS dependency hell issues pops up, but it's the case because it's true: The problem is the lack of a standard library. It's not that people don't know how to write a left-pad function, it's that it's dumb to rewrite it in every project and then remember what order you put the arguments in, etc. So people standardize, but they're standardizing on millions of different little packages.
I think the effort that goes into all the JS syntax and module changes would be better put into developing a solid standard library first.
It works for standard promises; sure, there are non-standard promises, ancient stuff, that to me shouldn't be used (and a library that uses them should be avoided). So why do you need that code in the first place?
Also, that isPromise function will not work with TypeScript. Imagine you have a function that takes something that can be a promise or not (and this is also bad design in the first place), but then you want to check if the argument is a Promise; sure, with `instanceof` the compiler knows what you are doing, otherwise not.
Also, look at the repo, a ton of files for a 1 line function? Really? You take less time to write that function yourself than to include that library. But you shouldn't have to write that function in the first place.
Your implementation is broken even if everything uses native Promises. I don't know how many times this exact thread needs to happen on HN (as it has many times before) until people realize their "no duh" implementations of things are actually worse than the thing they're criticizing.
Make an iframe.
In the iframe:
> window.p = new Promise(() => {});
From the parent window:
> window.frames[0].p instanceof Promise
false
Congrats! Your isPromise function was given a Promise and returned the incorrect result. The library returns the correct result. Try again!
In case someone else is also confused by this, it seems that instanceof checks whether the object's prototype matches, and these prototypes are not shared across different contexts, which iframes are [0]. (Though I would still like to know why it works like this.)
No, it was not given a Promise. It was given a foreign object from another window. If you want to inspect another window you should not be reusing code that is designed for single threaded operations. Instead, have a layer that translates, serializes, or explicitly defines an interface that the objects we are dealing with are foreign and need to be transformed. Then the abstraction implementation details of dealing with multiple windows become a concern of a single layer and not your entire codebase. Implicitly and magically treating a foreign window as this window, will fail in many subtle and unknown ways. The "brokenness" you mention is not in that implementation, it is correctly breaking, telling you that what you are doing is wrong, then you try to bypass the error instead of fixing your approach.
For foreign-origin iframes, that's exactly what people do using `postMessage`. But for same-origin iframes there's no need since you can access the iframe's context directly. So people can (and do) write code exactly like this that accesses data directly.
And it was given a Promise. You just shouldn't use instanceof in multi-window contexts in JavaScript. This is why built-ins like `Array.isArray` exist and should be used instead of `arr instanceof Array`. Maybe you'd prefer to write to TC39 and tell them that `Array.isArray` is wrong and should return false for arrays from other contexts?
There's no use jumping through hoops to avoid admitting that OP made an error. They were wrong and didn't think of this.
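A quick sketch of the same cross-realm gotcha with a built-in type, assuming a same-origin iframe is already on the page:

    // run in the parent window; assumes window.frames[0] is a same-origin iframe
    const ForeignArray = (window.frames[0] as any).Array; // the iframe's own Array constructor
    const arr = new ForeignArray(1, 2, 3);

    console.log(arr instanceof Array); // false: different realm, different Array.prototype
    console.log(Array.isArray(arr));   // true: isArray works across realms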
The problem was to get the promise out of the iframe when you shouldn't do this directly in the first place.
This literally is an XY problem: "I need to do A but it's giving me bad results, what do I need to add?" - "Don't use A, it's bad practice. Use B instead and keep using built-in tools instead of hacking something together" In this case use instanceof instead of is-promise because it's a hack around the actual problem of getting objects out of a different context that was explicitly designed to behave this way.
I'm afraid that you don't know what an XY problem is.
JavaScript developers always seem to think they are the smart ones after their 6 weeks of some random bootcamp and then you end up with some crap like NPM where a single line in a package out of hundreds maintained by amateurs can break everybody's development environment.
Though, let's also appreciate just how niche that case is. I'd be surprised if more than 0.5% of the JS devs reading this will ever encounter that scenario where they are reaching across VMs like that in their life.
`obj instanceof Promise` and `typeof obj.then === 'function'` (is-promise) are much different checks. Frankly, I don't think either belongs in a library. You should just write that code yourself and ponder the trade-offs. Do you really just want to check if an object has a then() method or do you want to check its prototype chain?
> If you're building a library, or maintaining one that's been built over many years, you can't easily make calls like that.
Well, you can, and in the JS ecosystem you'll often find cases where there are two libraries (or two broad classes of libraries) for a certain function that make different choices, one of which makes the simple, modern choice that doesn't support legacy, and one that does the complex, messy thing necessary to deal with legacy code, and which you use depends on your project and its other constraints.
OK, then the legacy library can't easily make that choice. I'm not saying every single javascript developer should be accepting async or sync callbacks, just that some libraries are choosing to do that for legitimate reasons.
Are you saying that this isPromise package will not play well with TypeScript? One of those files (index.d.ts) solves the TypeScript problem using type predicates. TypeScript WILL know that the object is a promise if it returns true.
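For anyone unfamiliar, a type predicate is just a boolean-returning function whose declared return type narrows the argument. A sketch (not necessarily the exact signature shipped in is-promise's index.d.ts):

    // sketch of a user-defined type guard with a type predicate return type
    function isPromiseLike<T = unknown>(value: unknown): value is PromiseLike<T> {
      return (
        !!value &&
        (typeof value === "object" || typeof value === "function") &&
        typeof (value as { then?: unknown }).then === "function"
      );
    }

    async function example(value: unknown) {
      if (isPromiseLike<string>(value)) {
        // TypeScript narrows `value` to PromiseLike<string> in this branch
        console.log((await value).toUpperCase());
      }
    }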
This function should absolutely NOT be a package. The problem is that JS has a very minimal standard library and despite tons of money going into the system, nobody's had the good sense to take a leadership role and implement that standard. In other languages you don't need to include external packages to determine the types of objects you're dealing with, or many other things.
And there's an interesting discussion to be had about whether it shouldn't instead be one of those snippets that everyone copies from Stack Overflow, and how much trouble in other ways that alternative has caused.
One-liners without dependencies like this should live as a function in a utility file. If justification is needed, there should be a comment with a link to this package's repo.
Also you can just read the code the same way you read any other code. And since it's in your codebase and git diffs, you will read it.
Because the implementation detail of is-promise actually is important. It just checks if an object has a .then() method. So if you use it, it's just as important that you know the limitation.
Also: The utility file will never be updated and fix existing issues within the utility itself (unless you look up the package and diff it yourself). It's a trade-off.
As the commenter who suggested keeping it in a utilities file, I'd say that the trade-off is heavily weighted to not importing it as a package.
When you cribbed the code you should have completely understood what exactly the package was doing, and why, and known what issues it would have had. Since it's a one-liner, it is transparent. Since it is without dependencies, it is unlikely to fail on old code. So it's unlikely to have existing issues and unlikely to develop new issues.
Of course, if you end up using new features of the language in your code, it may fail on that, but the risk of old stuff failing should have already been factored in when you decided to upgrade. In fact, the one-liner solves this better since you decide the pace of adaptation of your one-liner to the new features, not the package maintainer.
That's the trade-off I would most likely take in the "isPromise" case. But the opening question was a generic one ("What's the difference between a utility file and a package"), so the answer should reflect both sides.
I'd say that it should rather be a part of the type system.
Some kind of `obj isa Promise` should be the way to do this, not random property checks. But that's JS...
The thing is that there is the Promise "class", which is provided by the environment, but there is an interface called PromiseLike, which is defined as having a method called then that takes one or two functions. Now, JS doesn't have nominal typing for interfaces, so you have to do "random property checks".
Typescript partially solves that by declaring types, but if you have an `any` variable, you still need to do some probing to be able to safely convert it to a PromiseLike, because TypeScript goes to great lengths to not actually produce physical code in its output, attempting to be just a type checker.
Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided, but alas, this is not the case.
shouldn't it then be called is-promise-like? Also, if you're being loose about it anyways, can't you simply just go for `if (obj && typeof obj.then == 'function')` and call it a day? I'd say that's short enough to include your own version and not rely on a package for.
I think that module over complicates it as it is, and most people don't need that level of complication in their code.
> Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided
It's not perfect and a bit of a bolt-on, but io-ts works reasonably well in this area:
In theory `x instanceof Promise` would work, but the reason for this package is that there are many non-standard Promise implementations in the JS world.
Promises were not always part of the standard and for many years were implemented in user space, by many different implementations. Using duck typing like this was the only way to allow packages to interact with each other, as requiring an entire stack to say only use Bluebird promises is not realistic at all.
I think you're on the right track. We all (I hope) agree that stuff like this should be standardized. But that's not the same as "should be a package".
At the very least, W3, or Mozilla Foundation, or something with some kind of quasi-authority should release a "JS STD" package that contains a whole bunch of helper functions like this. Or maybe a "JS Extras" package, and as function usage is tracked across the eco-system, the most popular/important stuff is considered for addition into the JS standard itself.
Having hundreds of packages that each contain one line functions, simply means that there are hundreds of vectors by which large projects can break. And those can in turn break other projects, etc.
The reason, cynically, that these all exist as separate packages is because the person who started this fiasco wanted to put as high a download count as possible on his resume for packages he maintains. Splitting everything up into multiple packages means extra cred for doing OSS work. Completely stupid, and I'm annoyed nobody has stepped up with a replacement for all this yet.
A function like this should not be something that anyone even thinks of writing or using.
In properly designed languages, values have either a known concrete type, or the interfaces that they have to support are listed, and the compiler checks them.
Even in JavaScript/TypeScript, if you are using this, you or a library you are using are doing it wrong, since you should know whether a value is a promise or not when writing code.
This function is most likely an artifact of before promises got standardized. One way promises took off and became so ubiquitous is different implementations could interop seamlessly. And the reason for that is a promise was defined as 'an object or function having a then method which returns a promise when called'.
Doesn't excuse the JS ecosystem and JS as a whole, which truly is a mess. But there's a history behind these things.
ISTM that a framework may need to test for promiseness if it calls promises and functions differently, but it can and should be done as a utility in the framework, not as a separate package.
I agree with that. I have no idea why it’s in a separate package. But I can say that about many packages :).
It’s possible to just treat everything as a promise by wrapping results in Promise.resolve(), but that can have performance implications that some frameworks might want to avoid by only going down the promise route when they have to.
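A hedged sketch of the two approaches; `handle` is a made-up downstream step, not anything from a real framework:

    // hypothetical example
    const handle = (value: unknown) => console.log("handled:", value);

    // 1) always wrap: simple, but a synchronous result is now deferred to the microtask queue
    function alwaysAsync(result: unknown) {
      Promise.resolve(result).then(handle);
    }

    // 2) only go down the promise route when necessary: synchronous results stay synchronous
    function maybeAsync(result: unknown) {
      if (result && typeof (result as { then?: unknown }).then === "function") {
        (result as PromiseLike<unknown>).then(handle);
      } else {
        handle(result);
      }
    }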
This will work for a standard Promise, which is great, but not for weirdo made up promises. It also was released, I think, in 2018.
It's one thing if you own the entire codebase, but if you're building a popular, multiple-years-old library/framework, you can't make the same assumptions.
Shouldn't a js framework exist that includes these static basic if-checks in the core and offers them as a built-in method? Why load this as an external package, why not copy the code and maintain it locally?
And it doesn't even check if it's a Promise. It's violating its own naming contract. At least it should be called isPromiseLike? To check if something is actually a Promise all you need to do is a `foo instanceof Promise`.
Then instanceof will break for all native objects. Who writes code that checks instances across window boundaries? This is flawed beyond the idea of how to properly check an instance, it's bad architecture - the result? This post, and probably more subtle bugs surfacing along the way.
How does that make sense in any universe? Just because I have a function named "then" does not mean that my object is a promise. Maybe "then" is the name of a domain thing in my project, for instance a small DSL or something like that. arghhhhhh!
Consider it a special function like "__init__" in Python. I think this is one of the problems with duck typing, existence of a public function introduces possible name collision onto the whole codebase.
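A hedged sketch of the collision; the domain object below is made up:

    // a made-up object whose `then` has nothing to do with promises
    const schedule = {
      steps: [] as unknown[],
      then(step: unknown) { // tiny DSL: schedule.then("wash").then("dry")
        this.steps.push(step);
        return this;
      },
    };

    // duck-typing on `.then` misfires here:
    console.log(typeof schedule.then === "function"); // true, so an is-promise style check reports "promise"
    // and `await schedule` would call schedule.then(resolve, reject) and never settle,
    // because this DSL never invokes the resolve callback it was handed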
It didn't break because of the source, it broke because of all the packaging/module bullshit around it. It seems the Javascript ec(h)osystem has firmly come down on the philosophy of "make modules very easy to use, but very difficult to make (correctly)".
The predictable explosion in dependency trees has caused the predictable problems like this one. I feel I much prefer the C/C++ way of "modules are easy to make, but difficult to use".
It's not useful, interesting, or accurate to stratify people this way. You have no idea what someone's intelligence level or background is based on their usage of JS or C.
I loathe JS, but one of the best devs I know likes it. People's mileage varies.
For me personally, there's much more money in easy React products than making games. I'm not a great dev, but I'd be doing this same work even if I were.
I don't know much about VHDL, but there seems to be a non-scientifically-measured negative correlation between the quality of candidates I've interviewed claiming VHDL experience vs. those without.
There are custom Promise implementations (for reasons), such as bluebird.js. If you're supporting legacy browsers, there will be no standard Promise object. So the simplest way to check for the Promise contract is the code posted. But yes, in an ideal world, one would be able to just do `promise instanceof Promise`.
These cases should really be handled at the compilation/transpilation level, since there is one, and users should just write latest generation JavaScript without these concerns.
I mean, if you have to assume deliberately adversarial action on the part of your own codebase, you may have worse problems than having to duck-type promises.
A class with a `then` method isn't a rare thing that could only come up adversarially. Below I've linked two examples from the Rust std library (I just chose that language because its documentation is really easy to search for objects which have a method named then). I think we can be sure that both booleans, and the "less than, equal to, or greater than" enum are not in fact promises.
Cypress.io has chainable .then functions but they are not await-able and the documentation clearly states they are not promises and cannot be treated as such. It’s a bad idea, but it is out there.
I haven't used Cypress, but looking at its docs, I don't know that I'd agree its use of "then" is all that bad. I agree they'd have done better to find a different name, but this at least seems like the least possible violation of least surprise if they are going to reuse the name.
At the same time, it is intensely wild to me that their "then" is an alternative to their "should", which apparently just re-executes the callback it's given until that callback stops throwing. If your tests require to be re-run an arbitrary and varying number of times in order to pass, you have problems that need to be dealt with in some better way than having your test harness paper over them automatically for you.
The language has nothing to do with it. The point is just that the name "then" is a perfectly common method name.
If a small standard library is using it for things that aren't promises, you can bet your ass that there are javascript libraries using it for things that aren't promises.
Like I said, I just chose to look at Rust first because its documentation has a good search bar.
The language has everything to do with it, because the language is the locus of practice. Rust practice is whatever it is, and is apparently pretty free with the use of "then" as a method name, which is fine. Javascript practice isn't the same as Rust practice, and Javascript practice includes a pretty strong norm around methods named "then".
That's why the next time I run into such a method, that doesn't belong to a promise and behave the way a promise's "then" method does, will be the first time I can remember, despite having worked primarily or exclusively in Javascript since well before promises even existed.
I'm sure there is an example somewhere on NPM of a wildcat "then", and that if you waste enough of your time you can find it. So what, though? People violate Rust idioms too from time to time, I'm sure. I doubt you'd argue that that calls Rust idioms themselves into question. Why does it do so with Javascript?
It doesn't call into question the idioms of either language. It does call into question the idea of programmatically deciding whether or not something is a promise based on the assumption that the idiom was followed.
People bounce around between languages, especially to javascript. An expert javascript dev might not call things "then" but the many dabblers might. Going back to the original point this is a footgun, not only an avenue for malicious code to cause trouble.
My primary point is just that you are mistaken when claiming that this bug could only be surfaced by malicious code.
My secondary (somewhat implicit) point is that having an "is-promise" function is a mistake when there is no way to tell if something actually is or is not a promise. This library/function name is lying to the programmers using it about what it is actually capable of, and that's likely to create bugs.
I mind duck typing! That's why I'm so fond of Typescript, where everything that shows up where a promise should be is reliably either instanceof Promise, or instanceof something that implements Promise, or a compile-time error.
Absent that evolved level of tooling, and especially in an environment still dealing with the legacy of slow standardization and competing implementations that I mentioned in another comment, you're stuck with best effort no matter what. In the case of JS and promises, because of the norm I described earlier in this thread, best effort is easily good enough to be going on with. It's not ideal, but what in engineering practice ever is?
So, I mind poorly implemented duck typing, I also mildly mind dynamic typing, but in principle I think static duck typing could be not bad.
With javascript promises in particular, the duck typing suffers from this unfortunate fact that you can't easily check if something can be awaited-upon or not. I don't think I really care if something is a promise, so long as I can do everything I want to to it. So I view the issues here as this function over-claiming what it can do, the limitation on the typesystem preventing us from checking the await-ability of an object, and the lack of static type checking. None of those are necessitated by duck typing.
I disagree that you're stuck with this best-effort function. It's perfectly possible to architect the system so you never need to query whether or not an object is a promise. Given the lack of ability to accurately answer that question, it seems like the correct thing to do. At the very least I'd prefer if this function was called "looks-vaguely-like-a-promise" instead of "is-promise".
Now we're kind of just litigating how "is-promise" is used in CRA, or more accurately in whichever of CRA's nth-level dependencies uses it, because CRA's codebase itself never mentions it.
I don't care enough to go dig that out on a Saturday afternoon, but I suspect that if I did, we'd end up agreeing that whoever is using it could, by dint of sufficient effort, have found a better way.
On the other hand, this appears to be the first time it's been a significant problem, and that only for the space of a few hours, none of which were business hours. That's a chance I'd be willing to take - did take, I suppose, in the sense that my team's primary product is built on CRA - because I'm an engineer, not a scientist, and my remit is thus to produce not something that's theoretically correct in all circumstances, but instead something that's exactly as solid as it has to be to get the job done, and no more. Not that this isn't, in the Javascript world as in any other, sometimes much akin to JWZ's "trying to make a bookshelf out of mashed potatoes". But hey, you know what? If the client only asks for a bookshelf that lasts for a minute, and the mashed potatoes are good enough for that, then I'll break open a box of Idaho™ Brand I Can't Believe It's Not Real Promises and get to work.
I grant this is not a situation that everyone finds satisfactory, nor should they; the untrammeled desire for perfection, given sufficient capacity on the part of its possessor and sufficient scope for them to execute on their visions, is exactly what produces tools like Typescript, that make it easier for workaday engineers like yours truly to more closely approach perfection, within budget, than we otherwise could. There's value in that. But there's value in "good enough", too.
This is a promise as far as the language is concerned (and the `is-promise` package uses the same definition as the language) - it's sufficient for a value to be an object and to have a `then` property that is callable. For instance, in the following example, the `then` method is being called.
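A minimal sketch of such a "thenable" (the names and string value are just illustrative): the object is never constructed with the Promise constructor, yet awaiting it makes the language call its `then` method.

    const fake = {
      then(resolve, reject) {
        // Called by the language when the value is awaited
        // (or passed through Promise.resolve()).
        resolve('not a real Promise, but awaited like one');
      }
    };

    (async () => {
      const value = await fake;   // fake.then(...) is invoked here
      console.log(value);         // "not a real Promise, but awaited like one"
    })();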
I am one of the maintainers of a popular Node-based CLI (the firebase CLI). This type of thing has happened to us before.
I think the real evil here is that by default npm does not encourage pinned dependency versions.
If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).
In other stacks having your dependency versions float around is considered bad practice. If I want to go from depending on 1.2.1 to 1.2.2 there should be a commit in my history showing when I did it and that my CI still passed.
I think we miss the forest for the trees when we get mad about Node devs taking small dependencies. If they had pinned their version it would have been fine.
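For what it's worth, npm can write exact pins out of the box; the caret range is just the default. A rough illustration (version numbers are illustrative):

    npm install is-promise               # writes "is-promise": "^2.2.2" (floats within 2.x)
    npm install --save-exact is-promise  # writes "is-promise": "2.2.2" (never floats)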
That’s still the fault of the package developer. “^1.2.1” means “any version with a public API compatible with 1.2.1”, or in other words “only minor and patch updates, never a new major version”.
The whole point of semantic versioning is to guarantee breaking changes are expressed through major versions. If you break your package’s compatibility and bump the version to 1.2.1 instead of 2.0.0 then people absolutely should be upset.
Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds. The package lock files are themselves dependent on which package restore tool you are using (yarn vs npm vs ...). It’s also much too ambitious to believe that all packages in an ecosystem will properly implement semver. There may even be times where a change doesn’t appear to be breaking to the maintainer but is in actuality. For example, suppose a UI library has a css class called card-invalid-data and wants to rename it to card-data-invalid. This is an internal change since it is their own css, but it could break a library that overrode this style or depended on this class. I would consider this a minor version, but it could still cause a regression for someone.
> Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds.
This is the germane point in this incident.
The parent comment mentions that SemVer "guarantee[s] breaking changes are expressed through major versions". This is a common misperception about SemVer. That "guarantee" is purely hypothetical and doesn't apply to the real world where humans make mistakes.
The OP `is-promise` issue is an example of the real world intruding on this guarantee. The maintainers clearly didn't intend to break things, but they did, because everybody makes mistakes.
Which points to the actual value proposition of SemVer: by obeying these rules, consumers of your package will know your _intention_ with a particular changeset. If the actual behavior of that changeset deviates from the SemVer guidelines (e.g. breaking behavior in a patch bump), then it's a bug and should be fixed accordingly.
Back to the parent's point about locking dependency versions: I would add that you should also store a copy of your dependencies in a safe location that you control (aka vendoring) if anything serious depends upon your application being continually up and running.
I think you might be misunderstanding the above comment. The default behavior of `npm i <package>` is to add `"<package>": "^1.2.1"`, _not_ `"<package>": "1.2.1"`. The point the commenter was trying to make is that the tool itself has a bad default which makes it easy to make mistakes. I would go so far as to argue that `npm i` does not have the behavior a user would expect from a package manager in that regard.
And likewise, I think the point of that above comment is that such a change in default behavior wouldn't be necessary if package authors actually obeyed semantic versioning.
That is: "^1.2.1" shouldn't be a bad default relative to "1.2.1"; you generally want to be able to pull in non-breaking security updates automatically, for what I hope are obvious reasons, and if that goes sideways then the blame should be entirely on the package maintainer for violating version semantics, not on the package/dependency manager for obeying version semantics.
I don't have much of an opinion on this for Node.js, but the Ruby and Elixir ecosystems (among those of many, many other languages which I've used in recent years) have similar conventions, and I don't seem to recall nearly as many cases of widely-used packages blatantly ignoring semantic versioning. Then again, most typically require the programmer to be explicit about whether or not to allow sub-minor automatic version updates for a given dependency, last I checked (e.g. you edit a configuration file and use the build tool to pull the dependencies specified in that file, as opposed to the build tool itself updating that file like npm apparently does).
> If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).
Yes, this is by design. If this weren't the case, the ecosystem would be an absolute minefield of non-updated transitive dependencies with unpatched security issues.
I feel the real issue here is downstream package consumers not practicing proper dependency pinning. You can blame the Node ecosystem, the maintainer of the package, etc. but there are well-known solutions to prevent this kind of situation.
This wasn't a big problem due to a package being suddenly upgraded in existing code. It's because a scaffolding tool (Create React App) used to set up new projects would set those projects up with the latest (presumably patch, maybe minor) version of the dependencies. In other words, because those projects did not exist yet, there was nothing to pin.
Unless you mean Create React App should pin all of their (transitive) dependencies and release new versions multiple times a day with one of those dependencies updated.
So you would exchange security for stability: if you use package pinning, you will end up with fossilized packages in your product, which will have all manner of security issues that have already been fixed.
If a package doesn't provide a stable branch that will receive security updates then it's not mature enough to be used anyway. That's the sensible middle ground between bleeding edge and security, unfortunately most packages/projects aren't mature enough to provide this.
There's a reason companies stick with old COBOL solutions, modern alternatives simply aren't stable enough.
I get notifications to update my Rails apps from GitHub as a matter of course when there's a CVE in my dependencies. Does this kind of thing not exist/is impractical for JS?
I think these one-line packages aren't the right way to go. Either JS developers should skip the package system in that case and just copy and paste those functions into their own projects, or there should be more commonly used packages that bundle these one-liners. I mean, is_promise() and left_pad() are not worth their own package. Dependency trees of 10000 packages for trivial programs are just insane.
Probably not. There is too much code in the wild, and NPM owns the entire JS ecosystem, and there has been too much investment in that ecosystem and its culture at this point for a change in course to be feasible.
The JS universe is stuck with this for the foreseeable future.
It's just a cultural problem. There's no reason why a library should abstract away `typeof obj.then === 'function'` when you want to check whether something is a promise. Just write the one-liner, the same way you don't pull in an `is-greater-than-zero` lib to check x > 0.
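To make that concrete, this is roughly the entire check such a package performs (a sketch, not the package's exact source):

    // Duck-type anything with a callable `then`, which is all the language
    // itself requires of a "thenable".
    function isPromiseLike(obj) {
      return !!obj &&
        (typeof obj === 'object' || typeof obj === 'function') &&
        typeof obj.then === 'function';
    }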
The problem is when you try to level criticism at this culture and a cloud chorus of people will show up to assert that somehow tiny deps are good despite these glaring issues (a big one just being security vulns). And funnily enough, the usual suspects are precisely people publishing these one-liner libs. Then people regurgitate these thoughts and the cargo cult continues.
So there's no "fix" for NPM (not even sure what that would mean). I mean, anyone can publish anything. People just have to decide to stop using one-liner libs just because they exist.
Does it need much to change? I didn't mean to fix NPM. The problem is the nonexistent standard library. Just create one that everybody will use, and everybody could cut their dependencies by thousands.
Several of these already exist, like lodash and underscore (which is a subset of lodash). After the rapid improvements on both the browser and node sides over the last couple of years (which filled in many of the blanks in this hypothetical "standard library"), they are less necessary than they may have been before. They can also become something of a crutch. Fixing a bug a couple of days ago, I realized that an argument of Object.assign() needed to be deep-copied. Rather than adding a dependency for lodash or underscore or even some more limited-purpose deepcopy package, I just figured out which member of the object needed to be copied and did so explicitly. Done.
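A sketch of that kind of targeted copy (the member names here are made up for illustration):

    // Copy only the nested member that must not be shared, instead of
    // deep-cloning the whole object with a library.
    const next = Object.assign({}, source, {
      settings: { ...source.settings }   // the one member that needed its own copy
    });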
Another good way to not have to depend on big/tiny/weird modules published by others is to use coffeescript. So much finicky logic and array-handling just goes away.
Not everyone would use it, that's my point. The inertia behind the existing system is too great, especially in enterprise. All that would happen is that library would become just another Node package, and then you've got the "n+1 standards" problem.
The "nonexistent standard library" wasn't a problem in the days when javascript development meant getting JQuery and some plugins, or some similar library. It only became a problem after the ecosystem got taken over by a set of programming paradigms that make no sense for the language.
Yes, in my mind you'd have to change everything from the ground up, starting with no longer using javascript outside of the browser.
The point is that different languages are best suited to different tasks. Javascript is a simple, very loosely typed scripting language with prototypal inheritance that was developed to be run in the browser. It's a DSL, not a general purpose programming language. Using it elsewhere for applications where another language with stronger and more expressive types would be more appropriate requires hacks like compiling it from another (safer, more strongly typed) language like Typescript, which still results in code that can be fragile because it only simulates (to the degree that a JS interpreter allows) features that the language doesn't actually support.
See the attempt to "detect if something is a Promise" as an example - the function definition for the package makes it appear as if you're actually checking a type, but that's not what the package does.
Most of the unnecessary complexity in modern JS, as I see it, comes from the desire to have it act and behave like a language that it simply isn't.
> It's a DSL, not a general purpose programming language
Sorry, but I fear that ship has sailed ;-)
And I've heard JS was developed by someone who wanted to give us Scheme (you can't go more general purpose than that) but had to resort to a more "friendly" Java-like syntax. IMHO javascript would be a great general-purpose language if the ecosystem weren't such a mess.
I see a lot of criticism to one-line packages, but IMO in the end what matters is the abstraction.
Thinking of the package as a black box, if the implementation for left-pad or is-promise was 200 lines would it suddenly be ok for so many other packages to depend on it? Why? The size of the package doesn't make it less bug-prone.
I see plenty of people who are over-eager to always be up-to-date when there really isn't any point to it if your system works well, and so they don't pin their versions. This will break big applications when one-line packages break, but also when 5000-line packages break. Dependencies are part of your source; don't change them for the sake of changing them, and don't change them without reviewing them.
> The size of the package doesn't make it less bug-prone.
Of course it does. It's more bug-prone just by being a package. More code is more bugs and more build-system annoyance is more terror (=> more bugs). If I only need one line of functionality I will just copy and paste that line into my project instead of dealing with npm or github.
> Dependencies are part of your source
I agree. If you see news about broken packages like this and you don't just shrug your shoulders, your build-system might be shit.
It would be more ok if left-pad was part of a package called, say, text-utils which also included right-pad, etc. Same with is-promise, it sounds like it should be a function in a package called type-checker.
This sounds very clever, but the nature of software development is quite different from constructing buildings. The rate of innovation is orders of magnitude higher. And as opposed to buildings, software can tolerate a certain amount of failure.
Everyone crying about this on the Internet would do better to just take it as an easy lesson: pin your dependency versions for projects running in production.
This was an honest oversight, and even somewhat inevitable given how many import/export formats a package is expected to support (CJS, MJS, AMD, UMD, etc.). It will happen again.
And when it happens the next time, if it ruins your life again, take issue with yourself for not pinning your dependency versions, rather than with the package maintainers trying to make it all happen.
The "magical security updates" theory has never worked. Breaking insufficiently-pinned dependencies are vastly more common than unnoticed fixes on patch releases. On balance, semver has been good for javascript, but to the extent it contributed to the popularization of this dumb theory it has been bad. Production apps (and by a transitive relation, one supposes, library modules) should be zealously pinned to the fewest possible dependencies, and those dependencies should be regularly monitored for updates. When those updates occur, tests can be run before updating the pins.
Yes -- pinning dependency versions does not have to be at odds with security.
In fact, how secure is it, really, to keep dependencies unpinned and welcome literally /any/ random upstream code into your project, unchecked? This is yet more irresponsible than letting dependencies age.
But even then, it's not as if you have to choose -- you can pin, then vet upstream updates when they come, and pin again.
Well, I guess you can choose whichever poison you like.
Pinning isn't meant to be a forever type of commitment. You're just saying, "all works as expected with this particular permutation of library code underneath." And the moment your dependencies release their hot new versions, you can retest and repin. Otherwise you're flying blind and this type of issue will arise without fail.
The unspoken assumption is that you don't just pin and move on with your life. You take as much ownership over your package.json as you do with your own code, and know that you must actively review and upgrade as necessary (as opposed to just running "npm install" and trusting in the wisdom of the cloud)
I’m working on a thing I’m calling DriftWatch that attempts to track, objectively, how far out of date you are on dependencies, which I call dependency drift. I’ve posted about it here before [1]. I’m using it in my consulting practice to show clients the importance of keeping up to date and it’s working well.
I agree with the parent that it’s important to lock to avoid surprises (in Ruby, we commit the Gemfile.lock for this reason), but it’s equally as important to stay up to date.
There are commercial tools like Black Duck and Sonatype Nexus which are used to scan the dependencies of not just node code, and highlight out-of-date packages, known vulnerabilities, and license problems.
Tools like Safety can help in the python world, https://pypi.org/project/safety/, and cargo-audit https://github.com/rustsec/cargo-audit in the rust world. Stick them in your build chain and get alerted to dependencies with known exploits, so you can revisit and bump your dependency versions, or decide that that project is not worth using if they can't be bothered to consider security to be as important a feature as it is.
What you do is pin dependencies, then automate regular dependency-upgrade PRs. If your test suite and your CI/CD pipeline are reliable, this should be an easy addition.
We run Dependabot in our CI pipeline to flag security upgrades, and then action them. I'd much rather have that manual intervention than non-deterministic builds.
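For anyone setting this up, a minimal Dependabot configuration looks something like the following (file path and interval are just an illustrative example):

    # .github/dependabot.yml
    version: 2
    updates:
      - package-ecosystem: "npm"
        directory: "/"          # location of package.json
        schedule:
          interval: "weekly"    # open upgrade PRs once a week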
Exactly: pin dependencies to avoid surprises, and use a CI to test compatibility of new versions, so you can deploy security updates on your own schedule, best of both worlds.
Github even bought Dependabot last year, so it's now free.
Then you can’t upgrade anything unless create-react-app releases a new version (or you eject), which, in addition to the obvious release cadence problem, might introduce other compatibility problems.
It's not like pinning means you can /never/ update. You just get to do it on your own schedule.
You can even automate updating to some degree -- running your tests against the latest everything and then locking in to those versions if all goes well.
Again, this only works for project skeletons, and not for any other package that happens to have a transitive dependency on `is-promise` (which is a lot more than project skeletons).
Maybe I'm misunderstanding how those projects work. From what I recall, they generate a project, including the package.json. So I'm not sure why they couldn't just generate the package.json with pinned versions?
I don't write much JS, and have only used create-react-app just a few times, so feel free to explain why this isn't possible.
package.json only lists top-level dependencies. package-lock.json tracks all dependencies, and dependencies of dependencies. is-promise is one of those dependencies of a dependency, which you don't have much control over.
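If you want to see where such a transitive dependency comes from in your own tree, npm can show the chain (the exact output shape varies by npm version):

    npm ls is-promise    # prints the chain of packages that pull it in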
How could a dependency-of-dependency change version if one of the direct dependencies doesn't change version? I guess, if the direct dependency isn't pinning that version? Another case of, everyone should be pinning dependencies.
Exactly, node's conventions are to allow a range of versions (semver compatible). True, if all dependencies were pinned, this wouldn't come up as often.
That also means that there would be a lot more updating when security issues are found.
I'm a novice in this area but if your project relies on a bunch of external node packages why wouldn't you download them all and host them locally or add them to version control?
Adding them to your own version control is a nightmare: Your own work will drown in all the changes in your dependencies. The repository will quickly grow to gigabytes, and any operation that would usually take seconds will take minutes.
It's also just not needed. Simply specifying an exact version ("=2.5.2") will avoid this problem. The code for a version specified in this manner does not change.
Yes, putting your dependencies in version control alongside your project is no fun. Commit history is muddied, but also if your production boxes are running on a different platform or architecture than where you and your team develop, that can make a big mess too.
That said, with a big enough team and risk-averse organisation, it can be a brilliant idea to put your dependencies in /separate/ version control and have your build process interact that way.
In that scenario, even if your dependencies vanish from the Internet (as happened with left-pad), you are still sitting pretty. You can also see exactly what changed when, in hunting for causes of regressions etc.
Checking them into your repo is called “vendoring” and it’s one way of solving the problem, yes. Personally, it’s my favorite approach. But it does have some challenges, as other commenters point out.
You don't need to pin them forevermore -- just when you don't want everything to break unexpectedly :).
When you want to upgrade your dependencies, then go ahead and do that, on your own schedule, with time and space to fix whatever issues come up, update your tests, QA, etc.
Somewhat convoluted, but if you wrap a value in a promise like this then you make it async (similar to setTimeout(fn, 0)), so in some situations you might want to keep the non-promised code as a non-promise:
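A sketch of the behavior being described (function names are illustrative): wrapping defers the callback to the microtask queue, whereas calling it directly stays synchronous.

    function runWrapped(fn) {
      Promise.resolve().then(fn);   // deferred, roughly like setTimeout(fn, 0)
    }

    function runDirect(fn) {
      fn();                         // runs synchronously
    }

    console.log('before');
    runWrapped(() => console.log('wrapped'));
    runDirect(() => console.log('direct'));
    console.log('after');
    // Output order: before, direct, after, wrapped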
Dependencies in almost any software system are fundamentally built on trust.
You trust that minor version upgrades won't break the system, or that malicious code won't be introduced. But we're human... things break.
This can happen in any ecosystem, but npm is particularly vulnerable because of its huge dependency trees, which are only possible due to the low overhead of creating, including, and resolving packages.
That's why npm has the "package-lock" file, which takes a snapshot of the entire dependency tree, allowing a truly reproducible build. Not using this is a risk.
Call me crazy, but... I don't add things to my projects without looking at the source. Mostly because it saves me from shit like this. If I see something is small enough, and easy enough to reason about, I'll just copy-pasta that motherfucker with a comment citing the source and date it was pasta'd (license permitting).
Things like this are so not worth a package, ever; it's something where, when you see it, you go "oh yeah, that's the obvious, easy way of doing this". It's not a package, it's a pattern. I can promise you, this was only ever added to packages because people wrongly assumed that since it's about "promises" (spooooky) it must be complex and worthy of packaging.
As someone who doesn't do front-end work regularly, but also sank about 3 consecutive weeks (~6-8 hours/day) in the last year into understanding generators, yielding, and promises... I can tell you, the actually scary part about all of this, is pretty much no one just reads the fucking docs or the code they're adding.
Moral of the story, especially in the browser: the reward of reading the code before adding it is enormous, you'd be surprised how often the thing you want is just a simple pattern. Taking that pattern and applying it to your specific use case, instead of imposing that pattern on your use case will give you giant wins.... Learn the patterns and you're set for life.
> Call me crazy, but... I don't add things to my projects without looking at the source.
This is manageable when you are using Packagist, this is manageable when you are using Maven, where all dependencies are flat. When compatibility issues arise they have to be dealt with upstream.
This is NOT manageable when you are using NPM, which will go fetch 30 different versions of the same package because of its crazy dependency resolution.
This is not a JS issue like people claim here, this is 100% a NPM issue because whoever designed this was too busy being patronizing on Twitter rather than making sensible design decisions.
Unless the packages you're adding are trivial, I seriously doubt you're looking that closely. Are you really going to code-review 20,000 lines of someone else's code every time you add something? 100,000? Also their dependencies? Those are very reasonable numbers, by the way.
I said I look at them, I didn't say I inspect every single line of them. My point, which you've missed, is that simply looking at the code before you add it (spending even a couple of minutes) saves a lot of problems (like the one in create-react-app).
FWIW, I also won't add something to my project if I see it has a ton of dependencies on stupid shit. Literally, I gave up on react after realizing `create-react-app` is what the community recommends. I'm glad I did too. It's an insane amount of bloat, for nothing included but a view renderer, and if that's how that community rolls... I'm gonna have to pass.
If you don't read the source, how can you claim such moral superiority? Whatever security issues, nefarious code, etc., are almost assuredly hidden down in the weeds where you're not looking. You think other programmers don't glance at the structure? Of course they do.
I wish I had as much time as you. Looking through the source code of the millions and millions of lines of code that get packaged with any modern application before you add things to your projects sounds daunting!
When would the actual function body need to change in this case? These micropackages almost never break because of the actual implementation, they break because they are unpublished (like left-pad) or because they change how they are exported (like in this case).
If the code is in your codebase it does not need a test.
This sort of cavalier attitude where 1000 dependencies are yawn-inducing in their commonality is why I feel vindicated in never having wasted my time with this kind of ecosystem. Eventually the house of cards will come down. Let's all pray that it happens sooner rather than later.
When WebAssembly gets direct DOM access we'll finally have options and we'll no longer have to tolerate JavaScript. I expect the JS community will settle down and get somewhat saner after that, too.
I suspect that over time there will be more pushback against dependencies, as I've seen in other communities, viewing dependencies as liabilities carefully chosen.
The problem isn't reading 1000+ dependencies, the problem is the 1000+ dependencies... There's no way, setting up a view renderer, in the context of a webpage, requires a 1000+ dependencies. I honestly did this exact thing with `create-react-app` and it's one of the reasons why I don't use/choose react. Too much bloat for no batteries included.
this doesn't make any sense. cra is webpack, but in a way that doesn't blow up every week. you can use react without bundlers, but what is the point. you'll be sitting there, without a dev env, no hot reload, no module resolution, minification, types, jsx, babel, ... any single one of these will get you 1000+ packages on the dev side. none of this is going into the published build of course.
Plus, pick the consensus option, any problems that come up are that option's fault. Fight for anything else, and everything's your fault. Even if it is actually better it can make you, personally, look worse.
Or the 25 people you work with across 4-8 different teams that finally settled on something that can allow people of different teams to move around without a lot of anguish.
Note that in this case the dependencies also include compilers (for several languages, because front-end projects use multiple languages, have to be compatible with varying levels of support for those languages, and there's some leeway for the consumer of the scaffolding tool as well to choose which toolchain they use - but the other options get installed as well). Do you also review the code of your compilers and runtimes? Test frameworks? Static analysers?
It was posted after "chore: fix is-promise"; they may not have realized that it's just some random pingback that GitHub shows there, and not a message that somebody has committed a fix.
It's downloaded 11 million times a week. This touches a good majority of the Node ecosystem, so there's going to be quite a lot that doesn't work until this is remedied. And I'm not sure that package-lock.json is going to save folks here because it was a minor version update.
Is it idiomatic in the JS world to always express dependencies as "version X.Y or higher", vs "version X.Y"? Most of my experience is from the java/maven world, where you're playing with fire if you don't just make it "X.Y".
There are a lot of idioms. A very common one, I think the current default, is to pin only the major version in the dependency list, and also to lock exact versions in an installer-generated lockfile following a successful install. If you find a locked version breaks your code, you adjust your dependency list, nuke the lockfile, and let a reinstall build it again.
The idea is that pinning major versions lets you get non-breaking improvements from package authors who use semver properly, and pinning exact known-good versions lets you avoid surprises in your CI builds.
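A rough sketch of that workflow with npm's own tooling (the commands are real; the flow is just an illustration):

    npm install   # resolves the ranges in package.json and records exact
                  # versions in package-lock.json
    npm ci        # in CI: installs exactly what the lockfile says, and fails
                  # if package.json and package-lock.json disagree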
It works pretty well when you start from a known good state and vet your dependencies reasonably well. The trouble here seems to be largely that CRA is designed, among other purposes, to serve people just getting into the ecosystem of which it's a part, and those people are unlikely to be familiar enough with the details I've described to be able to effectively respond.
The comparison with left-pad is easy, but this isn't at all on the same scale. It's a bad day for newbies and a minor annoyance for experienced hands. And, of course, cause for endless spicy takes about how Javascript is awful, but such things are as inevitable as the sunrise and merit about the same level of interest.
It's less about JS and more about semantic versioning (semver). You're supposed to be able to expect that the API of the library does not change on the second or third version number, only on the first one, in this format: MAJOR.MINOR.PATCH.
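As a quick reference for how npm interprets the common range specifiers (standard npm semver semantics):

    ^1.2.3   matches >=1.2.3 <2.0.0   (minor and patch drift allowed)
    ~1.2.3   matches >=1.2.3 <1.3.0   (patch drift only)
    1.2.3    matches exactly 1.2.3    (no drift)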
But as we're still doing human versioning one way or another in package management, there will always be cases where a package doesn't perfectly follow its versioning scheme or otherwise behaves unexpectedly because of a change. It's almost like we need new ways of programming where the constructs and behavior of the program/library are built up via content-addressing, so you can version it down to its exact content.
The "idiomatic way" is to use a package-lock.json, which keeps the dependencies (and transitive dependencies) at the exact version specified unless you decide to upgrade them.
Does anyone know if there's a way to upgrade a dependency of a dependency of a dependency of a dependency of a dependency in my yarn.lock without actually editing the yarn.lock, and also without waiting for five packages to update their dependencies, especially if the version is locked or constrained by semver rules in even one of the five?
For example:
Running `yarn why is-promise` in a CRA app:
`Hoisted from "react-scripts#react-dev-utils#inquirer#run-async#is-promise"`
Currently, running a `yarn upgrade-interactive --latest` doesn't indicate there are any updates, so presumably, this is still a problem upstream.
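One option, if you're comfortable with it, is Yarn's selective dependency resolutions: a `resolutions` field in package.json that forces a nested dependency to a given version regardless of what the intermediate packages ask for. A sketch (the version number is illustrative):

    {
      "resolutions": {
        "**/is-promise": "2.2.2"
      }
    }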
Also, if anyone's in a pinch right now, luckily enough, I made this yesterday, for an interview I had only a couple hours ago. I lucked out! But if anyone else might need it, maybe it'll help someone:
One reason why we may have one line packages is the demand FAANG places on applicants. On three job screens from three different companies, I was asked, "Which npm packages have you created that we'd know about?"
For many who are hell-bent on entering these companies, yet have no known packages under their belt, they very well might fire off a one line package that actually gets some downloads, to be better "prepared" when screened.
create-react-app was broken a couple of weeks ago when I tried to use the Typescript template. Some dependency in Jest had been changed to require a version of Typescript that had only been out for a few weeks, breaking everything (including create-react-app) that hadn't updated to the latest tsc. What an ecosystem.
Someone somewhere is going to organize a series of JS packages with the continuity of a standard library, full of verification and tests, with sane versioning, consistent interfaces, and so on. The npm ecosystem isn't bad, it's just unwieldily successful.
There's a neat tool called crater (https://github.com/rust-lang/crater) for Rust. It can run an experiment across every Rust package (or every popular one), so you can e.g see if some theoretically breaking compiler change actually hits anyone. Something like that could be interesting for node packages as well.
This is a prime example of where I think GitHub's acquisition of Dependabot and npm could really pay off. Imagine being able to publish a prerelease version of your library and run the CI tests of your consumers, all from within the GitHub interface. Dependabot already tracks compatibility between versions, so this would be a natural extension of that.
This seems like an issue with semver. Its idealism is not compatible with actual human behavior.
The package devs clearly violated semver guidelines and npm puts a lot of faith in individual packages to take semver seriously. By default it opts every user into semver.
If you need semver to be explained to you bottom up (lists of 42 things that require a major bump) then you don't get semver. All you have to do is think: will releasing this into a world full of "^1.0.0" break everyone's shit?
This and left-pad are extreme examples. But any maintainer with a package.json who tries to do right by `npm audit` knows that there is an endless parade of suffering at the hands of semver misuse. Most of it doesn't make the news.
I had an issue while running any command of vue-cli, and then I even created an issue for it, thinking that it might be a bug in Vue CLI v4.3.1. But I think the truth has shown itself!
Meanwhile, I can regularly override even major release versions of dependencies in Elixir without breaking changes. Dependency fickleness has always been a huge issue for me when working with node.
I hope that more packaging systems take the go modules approach and cryptographically and immutably identify their dependencies at time of addition to the project. This sort of breakage shouldn’t be possible.
This kind of breakage is perfectly possible in Go also - though the most common equivalent of "left-pad broke my project" for many Go developers is "X changed the case of their GitHub username and now all my import paths are broken".
If deps are immutable, then nothing anyone does in any other package (short of having the package repository take the code down) should be able to break your future builds.
> If deps are immutable, then nothing anyone does in any other package (short of having the package repository take the code down) should be able to break your future builds.
They are. You're only affected if you don't use a package-lock.json or start a new project (which will pull the latest versions of the dependencies).
I'm not a node expert, but I believe the problem is that most people auto-update their node dependencies (I know I do, but I only have to do it rather rarely, since I don't primarily use node), because there are so often minor security regressions that need to be fixed.
> Why are these threads filled with people who know nothing about node?
That’s quite a bad assumption on your part, based on almost no information.
I don’t know about the rest of the thread, but I’m personally quite familiar with node. A lock file doesn’t fix the same issues vendoring does. The lock file gives you an explicit list of the versions used; vendoring saves exact copies of the dependencies alongside the rest of your code.
By vendoring, anyone who is working on the project is using the exact same version of a dependency, AND you don’t have to care about an external provider (the registry being up, etc.; that’s way easier for your CI too), AND you can review dependency upgrades via git as if they were your code.
Of course that’s a mess when the JavaScript ecosystem has an infinite amount of dependencies for a hello world.
Vendoring my dependencies wouldn't save me from a rare issue I had with npm packages: I had a package that relied on an underlying API call to a machine-learning cloud API, and that API call became deprecated. Not writing code is the only sure way to have no bugs.
Every package can be a one-line package if you minify it. Lines of code as a metric for code quality is always relative.
The fact that this is a one-line package has nothing to do with the outcome. A one-line code change in a 5000-line dependency could just as easily have messed up create-react-app. The size is irrelevant.
Maybe not to this extent, but if X (where X is whatever you are thinking about) had a similar number of people using it (especially junior people), this would happen there as well.
Other languages don't publish/import packages that are one line of code. I have never seen an issue like this with any other language that I've worked with.
Any sane developer that needed a one-liner like this would just manually implement it.
Not to mention that these sorts of functions are unnecessary in languages with a good stdlib or statically typed languages like rust, etc.
Know what happens every time people like you say this here on HN? They post the one-liner they would have manually implemented in their code base and it's wrong. The one that comes to mind is the "is-negative-number" package. Yes, the geniuses of Hacker News, after finding out there was an npm package for determining whether something was a negative number, could not correctly implement that function.
You and everyone here are not as clever as you think you are. This is why people prefer known-good implementations. The maintainer here did a bad release, big fucking deal.
As a stretch target: it's a bad idea to create demons out of an assortment of posts you randomly saw on HN. This site gets 3M posts a year. You can find basically anything in there.
What happens is that we each have pre-existing images that bug us (e.g. for example, people who overrate their own genius) and as we move around in the statistical cloud, random bits of whatever we run into stick to the pre-existing image and give it form. Poof, you have a demon—but actually it just became visible. Readers with other images see other demons and arrive at other generalizations. It's not good discussion because it's really about one thing but we make it about another, and comments that are skewed in that way limit their own interestingness. (I definitely don't mean to pick on you personally. We all do this.)
You may be shocked to find that there are very novice developers as well as those with 20+ years of experience who frequent HN. Using a few poorly written comments is a strawman, unless the comments you're referring to are written by the same people you're addressing here.
> Maybe its a failure of the language when it takes a third party package to determine if a number is greater than or less than zero?
It's not a failure of the language. Javascript has comparison operators like every other language, it's entirely possible to determine if a number is greater than or less than zero without importing a third-party package.
What it is is a failure of modern JS development culture, because apparently it's anathema to even write a simple expression on your own rather than import a dependency tree of arbitrary depth and complexity and call a function that does the same thing.
Their code would have been wrong even in strongly typed languages because it considered 0 to be a negative number. What language prevents you from making that mistake?
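For what it's worth, the distinction is a one-character affair either way; the point is that the careless version is easy to write in any language (a hedged sketch, not anyone's actual code):

    // The careless version: wrongly counts 0 as negative (and coerces strings).
    const isNegativeWrong = n => n <= 0;

    // A more careful one-liner:
    const isNegative = n => typeof n === 'number' && n < 0;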
As opposed to blindly trusting and adding a dependency for a random library with a one liner?
I don't think the "don't roll your own crypto" argument really applies here. Of course we can come up with hypothetical situations where developers are incompetent or don't test their code at all. This includes armchair analysis for a post on HN, by non-javascript developers.
I would argue that it's still better than adding a dependency. Heck, you could even copy/paste the correct code.
I know I'm not a perfect programmer, so important functionality like this gets unit tested as necessary. :-)
A package with over 11 million weekly installs vs. a brand new implementation by a coworker, when I might not necessarily be around for the code review? Absolutely. I would blindly trust the package 100% of the time. Zero hesitation.
This happens because libraries installed by create-react-app depend on many other libraries (1026 transitive dependencies as of today).
As a comparison, Django, a large Python web framework, has only three dependencies (pytz, sqlparse, and asgiref), which don't have dependencies themselves.
Perhaps, but I think the JS ecosystem encourages dependency explosion like no other. Looking at a 6 or 7 year old lazily written Rails app, with a lot of functionality written throughout the years, I see about 200 gems. Creating an empty app with create-react-app, it has about 1000 packages.
You can't use the very latest version of any software in Debian at all without adding a custom repository, at which point you have the same issue. So the comparison is not apples to apples.
You can use Debian Unstable, or maybe just use stable and reliable dependencies so that your software is also stable and reliable. That would require putting in some effort, though, and we can't be having that, can we?
The "slow and steady" approach works well for mature or stagnant ecosystems, but only when the packages are small enough that distribution developers can reasonably backport security fixes. That clearly doesn't work with big programs like Chrome and Firefox, so they have to resort to shipping the latest ESR version.
Writing JavaScript on Debian is practically impossible without sidestepping the package manager in some way. In a lot of cases, the hacks you have to do to run up-to-date software on a distro like Debian decrease reliability significantly.
You can do that with NPM if you pin your dependencies to exact versions, which is the same solution that you would use for any other package manager, and basically what Debian and other Linux distros do for you. I don't know why you think this problem is somehow unique to NPM or the JavaScript ecosystem.
And yet, somehow Debian isn't in the news every few months. There's a fundamental difference in culture, for one. But the fundamental difference in approach is there, too: Debian packages are vetted; npm packages are not.
How about a rolling release like openSUSE tumbleweed then? I have been using it for years, I generally update once a week and I have never broken my system due to an update. Never.
All of this is telling users how to avoid breaking Debian, and mistakes that they ought to avoid. This isn't Debian being broken and the users being collateral damage. This isn't a symptom of the very Debian ecosystem itself being fundamentally broken.
Personally as a JS dev, the significant thing is which package it was. These stories happen all the time, so when I see them I’d rather know which package it is at a glance so I know if I’m affected.
Fun thing with building applications overusing npm is that you usually don't know exactly what packages you have. Not until you check for the specific package, so you probably don't "know at a glance" if you're affected or not.
Hah, true. I got bit by one of these two days ago where the offending package was trying to install node itself (wtf?). It was two levels deep of `yarn why` before I figured out the issue. Fortunately I've been around long enough that the first thing I did was check the package for github issues... sure enough, found a 5 hour old issue with a bunch of people complaining of failing builds. If I hadn't searched first, I probably would have banged my head for another couple hours...
I think it reflects how the evolution of the JS ecosystem strongly resembles natural evolution. This package is now a vestigial organ, but there was a time when it served a useful purpose. Other packages formed connective tissue to this package, and since those packages may still be useful, this one has stuck around.
Why would a Rust wrapper around the C++ project that is V8, which implements a garbage-collected programming language and environment, "use less ram" just by virtue of some parts of it being written in Rust?
Not really. NPM relies heavily on semver - https://semver.org/. In this case, the package that was updated bumped a minor version, which means it should have been backwards compatible, but it wasn't for later versions of Node.
Of course, you can always lock your build to exact versions of your dependencies (lock files in NPM used to be a complete cluster, in my opinion they are less of a cluster now - you can pretty much do everything you want with them but there are some gotchas that make it easy to shoot yourself in the foot). The issue is that when you run 'npm install', it will pull the latest semver-compatible versions of your dependencies.
So for everyone decrying how this is a bad example of NPM and the javascript ecosystem, I really think the opposite is true. Yes, it broke a lot of upstream dependencies, but importantly only for new builds of those items, and furthermore it was found almost immediately.
Also, of course, you can specify exact versions of your dependencies - you don't have to rely on semver. That means, though, that you need to be more vigilant about pulling in bug fixes and security fixes, and most people take the tradeoff of being comfortable pulling in patch or minor versions, but using lock files once they have a build they have verified.
The regression suite never gets to run if it shares the dependency.
And in this case the system under test wouldn't even compile, so the tests couldn't run either. So it isn't so much the regression suite saving you as it is just acting as the client of first resort.
CRA would be running the tests, not is-promise. CRA could have pinned every dep, and had a bot (dependabot) automatically run tests against every new version of every depended-upon package, and update only when those tests pass.
Potentially. If cra had pinned all their deps, and used a bot to automatically bump deps contingent on passing a comprehensive regression matrix, this would have been avoided. GitHub's Dependabot is good for this. In my opinion everybody besides libraries should pin deps and use dependabot.
Exactly. We use Renovatebot for the same purpose. It pins dependencies and creates PRs for updates.
Amazing to see how often the builds break, even sometimes after minor updates. But at least we fix them before release, and not after... :)
Yep. One of the very nice things about npm/node versus python or go or some others is that package locks and dependency pinning are possible. But few people seem to use them.
I’ve seen reports of people using a go library that gets a minor update and breaks their app, at which point they become SOL as go always installs the latest version. I myself have been working in python projects where the dockerfile simply says “pip install blah” and I get different deps than the working version. No clue why anyone would be okay with working like that.
It's not true that Go always installs the latest version of a dependency. `go get github.com/x/y@v1.3.4` installs v1.3.4 of x/y, assuming there is a tag matching that.
Install any moderately complex nodejs lib or app and it will throw tons of warnings, ignored errors, and security issue alerts. As you should with any app running in production, lock down everything and watch network traffic because there are innumerable backdoors in the JavaScript ecosystem.
My company's current production electron app has 360 npm dependencies. We have CI for the UI but not for the USB/FFI stack, so any time we have to touch that code everyone blanches.
> innumerable backdoors in the JavaScript ecosystem.
Same goes for Python and CPAN. Any "click here for fancy module" installer has this problem.
Open up any serious Python project and you'll find significant dependencies. Math, graphics, IO, stats, ML... anything you really want to do requires dependencies. In fact, one of my biggest issues with Python is the cross-platform incompatibility of many packages which makes it a terrible choice for my deployment. (Even worse if the project has Cython components!)
I often end up having to scour github for forked pywheels that aren't vetted. Which are then cloned ad infinitum.
It's a tradeoff between extensibility and open source / free software, and robustness.
Math -> You use numpy, scipy, none of these have any significant dependencies. And libraries this complex are not even available for node.
Graphics -> Python comes with included Tkinter, and others are also one include away.
Stats -> Scipy does a lot of the stuff. There is a built in package for stats. Again, no stats package has 100 dependencies, and node doesn't even have anything with even 1/10th of the features
ML -> I mean node has nothing here, nothing, while pytorch has total of six dependencies. In node, left pad might have these many.
Python doesn't need left-pad, isNumber, isInteger, isOdd, isPromise... take your pick.
> In fact, one of my biggest issues with Python is the cross-platform incompatibility of many packages which makes it a terrible choice for my deployment. (Even worse if the project has Cython components!)
But python has high performance libraries written in C, can you even use node for any of the cases where python has platform compat issues?
It is a tradeoff, and there is no comparison: Python needs far, far fewer dependencies than node. E.g., Flask has 2 total dependencies, while express has 48 direct dependencies, and even then Flask comes out ahead on features, so much so that you would need many more packages to do the same stuff with express.
I'm not comparing functionality of Node and Python. They are different beasts. I was pointing out problems inherent with Python packaging, which you didn't even address in your fanboy rant.
Agreed. Libraries and tools often don't work in a straightforward manner. Lots of tools reach below the surface and do their own tampering and monkey-patching of the runtime, module system or environment. Layer upon layer gets deposited over time. It's like doing construction on topsoil riddled with unmarked gas, water, and electrical lines.
Is this news? Happened so many times before. NPM is broken. Yarn 2.0 is never gonna take off. These problems have been fixed long before. Waste of life.
Which adds support for ES modules: https://medium.com/@nodejs/announcing-core-node-js-support-f...
However, the exports syntax requires a relative URL, e.g. ‘./index.mjs’, not ‘index.mjs’. The fix is here: https://github.com/then/is-promise/pull/15/commits/3b3ea4150...
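For context, the general shape of a valid "exports" field with relative paths looks like this (a sketch of the format, not the exact patch in that commit):

    {
      "main": "index.js",
      "exports": {
        "import": "./index.mjs",
        "require": "./index.js"
      }
    }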