A one-line package broke `npm create-react-app` (github.com/then)
599 points by tessela on April 25, 2020 | 459 comments



Digging into the reason behind the breakage, the change is this one: https://github.com/then/is-promise/commit/feb90a40501c8ef69b...

Which adds support for ES modules: https://medium.com/@nodejs/announcing-core-node-js-support-f...

However the exports syntax requires a relative url, e.g. ‘./index.mjs’ not ‘index.mjs’. The fix is here: https://github.com/then/is-promise/pull/15/commits/3b3ea4150...
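
To illustrate (a rough sketch, not the exact diff from that commit; the structure and file names here are just illustrative), the "exports" map in package.json only accepts targets that start with "./". Broken:

    "exports": {
      ".": { "require": "index.js", "import": "index.mjs" }
    }

Fixed:

    "exports": {
      ".": { "require": "./index.js", "import": "./index.mjs" }
    }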


From those commits, it seems this issue was fixed in 1h12min. That should be a new record, especially considering this is all volunteer work on a Saturday. While it's bad that things break, the speed at which this was fixed is truly amazing. A big thank you to everyone involved here.


Not sure where you're getting 1h12 from. The first issue was reported at 12:18pm (my time); the final update that fixed it was published at 3:08pm.

Not that long, but my issue with this release snafu is that:

- the build didn't pass CI in the first place

- the CI config wasn't updated to reflect the most recent LTS release of node

- the update happened directly to master (although that's up to how the maintainer wants to run their repo; it's been my experience that it's much easier to revert a squashed PR than most other options)

- it took two patch versions to revert (where it may have only taken one if the author could have pressed "undo" in the PR)


This is a good example of how terrible messy JavaScript library creation is.

There is no change to the actual functionality of the library, only to the way it is packaged, here to support something that is an "experimental" feature in Node.

It is also something that is hard to write automated tests for.


> This is a good example of how terrible messy JavaScript library creation is.

Meanwhile over in .Net-land, after 15+ years of smooth sailing (5+ if you only count from the introduction of NuGet), the transition from full framework to .Net Core has made a multi-year long migraine out of packaging and managing dependencies.

I ran into multiple scenarios where even Microsoft-authored BCL packages were broken and needed updates to resolve only packaging issues. It's a lot better now than during v1.x days, but I still have hacks in my builds to work around some still broken referencing bits.


I wonder why people won't use yarn zero installs. They are great for having reproducible builds and can work offline. You can have a CI and git hook which checks your code before deployment or pushing to git.

Another way is to pin down the specific versions without ~ or ^ in the package.json so your updates don't break stuff.
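
For example (version numbers here are just illustrative):

    "dependencies": {
      "react": "16.13.1",
      "react-dom": "16.13.1"
    }

With exact versions like these, an install today and an install next month resolve the same direct dependencies (though transitive deps still need a lockfile to be fully pinned).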


What's "yarn zero installs"? Googling did not do it for me.


That might be referring to Yarn's "offline mirror" feature. When enabled, Yarn will cache package tarballs in the designated folder so that you can commit them to the repo. When someone else clones the repo and runs `yarn`, it will look in the offline mirror folder first, and assuming it finds packages matching the lockfile, use those.

This takes up _far_ less space than trying to commit your `node_modules` folder, and also works better cross-platform.
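
If I remember the setup right, it's roughly these two lines in .yarnrc (the folder name is just a convention you pick):

    yarn-offline-mirror "./npm-packages-offline-cache"
    yarn-offline-mirror-pruning true

After that, `yarn install --offline` will fail loudly instead of hitting the network if something is missing from the mirror.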

I wrote a blog post about setting up an offline mirror cache a couple years ago:

https://blog.isquaredsoftware.com/2017/07/practical-redux-pa...

Used it on my last couple projects at work, and it worked out quite well for us.


That's quite interesting, although back in the day we did that for C dependencies that weren't packaged well, and it quickly ballooned the size of our repo since git has to treat tarballs as binaries. Even if you only update a few lines of the dependency for a patch version, you re-commit the entire 43 MB tarball (obviously that depends on the size of your tarball).


You could use Git LFS to store anything ending with a tarball extension. It's pretty well supported by most Git servers (I know GitHub and GitLab support it off the top of my head). You do need the LFS extension for Git to use it.
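
Assuming your mirrored tarballs end in .tgz, it's a one-liner:

    git lfs track "*.tgz"

which just adds `*.tgz filter=lfs diff=lfs merge=lfs -text` to .gitattributes, so the tarballs are stored as LFS objects instead of bloating the repo history.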


The other similar approach is to build in containers - and use Docker layers to contain the dependencies.
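
A common sketch of that pattern (image tag and commands are illustrative):

    FROM node:12-alpine
    WORKDIR /app
    # Copy only the manifests first so this layer stays cached until deps change
    COPY package.json yarn.lock ./
    RUN yarn install --frozen-lockfile
    # Source changes don't invalidate the dependency layer above
    COPY . .

Rebuilds only re-run the install step when package.json or yarn.lock actually change.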


verdaccio aims to do this as a proxy: https://github.com/verdaccio/verdaccio


Instead of node_modules containing source code of the packages, yarn generates a pnp.js file which contains a map linking a package name and version to a location on the disk, and another map linking a package name and version to its set of dependencies.

All the installed packages are stored in zip form in the .yarn/cache folder to provide a reproducible build whenever you install a package from anywhere. You can commit them to version control. Unlike node_modules, they are much smaller in size due to compression. You will have offline, fully reproducible builds which you can test using a CI before deployment or pushing code to the repository.

https://yarnpkg.com/features/zero-installs


This is a great feature I did not know about, thanks

I don't understand how it applies to the OP problem. Even without "zero installs", yarn all by itself with a yarn.lock already ensures the same versions as in the yarn.lock will be installed -- which will still be a reproducible build as long as a given version hasn't changed in the npm repo.

(It looks to me like "yarn zero" is primarily intended to let you install without a reliable network and/or faster and/or reduce the size of your deployment artifacts; but, true, it also gives you defense against a package version being removed or maliciously changed in the npm repo. But this wasn't something that happened in OP's case, was it? A particular version of a particular package being removed or changed in the repo?)

In this case, it was a new version that introduced the breakage, not a changed artifact for an existing version. AND the problem occurs on trying to create a new project template (if I understand right), so I think it's unlikely you'd already have a yarn.lock or a .yarn/cache?

Am I missing something? Don't think it's related to OP. But it's a cool feature!


FWIW, yarn.lock (and the lockfile for recent versions of NPM, IIRC) also keeps package hashes-- so a build is either fully reproducible and pulls down the same artifacts as the original, or it fails (if an artifact is missing or has changed).

`yarn zero` protects you against dependencies disappearing, and lets you install without network connectivity.
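
For reference, a yarn.lock entry looks roughly like this (the hashes here are placeholders, not the real ones):

    is-promise@^2.1.0:
      version "2.1.0"
      resolved "https://registry.yarnpkg.com/is-promise/-/is-promise-2.1.0.tgz#<sha1>"
      integrity sha512-<digest>

If the tarball the registry serves no longer matches that integrity value, the install fails instead of silently picking up different code.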


No. It wasn't meant for OP (is-promise) because that would require tests for the imports.

I saw some work around changing versions in the package.json and lockfiles in the github issue. Instead of that, you could just roll back to the previous commit. Way easier. The package author also changed the earlier version after fixing it.

It would stop your shit from failing at least.


That's awesome. Thanks!!


Google "yarn plug and play", rather than "yarn zero installs". There isn't much in the way of details outside of the main Yarn website -- now focussed on Yarn 2 -- which has the documentation (vs Yarn 1.n, which does not have plug and play and works the same as NPM, and has now moved to classic.yarnpkg.com)

(Edit: I'm not quite sure how this would have completely prevented the issue? P'n'p is very good and seems to be a real step forward for JS package management but surely the same issue could have occurred regardless?)


- we’ve stopped using ^ and ~ because of the unpredictability of third party libraries and their authors’ potential for causing our own apps to break. We also find ourselves forking and managing our own versions of smaller/less popular libraries. In some cases, we’ve chosen to reimplement a library.


Isn't this all stuff that you add after generating the project? For example yarn.lock is created on your first install. Having a pre-generated yarn.lock is a no-go because of the dubious decision to include the full path to the registry the package was sourced from.


I’d argue that ‘index.mjs’ is a relative URL.


Digging requires depth. 1 line modules aren’t depth.


The problems that beset the Javascript ecosystem today are the same problems that beset the Unix ecosystem, back in the 90s when there still was one of those. TC39 plays the role now that OSF did then, standardizing good ideas and seeing them rolled out. That's why Promise is core now. But that process takes a long time and solutions from the "rough consensus and running code" period stick around, which is why instanceof Promise isn't enough of a test for things whose provenance you don't control.

Of course, such a situation can't last forever. If the idea is good enough, eventually someone will come along and, as Linux did to Unix, kill the parent and hollow out its corpse for a puppet, leaving the vestiges of the former ecosystem to carve out whatever insignificant niche they can. Now the major locus of incompatibility in the "Unix" world is in the differences between various distributions, and what of that isn't solved by distro packagers will be finally put to rest when systemd-packaged ships in 2024 amid a flurry of hot takes about the dangers of monoculture.

Bringing it back at last to the subject at hand, Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today. As a former-life devotee of Stack Overflow, I of course applaud this plan, and wish them all the luck they're certainly going to need.

The impetus behind "lol javascript trash amirite" channer takes today is exactly that behind the UNIX-Haters Handbook of yore. I have a printed copy of that, and it's still a fun occasional read. But those who enjoy "javascript trash lol" may do well to remember the Handbook authors' stated goal of burying worse-is-better Unix in favor of the even then senescent right-thing also-rans they favored, and to reflect on how well that played out for them.


And your example is why we have the "lol javascript trash amirite" chorus, because as you've noted these problems were solved decades ago. Yet for some reason, the JS and npm ecosystems always seem to have some dependency dustup once or twice a year.


Yes, that's largely my point. I'm not sure why it is surprising to see an ecosystem, twenty-five or so years younger than the one I compared it to, have the same problems as that one did twenty-five years or so ago.


In one of Robert "Uncle Bob" Martin's presentations you may find the answer. The number of developers duplicates each 5 years. That means that at any point in time half of the developers have less than 5 years of experience. Add to that realization the fact that inexperienced developers are learning from other inexperienced developers and you get the answer to why we repeat the same mistakes again and again.

I guess it is a matter of time until that reality changes; we will not duplicate the number of developers indefinitely, and experience and good practices will accumulate.

Taking into account the circumstances, we are not doing so badly.


> The number of developers duplicates each 5 years

You probably mean "double" here, but the bottom line is that there is zero data to back up that claim.

He literally made up that number out of thin air to make his talk look more important.


Let's say it's 10 years, or make it 15 years, for the sake of the argument.

How does that change his original argument?


It should be fairly simple to look up people describing themselves as developers in the census data I think?


Does the census actually track that? I just did the questionnaire last night online and it didn't ask me anything about my occupation.

Or did you mean something other than the US Census (e.g. GitHub or Stack Overflow or LinkedIn profiles)?


The long form asks about your line of work. Most people get the short form.


No, I meant the US census (or whatever national census), I didn’t actually check if they asked that since it seemed like such a basic thing :/ sorry.


Not as easy as you might think, since “developers” isn't particular to software and software developers have lots of other near-equivalent terms, the set of which in use changes over time, and many of them aren't unique to software, either.

OTOH, historical BLS data is easy to look up.


Do you have the source? Sounds like an interesting talk.


I found it! :) It has a lot of content and insights.

"Uncle" Bob Martin - "The Future of Programming"

https://www.youtube.com/watch?v=ecIWPzGEbFc


That's not the source, it's the claim.

There is zero evidence for his claim that the number of developers double every five years.


Off the top of my head, coding boot camps


Pardon me if I've misunderstood you. I feel that this line of reasoning that excuses modern Javascript's mistakes on the basis of it being a young language to be spurious. We don't need to engineer new languages that recreate the mistakes of previous ones, or even worse, commit entirely new sins of their own. It's not like no-one saw the problems of the Node/JS ecosystem, or the problems of untyped languages, coming from a distance. Still, Node.js was created anyway. I would argue that it, along with many of its kindred technologies, has actually contributed a net deficit to the web ecosystem.


Okay, then, argue it.


That line of reasoning suggests progress isn't being made and we are just reliving the past.


There are multiple reasons for this failure mode, only some of them subject to social learning.

Part of the problem is a learning process, and indeed, I think the Javascript world should have learned some lessons - a lot of the mess was predictable, and predicted. Maybe next time.

But part of the problem is that we pick winners through competition. If we had a functional magic 8-ball, we'd know which [ecosystem/language/distro/OS/anything else] to back and save all the time, money and effort wasted on marketplace sorting. But unless you prefer a command economy, this is how something wins. "We" "picked" Linux this way, and it took a while.


It's also not a surprise to see a similar process of stabilization play out at a higher layer of the stack, as it previously did at a lower one. Neither is it cause for regret; this is how lasting foundations get built, especially in so young a field of endeavor as ours. "History doesn't repeat itself, but it often rhymes."


25 years is roughly one generation. A new generation grows up, has no memory of the old problems?

Same with Covid: SARS was roughly 20 years ago and people forgot there was SARS.


It ain't surprising, but rather just disappointing, that an ecosystem can't or won't learn from the trials and tribulations of other ecosystems.

EDIT: also, Node's more than a decade old at this point, so it is at least a little bit surprising that the ecosystem is still experiencing these sorts of issues.


Is it really though? Node is infamous for attracting large groups of people with notoriously misguided engineering practices whose egos far surpass their experience and knowledge.

I've been stuck using it for about 4 years and it makes me literally hate computers and programming. Everything is so outrageously bad and wrapped in smarmy self congratulating bullshit. It's just so staggeringly terrible...

So these kind of catastrophes every few months for bullshit reasons seem kind of obvious and expected, doesn't it?


NIH Syndrome is a double-edged sword that persists regardless of innovations.


This analogy doesn't hold up at all.

The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever.

But there was also the possibility, for non-software businesses, to pick a platform and stick to it. You run Sun, buy Sun machines, etc. That it was "Unix" didn't matter except to the software business selling you stuff, or what kind of timelines your in-house developers gave.

There is no equivalent in the JS world. If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns. Every JavaScript community and subcommunity has the same problem, they keep punching themselves in the face, for reasons entirely unrelated to what their "competitors" are doing. Part of this is because the substrate itself is not good at all (way worse than Unix), part is community norms, and part is the piles of VC money that caused people to hop jobs and start greenfield projects every three months for 10 years rather than face any consequences of technical decisions.

Whatever eventually hollows out the mess of JS tech will be whatever figures out how to offer a stable developer experience across multiple years without ossifying. (And it can't also happen until the free money is gone, which maybe has finally come.)


"Pick React and stick to it" is the exact parallel to your "pick Sun and stick to it". Were you not there to see how often SunOS and Solaris updates broke things, too? But those updates were largely optional, and so are these. If you prefer React 15's class-based component model, you can pin the version and stick with it. You won't have access to new capabilities that rely on React 16 et cetera, but that's a tradeoff you can choose to make if it's worth your while to do so. You can go the other way if you want, too. The same holds true for other frameworks, if you use a framework at all. (You probably should, but if you can make a go of it starting from the Lions Book, then hey, have a blast.)

I agree that VC money is ultimately poison to the ecosystem and the industry, but that's a larger problem, and I could even argue that it's one which wouldn't affect JS at all if JS weren't fundamentally a good tool.

(To your edit: granted, and React, maybe and imo ideally plus Typescript, looks best situated to be on top when the whole thing shakes out, which I agree may be very soon. The framework-a-week style of a lot of JS devs does indeed seem hard to sustain outside an environment with ample free money floating around to waste, and React is both easy for an experienced dev to start with and supported by a strong ecosystem. Yes, led by Facebook, which I hate, but if we're going to end up with one de facto standard for the next ten years or so, TS/React looks less worse than all the other players at hand right now.)


> React is both easy for an experienced dev to start with and supported by a strong ecosystem.

I wouldn't say getting started with ReactJS is easy (or that it's properly supported). Each team that uses React within the same company uses a different philosophy (reflected in the design) and sometimes these flavors differ over time in the same team. We're back to singular "wizards" who dictate how software is to be built, while everyone else tinkers. It's a few steps from custom JS frameworks.


    The UHH is a fun read, yes, but the biggest real-world
    problem with the Unix Wars was cross-compatibility. 
    Your Sun code didn't run on Irix didn't run on BSD 
    and god help you if a customer wanted Xenix. 
    OK, you can draw some parallel here between 
    React vs. Vue vs. Zeit vs. whatever.
    
    But

You made your point, proved yourself wrong, and then went ahead ignoring the fact that you proved yourself wrong.


>The UHH is a fun read, yes, but the biggest real-world problem with the Unix Wars was cross-compatibility. Your Sun code didn't run on Irix didn't run on BSD and god help you if a customer wanted Xenix. OK, you can draw some parallel here between React vs. Vue vs. Zeit vs. whatever

POSIX is a set of IEEE standards that have been around in one form or another since the 80s, maybe JavaScript could follow Unix's path there.


The existence of such a standard doesn't automatically guarantee compliance. There are plenty of APIs outside the scope of POSIX, plenty of places where POSIX has very under specified behavior, and even then, the compliance test suite doesn't test all of the rules and you still get tons of incompatibilities.

POSIX was, for the most part, not a major success. The sheer dominance of Linux monoculture makes that easy to forget, though.


Of course it doesn't guarantee compliance, but like all standards it makes interop possible in a predictable way, e.g. some tcsh scripts run fine under bash, but that's not by design. The inability or unwillingness of concerned parties to adopt the standard is a separate problem. This is why "posixly" is an adverb with meaning here.


This is slightly off-tangent, but as someone who has written production software on the front-end (small part of what I do/have done) in:

Vanilla -> jQuery -> Angular.js -> Angular 2+, React pre-Redux existence -> modern React -> Vue (and hobby apps in Svelte + bunch of random stuff: Mithril, Hyperapp, etc)

I have something to say on the topic of:

> "If you pick React, you're not getting hurt because Vue and React are incompatible, you're getting hurt because the React shit breaks and churns."

I find the fact that front-end has a fragmented ecosystem due to different frameworks completely absurd. We have Webcomponents, which are framework-agnostic and will run in vanilla JS/HTML and nobody bothers to use them.

Most frameworks support compiling components to Webcomponents out-of-the-box (React excepted, big surprise).

https://angular.io/guide/elements

https://cli.vuejs.org/guide/build-targets.html#web-component

https://svelte.dev/docs#Custom_element_API

If you are the author of a major UI component (or library of components), why would you purposefully choose to restrict your package to your framework's ecosystem? The amount of work it takes to publish a component that works in a static index.html page with your UI component loaded through a <script> tag is trivial for most frameworks.

I can't tell people how to live their lives, and not to be a choosy beggar, but if you build great tooling, don't you want as many people to be able to use it as possible?

Frameworks don't have to be a limiting factor, we have a spec for agnostic UI components that are interoperable, just nobody bothers to use them and it's infuriating.

You shouldn't have to hope that the person who built the best "Component for X" did it in your framework-of-choice (which will probably not be around in 2-3 years anyways, or have changed so much it doesn't run anymore unless updated)
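
For anyone who hasn't touched them: the whole API is a class and a registration call (the element name here is made up), and the result works in plain HTML or inside any framework's templates:

    // framework-agnostic custom element
    class HelloBadge extends HTMLElement {
      connectedCallback() {
        this.textContent = `Hello, ${this.getAttribute('name') || 'world'}!`;
      }
    }
    customElements.define('hello-badge', HelloBadge);

    // usage anywhere: <hello-badge name="HN"></hello-badge>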

---

Footnote: The Ionic team built a framework for the singular purpose of making framework-agnostic UI elements that work with everything, and it's actually pretty cool. It's primarily used for design systems in larger organizations and cross-framework components. They list Apple, Microsoft, and Amazon as some of the people using it in production:

https://stenciljs.com/


No one uses them because SSR is either non-existent or clunky with them.

Ignoring a common use case when inventing something is a good way to get your shit ignored in turn. Which is what happened.


Web components aren't really there yet. They will be two or three years from now. Some time between now and then, I expect React will gain the ability to compile down to them, which shouldn't be too hard since web components are pretty much what happens when the React model gets pulled into core.


You can compile React to Webcomponents with community tooling, the core framework just doesn't support them:

https://github.com/adobe/react-webcomponent

By "aren't really there yet", what do you mean? If you mean in a sense of public adoption and awareness, totally agree.

If you mean that they don't work properly, heartily disagree. They function just as well as custom components in any framework, without the problem of being vendor-locked.

You may not be able to dig in to the internals of the component as well as you would a custom build one in your framework-of-choice, but that's largely the same as using any pre-built UI component. You get access to whatever API the author decides to surface for interacting with it.

A properly built Webcomponent is generally indistinguishable from consuming any other pre-built UI component in any other framework (Ionic built a multi-million dollar business off of this alone, and a purpose-built framework for it).


Very unlikely. Web components and React are trying to solve different problems, and the React team has repeatedly said this isn't going to happen.


> nobody bothers to use them

Here's the sad but unavoidable truth: the main purpose of Javascript currently is to keep Javascript developers employed.


Spoken like someone who's never seen what people perpetrate in, say, Java.


> Deno appears to be trying to become the Linux of Javascript

Deno always sounded to me more like "the Plan 9 of Javascript", to be honest. It seems to be better (yay for built-in TypeScript support! Though I have my reservations about the permission management, but that's another discussion) but perhaps not better enough (at least just yet) to significantly gain traction.


The permissions management is a little tricky to think about at first, but once you get the hang of it I think it's actually quite nice. Setting strict permissions on CLI tools helps to ensure that the CLI isn't doing anything nefarious when you're not looking (like sending telemetry data). Since this CLI has --allow-run, I can also have it execute a bin/server script that _does_ have network and read/write permissions, but only in the current app directory.
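
Roughly what that looks like on the command line (file names here are hypothetical):

    # the CLI gets no network or disk access, only the ability to spawn processes
    deno run --allow-run cli.ts

    # the server it spawns is granted network plus read/write in the app dir only
    deno run --allow-net --allow-read=. --allow-write=. bin/server.ts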


The problem I saw was how quickly you need to open up the permissions floodgates. I saw them live-demo a simple http server, and to do something as basic as that you need to open up full file system and network access. So if you’re doing anything like setting up a server (i.e. one of the core things one does when using a server-side scripting language), you’re back to square 1.


Ah never mind, I see they now have finer grained scopes. That should help.


Deno was always Typescript-first fwiw


I have doubts about how this could possibly work. The idea is you pull a .ts file directly, right? Then your local ts-in-deno compiles that to extract typedefs for intellisense/etc and the JS. What happens when it was created for a different version of typescript than what you're running? Or if it was created targeting different flags than what you're using? This will cause lots of problems:

I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken. In node land, you compile the TS down to a common target before distributing so you don’t have this problem.

Similar, I’m using 3.8 and package upgrades to 3.9 and starts using some new builtin types that aren’t present in my TS. Now my package is broken. Previously you’d export a .d.ts targeting a specific version and again not have this problem.

Or, I want to upgrade to 3.9 but it adds some validations that cause my dependencies to not typecheck, now what?

Or, I’m using strictNullChecks. Dependent package isn’t. Trying to extract types now throws.

I've brought these all (and many other concerns) up to the deno folks on numerous occasions and never gotten an answer more concrete than "we'll figure out what to do here eventually". Now 1.0 is coming, and I'm not sure they've solved any of these problems.


> I’m running my project with ts 3.6. Library upgraded to 3.7 and adds null chaining operators. Now my package is broken.

Isn't this similar to not upgrading node and using an updated version of an npm package that calls a new function added to the standard library? All npm packages have a minimum node version, and similarly all deno code has a minimum deno version. Both use lockfiles to ensure your dependencies don't update unexpectedly.

> Or, I’m using strictNullChecks. Dependent package isn’t.

This definitely sounds like a potential problem. Because Deno enables all strict checks by default, hopefully library authors will refrain from disabling them.


Node updates much less frequently than TS, so even if it was a problem before, it’s more of a problem now.



Rephrase: people use new TS features much more often than they use new Node features.


That might be true in general, but I seem to run into problems with the two with about equal frequency. One of the recent ones I ran into with node was stable array sort.


Yes, npm package maintainers spend a lot of time on node version compatibility. Here is a quote from prettier on their recent v2 release:

> The main focus should be dropping support for unsupported Node.js versions.

https://github.com/prettier/prettier/issues/6888


On the other hand, trying to set up a typescript monorepo with shared/dependent projects is a huge pain since everything needs to be transpiled to intermediary JS, which severely limits or breaks tooling.

Even TS project references make assumptions about the contents of package.json (such as the entry file), or how the compiler service for VsCode preloads types from @types/ better than for your own referenced projects, which sadly ties TS to that particular ecosystem.

Language version compatibility is a good point, but perhaps TSC could respect the compiler version and flags of each package's tsconfig.json, and ensure compatibility for minor versions of the language?

Since I enjoy working in TS I'm willing to wait it out as well, the pros far outweigh the cons. Now that GitHub/MS acquired NPM, I have hopes that it will pave the way to make TS a first-class citizen, though I don't know if Deno will be part of the solution or not.


> TSC could respect the compiler version and flags of each package's tsconfig.json

That’s the problem - there is no tsconfig.json. You’re only importing a single URI.


I see. While I don't know the details, it seems it would promote the use of "entry/barrel" files once again.


> running code straight from wherever on the Internet it happens to live today.

This, exactly this. Young me thought this was a point of the whole thingy we call Internet.

And exactly that is what I like about QML from Qt. Just point to a file and that's it.


Go tried it; went over like a lead balloon. Theory: lead balloons don't fly anywhere.


How is it a lead balloon? Go got super popular in the period before /vendor and Dep (later modules). Yes, people wanted and got versions too, but the URL part stayed. ISTM, they had a Pareto optimal 20% piece of the puzzle solved and bought them selves time to solve the other 80% years later.


Go still identifies packages by URL. The recent modules feature just added the equivalent of lockfiles like npm, yarn, cargo, etc. It also added some unrelated goodies like being able to work outside of $GOPATH.


> Deno appears to be trying to become the Linux of Javascript, through the innovative method of abandoning the concept of "package" entirely and just running code straight from wherever on the Internet it happens to live today.

I really like Deno for this reason. Importing modules via URL is such a good idea, and apparently it even works in modern browsers with `<script type="module">`. We finally have a "one true way" to manage packages in JavaScript, no matter where it's being executed, without a centralized package repository to boot.
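
The usual hello-world looks something like this (the std version pin is illustrative; check whatever is current):

    import { serve } from "https://deno.land/std@0.50.0/http/server.ts";

    const s = serve({ port: 8000 });
    for await (const req of s) {
      req.respond({ body: "Hello World\n" });
    }

No package.json, no node_modules; the URL is the package identity, and Deno caches the module locally after the first fetch.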


Then again, this broke a package that, by its very nature, isn't running in production. And the problem was solved within three hours.

So I'm not sure how much everything-used-to-be-great-nostalgia is justified here.


Someone rolls out code where a serious bug fell through QA cracks, and appears to be breaking a mission-critical path. Your biggest client is on the phone screaming FIX IT NOW. Three hours is an eternity.


Screaming "FIX IT NOW" because bootstrapping a new React app isn't working? Who, what, when, where?!


You roll back one version. Problem is fixed in thirty seconds.


Let’s add: appears to be breaking mission critical path that also slipped through cracks in QA. Mistakes happen, run CI/CD before getting to the mission critical path.


My development environment is my production environment.


F


I think you missed my point so let me clarify: If your job is to develop software, then your computer is your production environment. It's where you run your production - your development. This is hopefully separate from where your customers run development.


Only as much as it ever is. That's why I'm making fun of it.


I remember the beginning of React (before Webpack), when server compilation looked fine and that magic worked as <script src="react.js"></script> in the browser. It looked like a new era where HTML was fixed. But no, we have 15 standards now. Everything fell apart for me when I found a 3-line Webpack module with a 20-line Readme description. We have 1000 modules and 1000 weak points after that. React has x1000 overhead.

Any package and package manager has pain points:

- no standards, api connection issues (different programming styles and connection overhead)

- minor version issues (like this 1-hour 0-day bug)

- major sdk issues (iOS deprecate OpenGL)

- source package difference (Ubuntu/CentOS/QubesOS need a different magic for use same packages)

- overhead by default everywhere, which produces multiple issues


I'm a developer, but I'm also on-call 24/7 for a Node.js application. The number of people here saying "this is why you don't use dependencies" or "this is why you vendor your deps" is frustrating to see. No one _but no one_ who has managed complex enough systems will jump on the bandwagon of enterprise-ready, monolithic and supported over something like Node.js. I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

There are trade-offs, absolutely. Waiting on a vendor to fix a problem _for months_, while sending them hefty checks, is far inferior to waiting 3 hours on a Saturday for a fix, where the actual issue only affects new installations of a CLI tool used by developers and can trivially be sidestepped. If anything, it's a chance to teach my developers about dep management!

I'm positive my stack includes `is-promise` about 10 times. And I have no problem with that. If you upgrade deps (or don't) in any language, and don't have robust testing in place, the sysadmin in me hates you - I've seen it in everything from Go to PHP. There is no silver bullet except pragmatism!


>I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

Sadly, I dream of doing this very thing every day. I'm at that notch on the thermometer just before "burned out". I love creating a working app from scratch. However, I'm so sick of today's tech. The app stores are full of useless apps that look like the majority of other apps whose sole purpose is to gather the user's personal data for monetizing. The web is also broken with other variations of constant tracking. I'm of an age where I remember time before the internet, so I'm not as addicted as younger people.


Send me a message if you want - I'd love to share what I'm building with you, as it is intended to resolve that exact feeling. I sympathize entirely.


If it's a log cabin, just tell me where to be, and I'll show up with hammers and saws!


There's no silver bullet, you're absolutely right, but does that mean there isn't room for improvement? Or that you shouldn't try? Dropping all dependencies is extreme for sure, but to argue against something as simple as vendoring is a bit odd.


You're correct - there is room for improvement. The "npx" tool is an easy place to start! And absolutely agreed, dropping dependencies is extreme and vendoring not so much - but in my experience vendoring often means "don't ever touch again until a bad security issue shows up". I was being a little bit too snarky in my comment tho, absolutely :)


Vendoring causes more problems than it solves. There are plenty of things that could be improved about the node ecosystem, but a lot of the criticism isn't based on logic; there seems to be a large population on HN who just inherently hate large numbers of dependencies and will grasp for any excuse to justify that hate.


Funny, I run an “enterprise” stack almost entirely made of Java. I wouldn’t trade it for NodeJS for the world.

Making upstream changes indeed would be very, very hard. But I never have to make upstream changes because they’ve spent quite a large amount of effort on stability.


I'm also making enterprise-grade software with quite a few external dependencies. I had to email the developers of the biggest dependency multiple times because of bugs but they were all fixed within a few weeks in a new patch release. They also went out of their way to provide me with workarounds for my problems. In the NPM world you are on your own.


Why? You can email package maintainer just as well, or better yet - open an issue on GitHub.


Sure, but JavaScript and J2EE aren't the only options. You can use a language with more built-in functionality, reduce the use of unnecessary external libraries, and/or limit those libraries to ones from trusted sources.


I honestly have no idea if you prefer Node.js or J2EE after reading this comment.


They mean that they would only trade Node.js for J2EE the day they can also quit (so that they don't have to use J2EE).


J2ee was a hell, but Java EE was quite decent!

Pragmatism - do programming to solve real-life problems rather than create a broken ecosystem which requires constant changes (and learning just to be on top of them) to fix a bad design


> I'd trade in my JavaScript for J2EE about as fast as I'd quit tech and move up into the mountains.

I think the snark is obscuring the point of this comment.


And the source code of the library is:

   function isPromise(obj) {
     return !!obj && (typeof obj === 'object' || typeof obj === 'function') && typeof obj.then === 'function';
   }


Here's my off-the-cuff take that will not be popular.

A function like this should be a package. Or, really, part of standard js, maybe.

A) The problem it solves is real. It's dumb, but JS has tons of dumb stuff, so that changes nothing. Sometimes you want to know "is this thing a promise", and that's not trivial (for reasons).

B) The problem it solves is not straightforward. If you Google around you'll get people saying "Anything with a .then is a promise' or other different ways of testing it. The code being convoluted shows that.

Should this problem be solved elsewhere? Sure, again, JavaScript is bad and no one's on the other side of that argument, but it's what we have. Is "just copy-paste a wrong answer from SO and end up with 50 different functions in your codebase to check something", like other languages that make package management hard, so much better? I don't think so.


No. There is no reason why it should be a package by itself. It should be part of a bigger util package, which is well maintained, tested, and with many maintainers actively looking at it, with good processes, such as systematic code reviews, etc.

At work, our big webapp depended at some point indirectly on "isobject", "isobj" and "is-object", which were all one-liners (some of them even had dependencies themselves!!). Please let's all just depend on lodash and it will actually eventually reduce space and bandwidth usage.


Yep, in Java-land this would be in an Apache Commons (or Guava, etc) module with dozens and dozens of other useful functionality.


Yeah, but the question is how far should we go with that. Should we do :

    const isFalsy = require("is-falsy");
    const isObject = require("is-object");
    const isFunction = require( "is-function" );
    const hasThen = require( "has-then" );

    function isPromise(obj) {
      return !isFalsy(obj) && ( isObject(obj) || isFunction(obj) ) && hasThen( obj );
    }
Just because a line of code is more than 50 characters doesn't mean that we need a new library for it.


All of those can pretty much be handled natively, and obviously. They're all primitive:

isFalsy would be !, isObject would use typeof, isFunction would use typeof.

Where a library becomes helpful is when you have:

* A real problem (none of those are real problems, and the npm packages for them are essentially unused jokes)

* A solution that is not intuitive, or has a sharp edge, or requires non-obvious knowledge, or does not have a preexisting std approach

Checking for a promise, given the constraints of having multiple types of promises out in the world, falls into both of those. Checking if something is falsey, when Javascript provides !, does not fall into either.


I think all of the above might already be libraries on npm. From what I remember, npm has isInteger, isPositive, is-odd, is-even.


All of the packages you mentioned are maintained by the same guy.


Have you seen his twitter? It's incredibly cringey. I don't understand how someone could be so arrogant to claim millions of companies use his software, when his software is isFalse. Not to mention his hundreds of packages that literally just output an emoji.


Reminds me of Dr. Evil’s monologue about his Father making ridiculous claims about inventing the question mark.


isFalsy is just “!”; I don't think we need a new library for a more verbose way to express a one-character unary operator, no, nor does it meet the standard of “The problem it solves is not straightforward” proposed upthread.



I'm not surprised it exists (and literally is just a more verbose, indirect way to invoke “!” that nevertheless is a 17 sloc module with a bunch of ancillary files that has one direct and, by way of that one, 17 second-order and, while I didn't check further, probably an even more ridiculous number of more distant, transitive dependencies.)

I'm just saying it's neither necessary nor consistent with the standard for when a library is a good idea proposed upthread, so suggesting it as part of an attempted reductio ad absurdum on that standard is misplaced.


> 0 Dependents

if a tree falls in the woods...?


> Weekly Downloads: 0

as of 2020-04-26T00:39+00:00


>Or, really, part of standard js, maybe.

I think this would be the solution. I feel like a lot of the NPM transitive dependency explosion just comes from the fact that JavaScript is a language with a ton of warts and a lack of solid built-ins compared to e.g. Python. Python also has packages and dependencies, but the full list of dependencies used by a REST service I run in production (including a web framework and ORM) is a million times smaller than any package-lock.json I've seen.


This is correct. I post the same thing every time one of these JS dependency hell issues pops up, but it's the case because it's true: The problem is the lack of a standard library. It's not that people don't know how to write a left-pad function, it's that it's dumb to rewrite it in every project and then remember what order you put the arguments in, etc. So people standardize, but they're standardizing on millions of different little packages.

I think the effort that goes into all the JS syntax and module changes would be better put into developing a solid standard library first.


There's a rich standard library, you don't need a package to left-pad.
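
For example, padding is built in since ES2017:

    "5".padStart(3, "0");   // "005"
    "hi".padEnd(5, ".");    // "hi..."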


Like...

    x instanceof Promise 
It works for standard promises; sure, there are non-standard promises, ancient stuff, that to me shouldn't be used (and a library that uses them should be avoided). So why do you need that code in the first place?

Also, that isPromise function will not work with TypeScript: imagine you have a function that takes something that can be a promise or not (and this is also bad design in the first place), but then you want to check if the argument is a Promise; sure, with `instanceof` the compiler knows what you are doing, otherwise not.

Also, look at the repo: a ton of files for a 1-line function? Really? It takes you less time to write that function yourself than to include that library. But you shouldn't have to write that function in the first place.


Your implementation is broken even if everything uses native Promises. I don't know how many times this exact thread needs to happen on HN (as it has many times before) until people realize their "no duh" implementations of things are actually worse than the thing they're criticizing.

Make an iframe.

In the iframe:

    > window.p = new Promise(() => {});
From the parent window:

    > window.frames[0].p instanceof Promise
    false
Congrats! Your isPromise function was given a Promise and returned the incorrect result. The library returns the correct result. Try again!


In case someone else is also confused by this, it seems that instanceof checks whether the object's prototype matches, and these prototypes are not shared across different contexts, which iframes are [0]. (Though I would still like to know why it works like this.)

[0] https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


For security reasons - you can modify the prototypes and you wouldn't want iframes to inherit that.


No, it was not given a Promise. It was given a foreign object from another window. If you want to inspect another window you should not be reusing code that is designed for single threaded operations. Instead, have a layer that translates, serializes, or explicitly defines an interface that the objects we are dealing with are foreign and need to be transformed. Then the abstraction implementation details of dealing with multiple windows become a concern of a single layer and not your entire codebase. Implicitly and magically treating a foreign window as this window, will fail in many subtle and unknown ways. The "brokenness" you mention is not in that implementation, it is correctly breaking, telling you that what you are doing is wrong, then you try to bypass the error instead of fixing your approach.


For foreign-origin iframes, that's exactly what people do using `postMessage`. But for same-origin iframes there's no need since you can access the iframe's context directly. So people can (and do) write code exactly like this that accesses data directly.

And it was given a Promise. You just shouldn't use instanceof in multi-window contexts in JavaScript. This is why built-ins like `Array.isArray` exist and should be used instead of `arr instanceof Array`. Maybe you'd prefer to write to TC39 and tell them that `Array.isArray` is wrong and should return false for arrays from other contexts?

There's no use jumping through hoops to avoid admitting that OP made an error. They were wrong and didn't think of this.


GP's comment screams XY problem which seem to be increasingly common these days.


If you think pointing out a bug due to an edge case someone didn't think of is the XY problem, I'm afraid you don't know what the XY problem is.


The problem was to get the promise out of the iframe when you shouldn't do this directly in the first place.

This literally is an XY problem: "I need to do A but it's giving me bad results, what do I need to add?" - "Don't use A, it's bad practice. Use B instead and keep using built-in tools instead of hacking something together" In this case use instanceof instead of is-promise because it's a hack around the actual problem of getting objects out of a different context that was explicitly designed to behave this way.

I'm afraid that you don't know what an XY problem is.

JavaScript developers always seem to think they are the smart ones after their 6 weeks of some random bootcamp and then you end up with some crap like NPM where a single line in a package out of hundreds maintained by amateurs can break everybody's development environment.


[flagged]


Yikes, please don't break the site guidelines like this.

https://news.ycombinator.com/newsguidelines.html


I didn't even mean you but the general JS community but if you want to think I did, okay, feel free to do so.


Though, let's also appreciate just how niche that case is. I'd be surprised if more than 0.5% of the JS devs reading this will ever encounter that scenario where they are reaching across VMs like that in their life.

`obj instanceof Promise` and `typeof obj.then === 'function'` (is-promise) are much different checks. Frankly, I don't think either belongs in a library. You should just write that code yourself and ponder the trade-offs. Do you really just want to check if an object has a then() method or do you want to check its prototype chain?
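
Concretely (nothing here is from the library, just illustrating the difference):

    const thenable = { then(resolve) { resolve(42); } };

    thenable instanceof Promise;          // false: not built from the Promise constructor
    typeof thenable.then === 'function';  // true: the duck-typed check is-promise does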


TypeScript supports 'function f(x: any): x is T' as a way to declare that if f returns true, x may pass as type T

https://www.typescriptlang.org/docs/handbook/advanced-types....
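
A hedged sketch of what that looks like for this case (the names are mine, not the package's):

    function isPromiseLike<T = unknown>(x: any): x is PromiseLike<T> {
      return !!x && typeof x.then === 'function';
    }

    declare const value: number | PromiseLike<number>;
    if (isPromiseLike<number>(value)) {
      value.then(v => console.log(v)); // narrowed to PromiseLike<number>
    } else {
      console.log(value + 1);          // narrowed to number
    }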


This package has an index.d.ts file that utilizes exactly this.


1) I'm not defending the implementation of is-promise. I don't care to, I'm not a javascript developer.

2) > sure there are non standard promises, ancient stuff, that to me shouldn't be used

If you're building a library, or maintaining one that's been built over many years, you can't easily make calls like that.


> If you're building a library, or maintaining one that's been built over many years, you can't easily make calls like that.

Well, you can, and in the JS ecosystem you'll often find cases where there are two libraries (or two broad classes of libraries) for a certain function that make different choices, one of which makes the simple, modern choice that doesn't support legacy, and one that does the complex, messy thing necessary to deal with legacy code, and which you use depends on your project and its other constraints.


OK, then the legacy library can't easily make that choice. I'm not saying every single javascript developer should be accepting async or sync callbacks, just that some libraries are choosing to do that for legitimate reasons.


Are you saying that this isPromise package will not play well with TypeScript? One of those files (index.d.ts) solves the TypeScript problem using type predicates. TypeScript WILL know that the object is a promise if it returns true.


This function should absolutely NOT be a package. The problem is that JS has a very minimal standard library and despite tons of money going into the system, nobody's had the good sense to take a leadership role and implement that standard. In other languages you don't need to include external packages to determine the types of objects you're dealing with, or many other things.


And there's an interesting discussion to be had about whether it shouldn't be one of those snippets that everyone copies from Stack Overflow instead. And how much trouble in other ways that alternative has caused.


I don't think it should be a package.

One-liners without dependencies like this should live as a function in a utility file. If justification is needed, there should be a comment with a link to this package's repo.


What's the difference between a utility file and a package? That seems like a distinction without a difference to me.

If you use the same one liners in more than one project and you copy that utility file over, the line gets even fuzzier.


The utility file will never be updated and break your build without you doing it yourself.


Also you can just read the code the same way you read any other code. And since it's in your codebase and git diffs, you will read it.

Because the implementation detail of is-promise actually is important. It just checks if an object has a .then() method. So if you use it, it's just as important that you know the limitation.

Not everything needs to be swept under the rug.


Also: The utility file will never be updated and fix existing issues within the utility itself (unless you look up the package and diff it yourself). It's a trade-off.


As the commenter who suggested keeping it in a utilities file, I'd say that the trade-off is heavily weighted to not importing it as a package.

When you cribbed the code you should have completely understood what exactly the package was doing, and why, and known what issues it would have had. Since it's a one-liner, it is transparent. Since it is without dependencies, it is unlikely to fail on old code. So it's unlikely to have existing issues and unlikely to develop new issues.

Of course, if you end up using new features of the language in your code, it may fail on that, but the risk of old stuff failing should have already been factored in when you decided to upgrade. In fact, the one-liner solves this better since you decide the pace of adaptation of your one-liner to the new features, not the package maintainer.


That's the trade-off I would most likely take in the "isPromise" case. But the opening question was a generic one ("What's the difference between a utility file and a package"), so the answer should reflect both sides.


I'd say that it should rather be a part of the type system. Some kind of `obj isa Promise` should be the way to do this, not random property checks. But that's JS...


The thing is that there is the Promise "class", which is provided by the environment, but there is an interface called PromiseLike, which is defined as having a method called then that takes one or two functions. Now, JS doesn't have nominal typing for interfaces, so you have to do "random property checks".

Typescript partially solves that by declaring types, but if you have an any variable, you still need to do some probing to be able to safely convert it to a PromiseLike, because TypeScript goes to great lengths to not actually produce physical code in its output, attempting to be just a type checker.

Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided, but alas, this is not the case.


shouldn't it then be called is-promise-like? Also, if you're being loose about it anyways, can't you simply just go for `if (obj && typeof obj.then == 'function')` and call it a day? I'd say that's short enough to include your own version and not rely on a package for.

I think that module over complicates it as it is, and most people don't need that level of complication in their code.


> Perhaps if TS or an extension allowed "materializing" TS, that is `value instanceof SomeInterface` generated code to check for the existence of appropriate interface members, this could be avoided

It's not perfect and a bit of a bolt-on, but io-ts works reasonably well in this area:

https://github.com/gcanti/io-ts


In theory `x instanceof Promise` would work, but the reason for this package is that there are many non-standard Promise implementations in the JS world.


It wouldn't work even if everything were native – see my reply above.


That applies for browsers, yes (though I'd argue is a rare edge-case), but create-react-app is a Node.js application.


create-react-app isn't even using is-promise directly. It's several hops in the dependency graph away.


Promises were not always part of the standard and for many years were implemented in user space, by many different implementations. Using duck typing like this was the only way to allow packages to interact with each other, as requiring an entire stack to say only use Bluebird promises is not realistic at all.


I'm totally with you on this. It's dumb that this is a problem but it is actually a problem.


I think you're on the right track. We all (I hope) agree that stuff like this should be standardized. But that's not the same as "should be a package".

At the very least, the W3C, or the Mozilla Foundation, or something with some kind of quasi-authority should release a "JS STD" package that contains a whole bunch of helper functions like this. Or maybe a "JS Extras" package, and as function usage is tracked across the eco-system, the most popular/important stuff is considered for addition into the JS standard itself.

Having hundreds of packages that each contain one line functions, simply means that there are hundreds of vectors by which large projects can break. And those can in turn break other projects, etc.

The reason, cynically, that these all exist as separate packages is because the person who started this fiasco wanted to put as high a download count as possible on his resume for packages he maintains. Splitting everything up into multiple packages means extra cred for doing OSS work. Completely stupid, and I'm annoyed nobody has stepped up with a replacement for all this yet.


A function like this should not be something that anyone even thinks of writing or using.

In properly designed languages, values have either a known concrete type, or the interfaces that they have to support are listed, and the compiler checks them.

Even in JavaScript/TypeScript, if you are using this, you or a library you are using are doing it wrong, since you should know whether a value is a promise or not when writing code.


This function is most likely an artifact of before promises got standardized. One way promises took off and became so ubiquitous is different implementations could interop seamlessly. And the reason for that is a promise was defined as 'an object or function having a then method which returns a promise when called'.

Doesn't excuse the JS ecosystem and JS as a whole, which truly is a mess. But there's a history behind these things.


I think the point of the comment is: you should not be testing for this at all.

If your API works with promises, call .then() on what is handed to you. That's it. Don't make up emergent, untestable behavior on the spot.


You need to do this test if you are creating a promise implementation. That was my point, there is a reason code like this exists.


Why would an implementation need to test for it?

ISTM that a framework may need to test for promiseness if it calls promises and functions differently, but it can and should be done as a utility in the framework, not as a separate package.


I agree with that. I have no idea why it’s in a separate package. But I can say that about many packages :).

It’s possible to just treat everything as a promise by wrapping results in Promise.resolve(), but that can have performance implications that some frameworks might want to avoid by only going down the promise route when they have to.

For promise implementations, if the callback to then() returns a promise, the promise implementation detects that and resolves that promise behind the scenes: http://www.mattgreer.org/articles/promises-in-wicked-detail/...
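
A rough sketch of that fast-path idea (names here are hypothetical, not from any particular framework):

    // Duck-type check, same idea as is-promise
    const isPromise = v =>
      !!v && (typeof v === 'object' || typeof v === 'function') &&
      typeof v.then === 'function'

    // Handle a callback's result: only go down the (asynchronous) promise
    // route when the value actually looks like a promise.
    function handleResult(result, next) {
      if (isPromise(result)) result.then(next)   // async path
      else next(result)                          // sync path, no extra microtask
    }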


> Or, really, part of standard js, maybe. See:

It's a part of node, at least: https://nodejs.org/docs/latest-v12.x/api/util.html#util_util...
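
For reference, a quick sketch of how it behaves (it checks for a native Promise, so thenables from userland promise libraries are not recognized):

    const { types } = require('util')

    types.isPromise(Promise.resolve(1))   // true  – native promise
    types.isPromise((async () => 1)())    // true  – async functions return native promises
    types.isPromise({ then() {} })        // false – plain thenable, not a native Promise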


This will work for a standard Promise, which is great, but not for weirdo made-up promises. It was also only released, I think, in 2018.

It's one thing if you own the entire codebase, but if you're building a popular, multiple-years-old library/framework, you can't make the same assumptions.


I'm only finding out about it today TBH. Usually I go for what this library does:

  function isPromise(obj) {
    return typeof obj?.then === 'function'
  }


Shouldn't a JS framework exist that includes these basic static checks in its core and offers them as built-in methods? Why load this as an external package, why not copy the code and maintain it locally?


And it doesn't even check if it's a Promise. It's violating its own naming contract. At the least it should be called isPromiseLike. To check if something is actually a Promise, all you need to do is `foo instanceof Promise`.


That won't work across window boundaries, since each window environment gets its own distinct version of Promise.
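
A quick illustration of the problem, assuming a same-origin iframe on the page (illustrative browser code):

    // Each realm (window, iframe, worker) gets its own Promise constructor.
    const frameWindow = document.querySelector('iframe').contentWindow
    const p = frameWindow.Promise.resolve(42)

    p instanceof Promise             // false – constructed in a different realm
    typeof p.then === 'function'     // true  – duck typing still recognizes it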


Then instanceof will break for all native objects. Who writes code that checks instances across window boundaries? That goes beyond the question of how to properly check an instance - it's bad architecture. The result? This post, and probably more subtle bugs surfacing along the way.



How does that make sense in any universe? Just because I have a function named "then" does not mean that my object is a promise. Maybe "then" is the name of a domain concept in my project, for instance a small DSL or something like that. Arghhhhhh!


Objects with a .then(...) method are treated as though they have Promise semantics by the language.

See Promise.resolve https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...


Consider it a special function like "__init__" in Python. I think this is one of the problems with duck typing, existence of a public function introduces possible name collision onto the whole codebase.


It's almost as if treating a dynamically typed interpreted language as if it had static types like C++ is fundamentally broken.


Does anyone know if it really needs the `!!obj &&` at the start? Isn't that redundant with checking that the type is either "object" or "function"?

And is there a reason to use `!!` inside a conditional? Wouldn't `obj &&` do basically the same thing?


Unfortunately, that's not redundant, because `typeof null` returns "object":

https://2ality.com/2013/10/typeof-null.html
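
In other words (quick illustration):

    typeof null        // "object" – the historical quirk
    const obj = null
    // Without the !!obj guard, the .then probe would throw a TypeError,
    // so the guard short-circuits first:
    !!obj && typeof obj.then === 'function'   // false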


    !!null && false   === false
    null && false     === null


Irony is there's a simpler way to do it, making the library even less necessary. `const isPromise = x => Promise.resolve(x) === x`
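
To illustrate what that one-liner does and doesn't catch (a quick sketch - note it only recognizes native promises in the current realm, and treats thenables from userland promise libraries as non-promises):

    const isPromise = x => Promise.resolve(x) === x

    isPromise(Promise.resolve(1))   // true  – already a native promise, returned as-is
    isPromise({ then() {} })        // false – a thenable gets wrapped in a new promise
    isPromise(2)                    // false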


It didn't break because of the source, it broke because of all the packaging/module bullshit around it. It seems the Javascript ec(h)osystem has firmly come down on the philosophy of "make modules very easy to use, but very difficult to make (correctly)".

The predictable explosion in dependency trees has caused the predictable problems like this one. I feel I much prefer the C/C++ way of "modules are easy to make, but difficult to use".


A nice one line argument for type safety.


[flagged]


[flagged]


It's not useful, interesting, or accurate to stratify people this way. You have no idea what someone's intelligence level or background is based on their usage of JS or C.

I loathe JS, but one of the best devs I know likes it. People's mileage varies.

For me personally, there's much more money in easy React products than making games. I'm not a great dev, but I'd be doing this same work even if I were.


I sincerely hope you're being sarcastic here.


Actually VHDL / Verilog programmers are on top of the food chain.


I don't know much about VHDL, but in my entirely unscientific experience, candidates claiming VHDL experience have tended to be weaker than those without.


And is that reflected by their compensation, or ... ?


They will eat you.


The “food chain?” What do you mean by that?


Wow just wow. So here's your new Promise object:

    class World { then () { return 0; } }
    isPromise(new World) // true

If there really isn't a safe and better way to tell if an object is an instance of Promise…then color me impressed.


There are custom Promise implementations (for reasons), such as bluebird.js. If you're supporting legacy browsers, there will be no standard Promise object. So the simplest way to check for the Promise contract is the code posted. But yes, in an ideal world, one would be able to just do `promise instanceof Promise`.


These cases should really be handled at the compilation/transpilation level, since there is one, and users should just write latest generation JavaScript without these concerns.


I mean, if you have to assume deliberately adversarial action on the part of your own codebase, you may have worse problems than having to duck-type promises.


A class with a `then` method isn't a rare thing that could only come up adversarially. Below I've linked two examples from the Rust standard library (I just chose that language because its documentation makes it easy to search for types that have a method named then). I think we can be sure that both booleans, and the "less than, equal to, or greater than" enum, are not in fact promises.

https://doc.rust-lang.org/std/primitive.bool.html#method.the...

https://doc.rust-lang.org/std/cmp/enum.Ordering.html#method....


Cypress.io has chainable .then functions but they are not await-able and the documentation clearly states they are not promises and cannot be treated as such. It’s a bad idea, but it is out there.


I haven't used Cypress, but looking at its docs, I don't know that I'd agree its use of "then" is all that bad. I agree they'd have done better to find a different name, but this at least seems like the least possible violation of least surprise if they are going to reuse the name.

At the same time, it is intensely wild to me that their "then" is an alternative to their "should", which apparently just re-executes the callback it's given until that callback stops throwing. If your tests require to be re-run an arbitrary and varying number of times in order to pass, you have problems that need to be dealt with in some better way than having your test harness paper over them automatically for you.


What do Rust idioms have to do with Javascript?


The language has nothing to do with it. The point is just that the name "then" is a perfectly common method name.

If a small standard library is using it for things that aren't promises, you can bet your ass that there are javascript libraries using it for things that aren't promises.

Like I said, I just chose to look at Rust first because its documentation has a good search bar.


The language has everything to do with it, because the language is the locus of practice. Rust practice is whatever it is, and is apparently pretty free with the use of "then" as a method name, which is fine. Javascript practice isn't the same as Rust practice, and Javascript practice includes a pretty strong norm around methods named "then".

That's why the next time I run into such a method, that doesn't belong to a promise and behave the way a promise's "then" method does, will be the first time I can remember, despite having worked primarily or exclusively in Javascript since well before promises even existed.

I'm sure there is an example somewhere on NPM of a wildcat "then", and that if you waste enough of your time you can find it. So what, though? People violate Rust idioms too from time to time, I'm sure. I doubt you'd argue that that calls Rust idioms themselves into question. Why does it do so with Javascript?


It doesn't call into question the idioms of either language. It does call into question the idea of programmatically deciding whether or not something is a promise based on the assumption that the idiom was followed.

People bounce around between languages, especially to javascript. An expert javascript dev might not call things "then" but the many dabblers might. Going back to the original point this is a footgun, not only an avenue for malicious code to cause trouble.


So your point ultimately is that duck-typing isn't ideal? I mean, I agree, but I'm not sure where this gets us that we weren't before.


I don't actually mind duck-typing.

My primary point is just that you are mistaken when claiming that this bug could only be surfaced by malicious code.

My secondary (somewhat implicit) point is that having an "is-promise" function is a mistake when there is no way to tell if something actually is or is not a promise. This library/function name is lying to the programmers using it about what it is actually capable of, and that's likely to create bugs.


I mind duck typing! That's why I'm so fond of Typescript, where everything that shows up where a promise should be is reliably either instanceof Promise, or instanceof something that implements Promise, or a compile-time error.

Absent that evolved level of tooling, and especially in an environment still dealing with the legacy of slow standardization and competing implementations that I mentioned in another comment, you're stuck with best effort no matter what. In the case of JS and promises, because of the norm I described earlier in this thread, best effort is easily good enough to be going on with. It's not ideal, but what in engineering practice ever is?


So, I mind poorly implemented duck typing, I also mildly mind dynamic typing, but in principle I think static duck typing could be not bad.

With javascript promises in particular, the duck typing suffers from this unfortunate fact that you can't easily check if something can be awaited-upon or not. I don't think I really care if something is a promise, so long as I can do everything I want to to it. So I view the issues here as this function over-claiming what it can do, the limitation on the typesystem preventing us from checking the await-ability of an object, and the lack of static type checking. None of those are necessitated by duck typing.

I disagree that you're stuck with this best-effort function. It's perfectly possible to architect the system so you never need to query whether or not an object is a promise. Given the lack of ability to accurately answer that question, it seems like the correct thing to do. At the very least I'd prefer if this function was called "looks-vaguely-like-a-promise" instead of "is-promise".


Now we're kind of just litigating how "is-promise" is used in CRA, or more accurately in whichever of CRA's nth-level dependencies uses it, because CRA's codebase itself never mentions it.

I don't care enough to go dig that out on a Saturday afternoon, but I suspect that if I did, we'd end up agreeing that whoever is using it could, by dint of sufficient effort, have found a better way.

On the other hand, this appears to be the first time it's been a significant problem, and that only for the space of a few hours, none of which were business hours. That's a chance I'd be willing to take - did take, I suppose, in the sense that my team's primary product is built on CRA - because I'm an engineer, not a scientist, and my remit is thus to produce not something that's theoretically correct in all circumstances, but instead something that's exactly as solid as it has to be to get the job done, and no more. Not that this isn't, in the Javascript world as in any other, sometimes much akin to JWZ's "trying to make a bookshelf out of mashed potatoes". But hey, you know what? If the client only asks for a bookshelf that lasts for a minute, and the mashed potatoes are good enough for that, then I'll break open a box of Idaho™ Brand I Can't Believe It's Not Real Promises and get to work.

I grant this is not a situation that everyone finds satisfactory, nor should they; the untrammeled desire for perfection, given sufficient capacity on the part of its possessor and sufficient scope for them to execute on their visions, is exactly what produces tools like Typescript, that make it easier for workaday engineers like yours truly to more closely approach perfection, within budget, than we otherwise could. There's value in that. But there's value in "good enough", too.



This is a promise as far as the language is concerned (and the `is-promise` package uses the same definition as the language) - it's sufficient for a value to be an object and to have a `then` property that is callable. For instance, in the following example, the `then` method is being called.

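    // Returning a thenable from an async function makes the promise
    // machinery call its then() method, so "Called" gets logged: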
    (async () => ({
        then() {
            console.log("Called")
        }
    }))()


Or just:

    const p = {then: () => 0}


Well, an object with a then() method is a promise.

    Promise.resolve({then: () => console.log('called')})

Promises autoflatten since you can't have Promise<Promise<T>>, so you'll see that this code prints 'called'.


According to this library, maybe. According to the specification, no: https://promisesaplus.com/


Your code is indeed an example of a promise. A stupid one, since it never “resolves”, but it’s a promise.


I am one of the maintainers of a popular Node-based CLI (the firebase CLI). This type of thing has happened to us before.

I think the real evil here is that by default npm does not encourage pinned dependency versions.

If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).

In other stacks having your dependency versions float around is considered bad practice. If I want to go from depending on 1.2.1 to 1.2.2 there should be a commit in my history showing when I did it and that my CI still passed.

I think we miss the forest for the trees when we get mad about Node devs taking small dependencies. If they had pinned their version it would have been fine.
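
For what it's worth, npm can also be told to record exact versions: `npm install --save-exact <pkg>` (or setting the `save-exact` config option to true) writes a pinned entry instead of a caret range. An illustrative package.json excerpt showing the difference:

    {
      "dependencies": {
        "is-promise": "^2.1.0",
        "left-pad": "1.3.0"
      }
    }

On a fresh install, the first entry floats to any 2.x at or above 2.1.0, while the second never drifts.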


That’s still the fault of the package developer. “^1.2.1” means “any version with a public API compatible with 1.2.1”, or in other words “only minor and patch updates”.

The whole point of semantic versioning is to guarantee breaking changes are expressed through major versions. If you break your package’s compatibility and bump the version to 1.2.1 instead of 2.0.0 then people absolutely should be upset.
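
Concretely, the common npm range specifiers and what they allow (for versions at or above 1.0.0; caret behaves differently for 0.x):

    ^1.2.1   →  >=1.2.1 <2.0.0   (minor and patch updates)
    ~1.2.1   →  >=1.2.1 <1.3.0   (patch updates only)
    1.2.1    →  exactly 1.2.1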


Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds. The package lock files are themselves dependent on which package restore tool you are using (yarn vs npm vs ...). It’s also much too ambitious to believe that all packages in an ecosystem will properly implement semver. There may even be times when a change doesn’t appear to be breaking to the maintainer but is in actuality. For example, suppose a UI library has a CSS class called card-invalid-data and wanted to rename it to card-data-invalid. This is an internal change since it is their own CSS, but it could break a library that overrode this style or depended on this class. I would consider this a minor version, but it could still cause a regression for someone.


> Allowing any version drift of dependencies at all means that if you don’t check in and restore using the package lock file, you cannot have reproducible builds.

This is the germane point in this incident.

The parent comment mentions that SemVer "guarantee[s] breaking changes are expressed through major versions". This is a common misperception about SemVer. That "guarantee" is purely hypothetical and doesn't apply to the real world where humans make mistakes.

The OP `is-promise` issue is an example of the real world intruding on this guarantee. The maintainers clearly didn't intend to break things, but they did, because everybody makes mistakes.

Which points to the actual value proposition of SemVer: by obeying these rules, consumers of your package will know your _intention_ with a particular changeset. If the actual behavior of that changeset deviates from the SemVer guidelines (e.g. breaking behavior in a patch bump), then it's a bug and should be fixed accordingly.

Back to the parent's point about locking dependency version— I would add that you should also store a copy of your dependencies in a safe location that you control (aka vendoring) if anything serious depends upon your application being continually up and running.


I think you might be misunderstanding the above comment. The default behavior of `npm i <package>` is to add `"<package>": "^1.2.1"` _not_ `"<package>": "1.2.1"`. The point the commenter was trying to make is that the tool itself has a bad default which makes it easy to make mistakes. I would go so far as to argue that `npm i` does not have the behavior a user would expect from a package manager in that regard.


And likewise, I think the point of that above comment is that such a change in default behavior wouldn't be necessary if package authors actually obeyed semantic versioning.

That is: "^1.2.1" shouldn't be a bad default relative to "1.2.1"; you generally want to be able to pull in non-breaking security updates automatically, for what I hope are obvious reasons, and if that goes sideways then the blame should be entirely on the package maintainer for violating version semantics, not on the package/dependency manager for obeying version semantics.

I don't have much of an opinion on this for Node.js, but the Ruby and Elixir ecosystems (among those of many, many other languages which I've used in recent years) have similar conventions, and I don't seem to recall nearly as many cases of widely-used packages blatantly ignoring semantic versioning. Then again, most typically require the programmer to be explicit about whether or not to allow sub-minor automatic version updates for a given dependency, last I checked (e.g. you edit a configuration file and use the build tool to pull the dependencies specified in that file, as opposed to the build tool itself updating that file like npm apparently does).


> If I npm install is-promise I'll get something like "^1.2.1" in my package.json not the exact "1.2.1". This means that the next time someone installs my CLI I don't know exactly what code they're getting (unless I shrinkwrap which is uncommon).

Yes, this is by design. If this weren't the case, the ecosystem would be an absolute minefield of non-updated transitive dependencies with unpatched security issues.


Probably off topic, but just want to say cargo does the same things on the Rust side, and it has been annoying me to hell.

And it's even worse in cargo, because specifying "1.2.1" means the same thing as "^1.2.1".


I feel the real issue here is downstream package consumers not practicing proper dependency pinning. You can blame the Node ecosystem, the maintainer of the package, etc. but there are well-known solutions to prevent this kind of situation.


This wasn't a big problem due to a package being suddenly upgraded in existing code. It's because a scaffolding tool (Create React App) used to set up new projects would set those projects up with the latest (presumably patch, maybe minor) version of the dependencies. In other words, because those projects did not exist yet, there was nothing to pin.

Unless you mean Create React App should pin all of their (transitive) dependencies and release new versions multiple times a day with one of those dependencies updated.


So you would exchange security for stability: if you use package pinning then you will end up with fossilized packages in your product, which will have all manner of security issues that have already been fixed.


You can always use something like dependabot, which should help you quickly upgrade versions and also protect you from breaking your build.


If a package doesn't provide a stable branch that will receive security updates then it's not mature enough to be used anyway. That's the sensible middle ground between bleeding edge and security, unfortunately most packages/projects aren't mature enough to provide this.

There's a reason companies stick with old COBOL solutions, modern alternatives simply aren't stable enough.


I get notifications to update my Rails apps from GitHub as a matter of course when there's a CVE in my dependencies. Does this kind of thing not exist/is impractical for JS?


From my experience of getting ~30 of those notifications per week for a handful of JS repos, I can very much assure you that it does exist.


As a fellow commenter said, you would ideally use something like dependabot or greenkeeper/snyk.


I think these one-line packages aren't the right way to go. Either JS developers should skip the package system in that case and just copy and paste those functions into their own projects, or there should be more commonly used packages that bundle these one-liners. I mean, is_promise() and left_pad() are not worth their own packages. Dependency trees of 10,000 packages for trivial programs are just insane.

Is someone going to fix that?


>Is someone going to fix that?

Probably not. There is too much code in the wild, and NPM owns the entire JS ecosystem, and there has been too much investment in that ecosystem and its culture at this point for a change in course to be feasible.

The JS universe is stuck with this for the foreseeable future.


It's just a cultural problem. There's no reason why a library should abstract away `typeof obj.then === 'function'` if they want to check if something is a promise. Just write a one-liner the same way you don't pull in a `is-greater-than-zero` lib to check x>0.

The problem is when you try to level criticism at this culture and a loud chorus of people will show up to assert that somehow tiny deps are good despite these glaring issues (a big one just being security vulns). And funnily enough, the usual suspects are precisely the people publishing these one-liner libs. Then people regurgitate these thoughts and the cargo cult continues.

So there's no "fix" for NPM (not even sure what that would mean). I mean, anyone can publish anything. People just have to decide to stop using one-liner libs just because they exist.


Does it need much to change? I didn't mean to fix NPM. The problem is the non-existent standard library. Just create one that everybody will use, and everybody could cut their dependencies by thousands.


Several of these already exist, like lodash and underscore (which is a subset of lodash). After the rapid improvements on both the browser and node sides of the last couple of years (which filled in many of the blanks in this hypothetical "standard library"), they are less necessary than they may have been before. Also they can become something of a crutch. Fixing a bug a couple of days ago, I realized that an argument of Object.assign() needed to be deep-copied. Rather than adding a dependency for lodash or underscore or even some more limited-purpose deepcopy package, I just figured out which member of the object needed to be copied and did so explicitly. Done.

Another good way to not have to depend on big/tiny/weird modules published by others is to use coffeescript. So much finicky logic and array-handling just goes away.


Not everyone would use it, that's my point. The inertia behind the existing system is too great, especially in enterprise. All that would happen is that library would become just another Node package, and then you've got the "n+1 standards" problem.

The "nonexistent standard library" wasn't a problem in the days when javascript development meant getting JQuery and some plugins, or some similar library. It only became a problem after the ecosystem got taken over by a set of programming paradigms that make no sense for the language.

Yes, in my mind you'd have to change everything from the ground up, starting with no longer using javascript outside of the browser.


> Not everyone would use it

If the right people would provide the library, it would be used by enough people.

> Yes, in my mind you'd have to change everything from the ground up, starting with no longer using javascript outside of the browser

Whats the point of inside or outside of the browser?


The point is that different languages are best suited to different tasks. Javascript is a simple, very loosely typed scripting language with prototypal inheritance that was developed to be run in the browser. It's a DSL, not a general purpose programming language. Using it elsewhere for applications where another language with stronger and more expressive types would be more appropriate requires hacks like compiling it from another (safer, more strongly typed) language like Typescript, which still results in code that can be fragile because it only simulates (to the degree that a JS interpreter allows) features that the language doesn't actually support.

See the attempt to "detect if something is a Promise" as an example - the function definition for the package makes it appear as if you're actually checking a type, but that's not what the package does.

Most of the unnecessary complexity in modern JS, as I see it, comes from the desire to have it act and behave like a language that it simply isn't.


> It's a DSL, not a general purpose programming language

Sorry, but I fear that ship has sailed ;-)

And I've heard JS was developed by someone who wanted to give us Scheme (you can't go more general purpose than that) but had to resort to a more "friendly" java-syntax. IMHO javascript would be a great general purpose language if the ecosystem wouldn't be such a mess.


>Sorry, but I fear that ship has sailed ;-)

I know, I know. If anyone needs me I'll be in the angry dome.


Isn't this what "utility libraries" like lodash and jQuery (each for their respective domains) are for?


I see a lot of criticism of one-line packages, but IMO in the end what matters is the abstraction.

Thinking of the package as a black box, if the implementation for left-pad or is-promise was 200 lines, would it suddenly be OK for so many other packages to depend on it? Why? The size of the package doesn't make it less bug-prone.

I see plenty of people who are over-eager to always be up-to-date, when there really isn't any point to it if your system works well, and so they don't pin their versions. This will break big applications when one-line packages break, but also when 5000-line packages break. Dependencies are part of your source; don't change them for the sake of changing them, and don't change them without reviewing them.


> The size of the package doesn't make it less bug-prone.

Of course it does. It's more bug-prone just by being a package. More code is more bugs and more build-system annoyance is more terror (=> more bugs). If I only need one line of functionality I will just copy and paste that line into my project instead of dealing with npm or github.

> Dependencies are part of your source

I agree. If you see news about broken packages like this and you don't just shrug your shoulders your build-system might be shit.


It would be more ok if left-pad was part of a package called, say, text-utils which also included right-pad, etc. Same with is-promise, it sounds like it should be a function in a package called type-checker.


Weinberg's Law: If Builders Built Buildings the Way Programmers Wrote Programs, Then the First Woodpecker That Came Along Would Destroy Civilization.


Why are So Many of the Words in This Comment Capitalised? Is it a Title of Something?


That's the soft in software.


This sounds very clever, but the nature of software development is quite different from constructing buildings. The rate of innovation is orders of magnitude higher. And, as opposed to buildings, software can tolerate a certain amount of failure.


"Pfft. Chickens don't even know what a road is!"


Everyone crying about this on the Internet would do better to just take it as an easy lesson: pin your dependency versions for projects running in production.

This was an honest oversight, and even somewhat inevitable with so many import/export formats expected to be supported - CJS, MJS, AMD, UMD, etc. It will happen again.

And when it happens the next time, if it ruins your life again, take issue with yourself for not pinning your dependency versions, rather than with the package maintainers trying to make it all happen.


And everyone who depends on projects that pin their dependency versions gets to be victims of security exploits long after they are fixed.

Dependency management is not as simple as you seem to think.


The "magical security updates" theory has never worked. Breaking insufficiently-pinned dependencies are vastly more common than unnoticed fixes on patch releases. On balance, semver has been good for javascript, but to the extent it contributed to the popularization of this dumb theory it has been bad. Production apps (and by a transitive relation, one supposes, library modules) should be zealously pinned to the fewest possible dependencies, and those dependencies should be regularly monitored for updates. When those updates occur, tests can be run before updating the pins.


Yes -- pinning dependency versions does not have to be at odds with security.

In fact, how secure is it, really, to keep dependencies unpinned and welcome literally /any/ random upstream code into your project, unchecked? This is yet more irresponsible than letting dependencies age.

But even then, it's not as if you have to choose -- you can pin, then vet upstream updates when they come, and pin again.


Right, making significant changes to the entry points of a library should be marked as a breaking change and bump the major version.


Well, I guess you can choose whichever poison you like.

Pinning isn't meant to be a forever type of commitment. You're just saying, "all works as expected with this particular permutation of library code underneath." And the moment your dependencies release their hot new versions, you can retest and re-pin. Otherwise you're flying blind, and this type of issue will arise without fail.


The unspoken assumption is that you don't just pin and move on with your life. You take as much ownership over your package.json as you do with your own code, and know that you must actively review and upgrade as necessary (as opposed to just running "npm install" and trusting in the wisdom of the cloud)


And everyone who just upgrades whenever possible gets to be victims of security exploits too.

Users of Debian Stable missed Heartbleed entirely. It simply never impacted them.


I’m working on a thing I’m calling DriftWatch that attempts to track, objectively, how far out of date you are on dependencies, which I call dependency drift. I’ve posted about it here before [1]. I’m using it in my consulting practice to show clients the importance of keeping up to date and it’s working well.

I agree with the parent that it’s important to lock to avoid surprises (in Ruby, we commit the Gemfile.lock for this reason), but it’s equally as important to stay up to date.

1. https://nimbleindustries.io/2020/01/31/dependency-drift-a-me...


There are commercial tools like Black Duck and Sonatype Nexus which are used to scan dependencies of not just Node code, and highlight out-of-date packages, known vulnerabilities, and license problems.


Tools like Safety can help in the python world, https://pypi.org/project/safety/, and cargo-audit https://github.com/rustsec/cargo-audit in the rust world. Stick them in your build chain and get alerted to dependencies with known exploits, so you can revisit and bump your dependency versions, or decide that that project is not worth using if they can't be bothered to consider security to be as important a feature as it is.


What you do is you pin dependencies, then automate regular dependency upgrade PRs. If your test suite and your CI/CD pipeline is reliable, this should be an easy addition.


We run Dependabot in our CI pipeline to flag security upgrades, and then action them. I'd much rather have that manual intervention than non-deterministic builds.


There are tools out there (like npm audit) that can alert you to known vulnerabilities.


Exactly: pin dependencies to avoid surprises, and use a CI to test compatibility of new versions, so you can deploy security updates on your own schedule, best of both worlds.

Github even bought Dependabot last year, so it's now free.


> pin your dependency versions for projects running in production

Works for existing apps, but people using create-react-app and angular CLI can't even start a new project.


Nah, create-react-app and others could easily pin dependencies of libraries they install in your new project to known-good versions.

Without doing that bit of diligence, this type of issue should be 100% expected.


Then you can’t upgrade anything unless create-react-app releases a new version (or you eject), which, in addition to the obvious release cadence problem, might introduce other compatibility problems.


By doing that they would avoid this issue, for sure. They would also introduce security issues by using old versions.

And this would do nothing for the fact that `npm install eslint && ./node_modules/.bin/eslint` was also failing.


Pinning dependencies might introduce security issues.

Not pinning dependencies is a security issue.


It's not like pinning means you can /never/ update. You just get to do it on your own schedule.

You can even automate updating to some degree -- running your tests against the latest everything and then locking in to those versions if all goes well.


Again, this only works for project skeletons, and not for any other package that happened to have a transitive dependency on `is-promise` (which is a lot more than project skeletons).


I don't know much about those projects, but why did this break them? Are they not pinning versions?


Because they are starting a new project from scratch and would have nothing to pin their dependencies against?


Maybe I'm misunderstanding how those projects work. From what I recall, they generate a project, including the package.json. So I'm not sure why they couldn't just generate the package.json with pinned versions?

I don't write much JS, and have only used create-react-app just a few times, so feel free to explain why this isn't possible.


package.json only lists top-level dependencies. package-lock.json tracks all dependencies, and dependencies of dependencies. is-promise is one of those dependencies of a dependency, which you don't have much control over.


How would a top level dependency change versions if it bumped a transitive dependency? Is that a thing in js-land?


How could a dependency-of-dependency change version if one of the direct dependencies doesn't change version? I guess, if the direct dependency isn't pinning that version? Another case of, everyone should be pinning dependencies.


Exactly, node's conventions are to allow a range of versions (semver compatible). True, if all dependencies were pinned, this wouldn't come up as often.

That also means that there would be a lot more updating when security issues are found.


I'm a novice in this area but if your project relies on a bunch of external node packages why wouldn't you download them all and host them locally or add them to version control?


Adding them to your own version control is a nightmare: Your own work will drown in all the changes in your dependencies. The repository will quickly grow to gigabytes, and any operation that would usually take seconds will take minutes.

It's also just not needed. Simply specifying an exact version ("=2.5.2") will avoid this problem. The code for a version specified in this manner does not change.


Yes, putting your dependencies in version control alongside your project is no fun. Commit history is muddied, but also if your production boxes are running on a different platform or architecture than where you and your team develop, that can make a big mess too.

That said, with a big enough team and risk-averse organisation, it can be a brilliant idea to put your dependencies in /separate/ version control and have your build process interact that way.

In that scenario, even if your dependencies vanish from the Internet (as happened with left-pad), you are still sitting pretty. You can also see exactly what changed when, in hunting for causes of regressions etc.


To me an even bigger nightmare is your entire project or product depending on some external resource you don't control.


Checking them into your repo is called “vendoring” and it’s one way of solving the problem, yes. Personally, it’s my favorite approach. But it does have some challenges, as other commenters point out.


You'd use a proxy, yes.


> pin your dependency versions

And then to see "npm detected 97393 problems" or whatever the message exactly is.


You don't need to pin them forevermore -- just when you don't want everything to break unexpectedly :).

When you want to upgrade your dependencies, then go ahead and do that, on your own schedule, with time and space to fix whatever issues come up, update your tests, QA, etc.


That’s good: it’s easy to update and it means you do it in a controlled manner rather than the next time something deploys.


Can someone help me understand why a library like this is even necessary? Can't you just wrap everything and treat it like a promise?

    const aPromise = Promise.resolve(1);
    const notAPromise = 2;

    Promise.resolve(aPromise).then((x) => console.log(x));
    Promise.resolve(notAPromise).then((y) => console.log(y));

    // Logs:
    // 1
    // 2


This is not a library. Stop thinking of it as a library. It's a building block, a module.

> why a library like this is even necessary?

Do you know how to determine whether something is a Promise?

Wrong. Also the first few StackOverflow answers are wrong or incomplete.

You know what's better? Using the same library 3.4 million repos depend on, that is tested and won't break if you use a package-lock.

> Can't you just wrap everything and treat it like a promise?

Maybe. Maybe not. Treating everything as a Promise means you have to make your function asynchronous even if not necessary.


Somewhat convoluted, but if you wrap in a promise like this then you make it async (similar to setTimeout(fn, 0)), so in some situations you might want to keep the non-promise code as non-promise:

https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

(red is async, blue is sync)
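
A tiny illustration of why wrapping forces the async ("red") path even for plain values:

    console.log('before')
    Promise.resolve(42).then(v => console.log('then:', v))
    console.log('after')

    // Output order: "before", "after", "then: 42" –
    // the .then callback always runs in a later microtask.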


For compatibility with old versions of JS without Promise, when libraries used thenables or a promise library.


Dependencies in almost any software system are fundamentally built on trust.

You trust that minor version upgrades won't break the system, or that malicious code won't be introduced. But we're human... things break.

This can happen in any ecosystem, but npm is particularly vulnerable because of its huge dependency trees. Which is only possible due to the low overhead of creating, including and resolving packages.

That's why npm has the "package-lock" file, which takes a snapshot of the entire dependency tree, allowing a truly reproducible build. Not using this is a risk.
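
For illustration, a truncated, hypothetical package-lock.json entry - the resolved URL and integrity hash are what make an install reproducible:

    "is-promise": {
      "version": "2.1.0",
      "resolved": "https://registry.npmjs.org/is-promise/-/is-promise-2.1.0.tgz",
      "integrity": "sha512-…"
    }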


Call me crazy, but... I don't add things to my projects without looking at the source. Mostly because it saves me from shit like this. If I see something is small enough, and easy enough to reason about, I'll just copy-pasta that motherfucker with a comment citing the source and date it was pasta'd (license permitting).

Things like this are so not worth a package, ever. It's something where, when you see it, you go "oh yeah, that's the obvious, easy way of doing this" - it's not a package, it's a pattern. I can promise you, this was only ever added to packages because people wrongly assumed that since it's about "promises" (spooooky) it must be complex and worthy of packaging.

As someone who doesn't do front-end work regularly, but also sank about 3 consecutive weeks (~6-8 hours/day) in the last year into understanding generators, yielding, and promises... I can tell you, the actually scary part about all of this, is pretty much no one just reads the fucking docs or the code they're adding.

Moral of the story, especially in the browser: the reward of reading the code before adding it is enormous, you'd be surprised how often the thing you want is just a simple pattern. Taking that pattern and applying it to your specific use case, instead of imposing that pattern on your use case will give you giant wins.... Learn the patterns and you're set for life.


> Call me crazy, but... I don't add things to my projects without looking at the source.

This is manageable when you are using Packagist, this is manageable when you are using Maven, where all dependencies are flat. When compatibility issues arise they have to be dealt with upstream.

This is NOT manageable when you are using NPM, which will go fetch 30 different versions of the same package because of its crazy dependency resolution.

This is not a JS issue like people claim here, this is 100% a NPM issue because whoever designed this was too busy being patronizing on Twitter rather than making sensible design decisions.


Unless the packages you're adding are trivial, I seriously doubt you're looking that close. Are you really going to code review 20,000 lines of someone else's code every time you're adding something? 100,000? Also their dependencies? Those are very reasonable numbers, by the way.


I do review a list of dependencies when I investigate which node package to use. For those for which I can choose, that is.

Sometimes I also take a look at the code.

And I've chosen the one with less dependencies often enough.


I said I look at them, I didn't say I inspect every single line of them. My point, which you've missed, is that simply looking at the code before you add it (spend even a couple of minutes) saves a lot of problems (like the one in create-react-app).

FWIW, I also won't add something to my project if I see it has a ton of dependencies on stupid shit. Literally, I gave up on react after realizing `create-react-app` is what the community recommends. I'm glad I did too. It's an insane amount of bloat, for nothing included but a view renderer, and if that's how that community rolls... I'm gonna have to pass.


If you don't read the source, how can you claim such moral superiority? Whatever security issues, nefarious code, etc., are almost assuredly hidden down in the weeds where you're not looking. You think other programmers don't glance at the structure? Of course they do.


I wish I had as much time as you. Looking through the source code of the millions and millions of lines of code that get packaged with any modern application before you add things to your projects sounds daunting!


Do you also copy/paste the tests? https://github.com/then/is-promise/blob/master/test.js . Just curious, sometimes it's really not just about the code but that it's continuously tested as well.


When would the actual function body need to change in this case? These micropackages almost never break because of the actual implementation, they break because they are unpublished (like left-pad) or because they change how they are exported (like in this case).

If the code is in your codebase it does not need a test.


create-react-app contains over 1000 packages. How long would it take to review all of those?


This sort of cavalier attitude where 1000 dependencies are yawn-inducing in their commonality is why I feel vindicated in never having wasted my time with this kind of ecosystem. Eventually the house of cards will come down. Let's all pray that it happens sooner rather than later.


When WebAssembly gets direct DOM access we'll finally have options and we'll no longer have to tolerate JavaScript. I expect the JS community will settle down and get somewhat saner after that, too.


I suspect that over time there will be more pushback against dependencies, as I've seen in other communities, viewing dependencies as liabilities carefully chosen.


Too long. Therefore create-react-app is not usable for anything other than toy or hobby projects.


This.

The problem isn't reading 1000+ dependencies, the problem is the 1000+ dependencies... There's no way that setting up a view renderer, in the context of a webpage, requires 1000+ dependencies. I honestly did this exact thing with `create-react-app` and it's one of the reasons why I don't use/choose React. Too much bloat for no batteries included.


this doesn't make any sense. cra is webpack, but in a way that doesn't blow up every week. you can use react without bundlers, but what is the point. you'll be sitting there, without a dev env, no hot reload, no module resolution, minification, types, jsx, babel, ... any single one of these will get you 1000+ packages on the dev side. none of this is going into the published build of course.


You're right there's no way. CRA isn't React. You can still use React by just adding react and react-dom to a page via two script tags.


There are alternatives to create-react-app.

In fact, there are alternatives to React.


Of course - this issue isn't unique to create-react-app or React, most Javascript frameworks have a similar explosion of dependencies.

A quick check of angular-cli and vue-cli shows that an empty project uses 870 and 816 dependencies respectively.


Have fun telling that to your boss


Plus, pick the consensus option, any problems that come up are that option's fault. Fight for anything else, and everything's your fault. Even if it is actually better it can make you, personally, look worse.


"Nobody ever got fired for buying IBM"


I rarely justify technical decisions to my boss, and when I do, they smile and nod.


I work in a similar environment, but I think it's fair to realize many do not.


Same. My boss doesn't micromanage technical decisions.

Wouldn't work there otherwise.


Or the 25 people you work with across 4-8 different teams that finally settled on something that can allow people of different teams to move around without a lot of anguish.


Note that in this case the dependencies also include compilers (for several languages, because front-end projects use multiple languages, have to be compatible with varying levels of support for those languages, and there's some leeway for the consumer of the scaffolding tool as well to choose which toolchain they use - but the other options get installed as well). Do you also review the code of your compilers and runtimes? Test frameworks? Static analysers?


The bad code is almost never in the surface level library, it's in one of the buried dependencies.


tbh, it's something that should be included in a standard library (not a third party package or dependency)


Or just copy and paste it into your project. Using a dependency for one line is beyond ridiculous.


There is something aggravating about the first comment in an issue like this posted minutes after the issue was created to say “is this fixed yet?”


The worst part is that it was posted 10 minutes later... I understand bumping a year-old issue, but c'mon, give it a few minutes at least.


It was posted after "chore: fix is-promise"; they may not have realized that it's just a random pingback that GitHub shows there, and not a message that somebody has committed a fix.


It's downloaded 11 million times a week. This touches a good majority of the Node ecosystem, so there's going to be quite a lot that doesn't work until this is remedied. And I'm not sure that package-lock.json is going to save folks here because it was a minor version update.

https://www.npmjs.com/package/is-promise


package-locks lock down exact versions. Otherwise there's no point to them.


Doesn't the lock file lock exact versions?


Is it idiomatic in the JS world to always express dependencies as "version X.Y or higher", vs. exactly "version X.Y"? Most of my experience is from the Java/Maven world, where you're playing with fire if you don't just make it "X.Y".


There are a lot of idioms. A very common one, I think the current default, is to pin only the major version in the dependency list, and also to lock exact versions in an installer-generated lockfile following a successful install. If you find a locked version breaks your code, you adjust your dependency list, nuke the lockfile, and let a reinstall build it again.

The idea is that pinning major versions lets you get non-breaking improvements from package authors who use semver properly, and pinning exact known-good versions lets you avoid surprises in your CI builds.

It works pretty well when you start from a known good state and vet your dependencies reasonably well. The trouble here seems to be largely that CRA is designed, among other purposes, to serve people just getting into the ecosystem of which it's a part, and those people are unlikely to be familiar enough with the details I've described to be able to effectively respond.

The comparison with left-pad is easy, but this isn't at all on the same scale. It's a bad day for newbies and a minor annoyance for experienced hands. And, of course, cause for endless spicy takes about how Javascript is awful, but such things are as inevitable as the sunrise and merit about the same level of interest.


It's less about JS and more about semantic versioning (semver). So you're supposed to be able to expect that the API interface of the library is not changing on the second or third version number, only on the first one, in this format: MAJOR.MINOR.PATCH

But as we're still doing human versioning one way or another in package management, there will always be cases where a package doesn't perfectly follow its versioning scheme or otherwise behaves unexpectedly because of a change. It's almost like we need new ways of programming where the constructs and behavior of the program/library are built up via content-addressing, so you can version it down to its exact content.


The "idiomatic way" is to use a package-lock.json, which keeps the dependencies (and transitive dependencies) at the exact version specified unless you decide to upgrade them.


Does anyone know if there's a way to upgrade a dependency of a dependency of a dependency of a dependency of a dependency in my yarn.lock without actually editing the yarn.lock by hand, and without waiting for five packages to update their dependencies, especially if they're locked or specified by even one of the five via semver rules?

For example:

Running `yarn why is-promise` in a CRA app:

`Hoisted from "react-scripts#react-dev-utils#inquirer#run-async#is-promise"`

Currently, running a `yarn upgrade-interactive --latest` doesn't indicate there are any updates, so presumably, this is still a problem upstream.

Also, if anyone's in a pinch right now, luckily enough, I made this yesterday, for an interview I had only a couple hours ago. I lucked out! But if anyone else might need it, maybe it'll help someone:

https://github.com/cryptoquick/demo-cra-ts

Oh, and, uh, pardon the pun... :/


Per other comments in the thread, this is the primary use case for Yarn's "resolutions" feature:

https://classic.yarnpkg.com/en/docs/selective-version-resolu...
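
For the case above, that would look something like this in package.json (the `**/` glob forces the version for every occurrence in the tree, however deeply it's nested):

    {
      "resolutions": {
        "**/is-promise": "2.2.1"
      }
    }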


Thanks! Tried, it seems to work! Repo updated, too. This seems to be what it does, for those curious: https://github.com/cryptoquick/demo-cra-ts/commit/5c84aa48e9...


One reason why we may have one-line packages is the demand FAANG places on applicants. On three job screens from three different companies, I was asked, "Which npm packages have you created that we'd know about?"

Many who are hell-bent on entering these companies, yet have no known packages under their belt, might very well fire off a one-line package that actually gets some downloads, to be better "prepared" when screened.


A package just for:

    return !!obj && (typeof obj === 'object' || typeof obj === 'function') && typeof obj.then === 'function';

https://github.com/then/is-promise/blob/master/index.js

This is insane


Hmm, I had some problems with a nested `is-object` dependency when updating some deps last week, I wonder if it’s related.


create-react-app was broken a couple of weeks ago when I tried to use the Typescript template. Some dependency in Jest had been changed to require a version of Typescript that had only been out for a few weeks, breaking everything (including create-react-app) that hadn't updated to the latest tsc. What an ecosystem.


Someone somewhere is going to organize a series of JS packages with the continuity of a standard library, full of verification and tests. Sane versioning, consistent interfaces and so on. The npm ecosystem isn't bad, it's just unwieldy in its success.


Probably broke eslint too, since `is-promise` is a sub-dependency of that as well.

This is much less of an issue when using a lockfile, at least for existing packages/projects.


There should be a Promise.isThenable or Promise.is. Strictly speaking this library checks if a value is a thenable.

With optional chaining I would however use this check:

    typeof x?.then === 'function'

Or if I was code golfing:

    x?.then?.call

The first case does not account for built-in prototype extensions and the second has false positives with certain data structures.

So the function in is-promise should be available as Promise.isThenable or Promise.is.
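
A hypothetical sketch of what that could look like as a plain helper (the `Promise.isThenable` name is just the proposal above, not an existing API):

    // Hypothetical helper – not part of the language today.
    const isThenable = x =>
      !!x && (typeof x === 'object' || typeof x === 'function') &&
      typeof x.then === 'function'

    isThenable(Promise.resolve(1))   // true
    isThenable({ then() {} })        // true  – a thenable, though not a native Promise
    isThenable('then')               // false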


There's a neat tool called crater (https://github.com/rust-lang/crater) for Rust. It can run an experiment across every Rust package (or every popular one), so you can e.g see if some theoretically breaking compiler change actually hits anyone. Something like that could be interesting for node packages as well.


This is a prime example of where I think GitHub's acquisition of Dependabot and npm could really pay off. Imagine being able to publish a prerelease version of your library and run the CI tests of your consumers, all from within the GitHub interface. Dependabot already tracks compatibility between versions, so this would be a natural extension of that.


The broken package just released a fix: https://github.com/then/is-promise/releases/tag/2.2.1


This seems like an issue with semver. Its idealism is not compatible with actual human behavior.

The package devs clearly violated semver guidelines and npm puts a lot of faith in individual packages to take semver seriously. By default it opts every user into semver.

If you need semver to be explained to you bottom up (lists of 42 things that require a major bump) then you don't get semver. All you have to do is think: will releasing this into a world full of "^1.0.0" break everyone's shit?

This and left-pad are extreme examples. But any maintainer with a package.json who tries to do right by `npm audit` knows that there is an endless parade of suffering at the hands of semver misuse. Most of it doesn't make the news.


I had an issue while running any vue-cli command, and I even created an issue for it thinking it might be a bug in Vue CLI v4.3.1. But the truth has shown itself!


Everywhere around the world, users of Maven Central facepalm.


Looks like it broke @angular/cli too.


Who does this affect? I just did

    npx @angular/cli new hello-world-project
and that worked. I have remote Angular training on Monday and didn't want to do a global install.


It's already been patched with version 2.2.1.


Anyone know of a way that library maintainers can automatically test if changes like this will break consumers?


Now I understand the reasoning behind the Golang suggestion to commit the /vendor directory.


The fact that this has broken serverless for people is reinforcing my priors in a big way.


Meanwhile, I can regularly override even major release versions of dependencies in Elixir without breaking changes. Dependency fickleness has always been a huge issue for me when working with Node.


I hope that more packaging systems take the go modules approach and cryptographically and immutably identify their dependencies at time of addition to the project. This sort of breakage shouldn’t be possible.


This kind of breakage is perfectly possible in Go also - though the most common equivalent of "left-pad broke my project" for many Go developers is "X changed the case of their GitHub username and now all my import paths are broken".


I'm sorry if you were unaware, but they absolutely do that, and were doing that long before Go did.


I am aware of package lockfiles.

If deps are immutable, then nothing anyone does in any other package (short of having the package repository take the code down) should be able to break your future builds.

If that were true, TFA would not be news.


> If deps are immutable, then nothing anyone does in any other package (short of having the package repository take the code down) should be able to break your future builds.

They are. You're only affected if you don't use a package-lock.json or start a new project (which will pull the latest versions of the dependencies).


I'm not a Node expert, but I believe the problem is that most people auto-update their Node dependencies (I know I do, though only rarely, since I don't primarily use Node), because there are so often minor security regressions that need to be fixed.


Maybe it should be easier to do npm install latest-1


As always: vendor your dependencies.


Good luck vendoring node_modules.

Why are these threads filled with people who know nothing about node?

npm and yarn both have lockfiles for this purpose. Vendoring only bloats your repos.


> Why are these threads filled with people who know nothing about node?

That's quite a bad assumption on your part based on almost no information.

I don't know about the rest of the thread, but I'm personally quite familiar with node. A lock file doesn't fix the same issues vendoring does. The lock file gives you an explicit list of the versions used; vendoring saves exact copies of the dependencies alongside the rest of your code.

By vendoring, anyone working on the project uses the exact same version of a dependency, AND you don't have to care about an external provider (the registry being up, etc.; that's way easier for your CI too), AND you can review dependency upgrades via git as if they were your own code.

Of course, that's a mess when the JavaScript ecosystem has an infinite number of dependencies for a hello world.


Vendoring my dependencies wouldn't have saved me from a rare issue I had with npm packages: a package relied on an underlying API call to a machine-learning cloud API, and that API call became deprecated. Not writing code is the only sure way to have no bugs.


Using package-lock.json gives you the same effect.


Every package can be a one-line package if you minify it. Lines of code as a metric for code quality is always relative. The fact that this is a one-line package has nothing to do with the outcome; a one-line code change in a 5,000-line dependency could just as easily have broken create-react-app. The size is irrelevant.


This is correct. Many tools are split into multiple packages for — hear me out — convenience.

I regularly extract features from my apps into new npm packages. This way they can be reused by other apps.

Troglodytes can keep copy-pasting code between apps while npm users publish once and update everywhere.


how about "single function" instead?


Why NPM? I think that is the real question I don't see anybody asking. So I will...

Why NPM?? What is the point?


Right.

No point at all.

A waste of time and a yawning security hole.


A little copying is better than a little dependency.


nvm


Isn't this the story of leftpad?


Read the linked post.


FUCK I thought the linked article described the situation


way to read the article before commenting


Chill with the JS hate; this happens everywhere.

Maybe not to this extent, but if X (where X is whatever you are thinking of) had a similar number of people using it (especially junior people), this would happen there as well.


No.

Other languages don't publish/import packages that are one line of code. I have never seen an issue like this with any other language that I've worked with.

Any sane developer that needed a one-liner like this would just manually implement it.

Not to mention that these sorts of functions are unnecessary in languages with a good stdlib or statically typed languages like rust, etc.


I have posted one liners to crates.io that were eventually put in the stdlib.


True, this problem also exists in Rust, even going so far as people "claiming" and selling nice package names.


Name squatting on crates.io is another issue entirely, though. It's also a can of worms that I won't open.


I haven’t heard about selling at all. Have a pointer?


Know what happens every time people like you say this here on HN? They post the one-liner they would have manually implemented in their code base and it's wrong. The one that comes to mind is the "is-negative-number" package. Yes, the geniuses of Hacker News, after finding out there was an npm package for determining whether something was a negative number, could not correctly implement that function.

You and everyone here are not as clever as you think you are. This is why people prefer known-good implementations. The maintainer here did a bad release, big fucking deal.


Please don't use allcaps for emphasis on HN. This is in the site guidelines: https://news.ycombinator.com/newsguidelines.html.

As a stretch target: it's a bad idea to create demons out of an assortment of posts you randomly saw on HN. This site gets 3M posts a year. You can find basically anything in there.

https://news.ycombinator.com/item?id=22098687

What happens is that we each have pre-existing images that bug us (e.g. for example, people who overrate their own genius) and as we move around in the statistical cloud, random bits of whatever we run into stick to the pre-existing image and give it form. Poof, you have a demon—but actually it just became visible. Readers with other images see other demons and arrive at other generalizations. It's not good discussion because it's really about one thing but we make it about another, and comments that are skewed in that way limit their own interestingness. (I definitely don't mean to pick on you personally. We all do this.)


You may be shocked to find that there are very novice developers as well as those with 20+ years of experience who frequent HN. Using a few poorly written comments is a strawman, unless the comments you're referring to are written by the same people you're addressing here.


Maybe it's a failure of the language when it takes a third-party package to determine whether a number is greater than or less than zero?


> Maybe its a failure of the language when it takes a third party package to determine if a number is greater than or less than zero?

It's not a failure of the language. Javascript has comparison operators like every other language, it's entirely possible to determine if a number is greater than or less than zero without importing a third-party package.

What it is is a failure of modern JS development culture, because apparently it's anathema to even write a simple expression on your own rather than import a dependency tree of arbitrary depth and complexity and call a function that does the same thing.


Their code would have been wrong even in strongly typed languages because it considered 0 to be a negative number. What language prevents you from making that mistake?
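
For illustration, roughly the mistake being described (hypothetical names, not the actual package code):

    const isNegativeBuggy = n => n <= 0;                     // wrong: treats 0 as negative
    const isNegative = n => typeof n === 'number' && n < 0;  // 0, -0 and NaN are not negative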


> COULD NOT CORRECTLY IMPLEMENT THAT FUNCTION

As opposed to blindly trusting and adding a dependency for a random library with a one liner?

I don't think the "don't roll your own crypto" argument really applies here. Of course we can come up with hypothetical situations where developers are incompetent or don't test their code at all. This includes armchair analysis for a post on HN, by non-javascript developers.

I would argue that it's still better than adding a dependency. Heck, you could even copy/paste the correct code.

I know I'm not a perfect programmer, so important functionality like this gets unit tested as necessary. :-)


A package with over 11 million weekly installs vs. a brand new implementation by a coworker, when I might not necessarily be around for the code review? Absolutely. I would blindly trust the package 100% of the time. Zero hesitation.


As demonstrated in this reply - https://news.ycombinator.com/item?id=22979718, this package is also wrong. So the argument is invalid?


The package isn't wrong, you're just confused about duck typing.


This happens because libraries installed by create-react-app depend on many other libraries (1026 transitive dependencies as of today).

As a comparison, Django, a large Python web framework, has only three dependencies (pytz, sqlparse, and asgiref), which don't have dependencies themselves.


Yeah, in 10 years of Python and front end development, this is the most fragile ecosystem.

That's not a criticism of NPM, just the way it's being used today.


pytz is a great example here. Imagine a separate package for every time zone.


Perhaps, but I think the JS ecosystem encourages dependency explosion like no other. Looking at a 6 or 7 year old lazily written Rails app, with a lot of functionality written throughout the years, I see about 200 gems. Creating an empty app with create-react-app, it has about 1000 packages.


No, this does not happen everywhere. Show me this happening in Debian.


You can't use the very latest version of any software in Debian at all without adding a custom repository, at which point you have the same issue. So the comparison is not apples to apples.


You can use Debian Unstable, or maybe just use stable and reliable dependencies so that your software is also stable and reliable. That would require putting in some effort, though, and we can't be having that, can we?


The "slow and steady" approach works well for mature or stagnant ecosystems, but only when the packages are small enough that distribution developers can reasonably backport security fixes. That clearly doesn't work with big programs like Chrome and Firefox, so they have to resort to shipping the latest ESR version.

Writing JavaScript on Debian is practically impossible without sidestepping the package manager in some way. In a lot of cases, the hacks you have to do to run up-to-date software on a distro like Debian decrease reliability significantly.


You can do that with NPM if you pin your dependencies to exact versions, which is the same solution that you would use for any other package manager, and basically what Debian and other Linux distros do for you. I don't know why you think this problem is somehow unique to NPM or the JavaScript ecosystem.


And yet, somehow Debian isn't in the news every few months. There's a fundamental difference in culture, for one. But the fundamental difference in approach is there, too: Debian packages are vetted; npm packages are not.


How about a rolling release like openSUSE tumbleweed then? I have been using it for years, I generally update once a week and I have never broken my system due to an update. Never.


Haha, I understand what you mean, but Debian's https://wiki.debian.org/DontBreakDebian page is not an accident :)

I made my comment more as a joke; shit happens everywhere, and as I said, maybe not to this extent.


All of this is telling users how to avoid breaking Debian, and mistakes that they ought to avoid. This isn't Debian being broken and the users being collateral damage. This isn't a symptom of the very Debian ecosystem itself being fundamentally broken.


I have been using Debian since potato, and I have seen some damage :D


The package referred to in the clickbait title is `is-promise`


How would you rewrite the title to not be clickbait?


"1-line package "is-promise" broke `NPM create-react-app`" ?


I would include the name of the package.


The title doesn't strike me as clickbait. The significant thing is what happened.


Personally as a JS dev, the significant thing is which package it was. These stories happen all the time, so when I see them I’d rather know which package it is at a glance so I know if I’m affected.


The fun thing about building applications that overuse npm is that you usually don't know exactly what packages you have, not until you check for the specific package. So you probably don't "know at a glance" whether you're affected or not.


Hah, true. I got bit by one of these two days ago where the offending package was trying to install node itself (wtf?). It was two levels deep of `yarn why` before I figured out the issue. Fortunately I've been around long enough that the first thing I did was check the package for github issues... sure enough, found a 5 hour old issue with a bunch of people complaining of failing builds. If I hadn't searched first, I probably would have banged my head for another couple hours...


Does the package name need to be in the title if it's already the URL? :)


I don't know how "clickbait" that title can be when it is, in fact, longer than the line of code in question:

    declare function isPromise<T, S>(obj: Promise<T> | S): obj is Promise<T>;
This is, indeed, the only line of exported code in the entire package.

I genuinely don't understand the NPM world.


> I genuinely don't understand the NPM world.

You're right, you don't. What you posted is just the function declaration, not the implementation.


I think it reflects how the evolution of the JS ecosystem strongly resembles natural evolution. This package is now a vestigial organ, but there was a time when it served a useful purpose. Other packages formed connective tissue to this package, and since those package may still be useful, this one has stuck around.


Me neither. I can't wait for Deno 1.0 next month.

https://deno.land/


How exactly will the new runtime fix the habit of JavaScript developers of pulling in millions of dependencies?


Makes me look forward to this even more: https://romejs.dev/ since one of the ideas is that it will have no third-party dependencies...


What is this exactly? The website is a bit unclear.


Deno is much like node (uses the V8 engine, does not require a browser) and was created by the man who created node.


I was wrong about what this was; I have edited this comment.


Why would a Rust wrapper around the C++ project that is V8, which implements a garbage-collected programming language and environment, "use less ram" just by virtue of some parts of it being written in Rust?


That's just the type definition. Think of it like the .h if you're into that sort of thing.


NPM is the answer to the question: what would happen if everyone refused to use any idioms ever, and instead replaced them all with packages?

God help you if you import a JavaScript (or Rust) package today, lest you fall into a gaping chasm of endless cascading dependencies.


It's not NPM's fault, it's the developers' fault.

I would never in my right mind publish a one-line package: Python, JavaScript, whatever.

And I would never add a one-line package as a dependency.


This is why regression suites are important.

EDIT: I wasn't dissing the developers. They have regression tests; this was just an accident. I was just saying it's important. My bad (too late to delete).


The package does have CI setup, however the test matrix does not cover the latest node versions (which are the ones that are affected).

See https://github.com/then/is-promise/blob/master/.travis.yml (missing v11, v12, v13, v14)



The failing CI here is unrelated to the issue, but it's still pretty bad that a release was made with failing CI.


It was the first release in five years (followed quickly by a 3.5-hour release and a sub-minute release) [0], so they may not have wanted to dig into CI.

[0] https://github.com/then/is-promise/releases


Could create-react-app have avoided this through regression suites?


Not really. NPM relies heavily on semver - https://semver.org/. In this case, the package in question bumped a minor version, which means it should be backwards compatible, but it wasn't for later versions of Node.

Of course, you can always lock your build to exact versions of your dependencies (lock files in NPM used to be a complete cluster; in my opinion they are less of a cluster now: you can pretty much do everything you want with them, but there are some gotchas that make it easy to shoot yourself in the foot). The issue is that when you run 'npm install', it will pull the latest semver-compatible versions of your dependencies.

So for everyone decrying how this is a bad example of NPM and the javascript ecosystem, I really think the opposite is true. Yes, it broke a lot of upstream dependencies, but importantly only for new builds of those items, and furthermore it was found almost immediately.

Also, of course, you can specify exact versions of your dependencies - you don't have to rely on semver. That means, though, that you need to be more vigilant about pulling in bug fixes and security fixes, and most people take the tradeoff that they are comfortable pulling in patch or minor versions, but use lock files once they have a build they have verified.
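
For illustration, a small sketch using the `semver` package (the range matcher npm itself uses) showing why a caret range, the npm default, picks up a new minor release while an exact pin does not; the 2.1.0/2.2.0 versions here stand in for the is-promise releases:

    const semver = require('semver');

    // "^2.1.0" accepts any 2.x >= 2.1.0, so a fresh install resolves to the new 2.2.0.
    console.log(semver.satisfies('2.2.0', '^2.1.0')); // true
    // An exact pin only ever matches that one version.
    console.log(semver.satisfies('2.2.0', '2.1.0'));  // false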


The regression suite never gets to run if it shares the dependency.

And the system under test wouldn't even compile, so the tests couldn't run either. So it isn't so much the regression suite saving you as it is just acting as the client of first resort.


CRA would be running the tests, not is-promise. CRA could have pinned every dep, and had a bot (dependabot) automatically run tests against every new version of every depended-upon package, and update only when those tests pass.


Bumping your comment because I would like to know. I'm following the github thread.


Potentially. If cra had pinned all their deps, and used a bot to automatically bump deps contingent on passing a comprehensive regression matrix, this would have been avoided. GitHub's Dependabot is good for this. In my opinion everybody besides libraries should pin deps and use dependabot.


Exactly. We use Renovatebot for the same purpose. It pins dependencies and creates PRs for updates. Amazing to see how often the builds break, even sometimes after minor updates. But at least we fix them before release, and not after... :)


Yep. One of the very nice things about npm/node versus python or go or some others is that package locks and dependency pinning are possible. But few people seem to use them.

I've seen reports of people using a go library that gets a minor update and breaks their app, at which point they become SOL as go always installs the latest version. I myself have been working in python projects where the dockerfile simply says "pip install blah" and I get different deps than the working version. No clue why anyone would be okay with working like that.


It's not true that Go always installs the latest version of a dependency. `go get github.com/x/y@v1.3.4` installs v1.3.4 of x/y, assuming there is a tag matching that.


I’m not familiar with go, would this persist to other people attempting to install the package?

The issue I’ve seen is:

https://github.com/go-yaml/yaml/issues/558

> Please do follow semver as it's a nightmare for us to manage particularly using go module (you can't stick to a particular version).

And of course everybody’s idea of a breaking change is different, so this idea that you can’t install a particular version seems unworkable.


If they'd pinned the dependency versions and run the tests before updating the pins, the tests should have caught it.


Install any moderately complex nodejs lib or app and it will throw tons of warnings, ignored errors, and security issue alerts. As you should with any app running in production, lock down everything and watch network traffic because there are innumerable backdoors in the JavaScript ecosystem.


Please show the community the backdoors you are aware of.


My company's current production electron app has 360 npm dependencies. We have CI for the UI but not for the USB/FFI stack, so any time we have to touch that code everyone blanches.

> innumerable backdoors in the JavaScript ecosystem.

Same goes for Python and CPAN. Any "click here for fancy module" installer has this problem.


You don't need so many dependencies with python. Python is a batteries included language, and so are most python libraries.


I fully disagree.

Open up any serious Python project and you'll find significant dependencies. Math, graphics, IO, stats, ML... anything you really want to do requires dependencies. In fact, one of my biggest issues with Python is the cross-platform incompatibility of many packages which makes it a terrible choice for my deployment. (Even worse if the project has Cython components!)

I often end up having to scour GitHub for forked pywheels that aren't vetted, which are then cloned ad infinitum.

It's a tradeoff between extensibility and open-source/free software on one side, and robustness on the other.


Math -> You use numpy or scipy; neither has any significant dependencies. And libraries this complex are not even available for node.

Graphics -> Python comes with included Tkinter, and others are also one include away.

Stats -> Scipy does a lot of the stuff, and there is a built-in package for stats. Again, no stats package has 100 dependencies, and node doesn't have anything with even 1/10th of the features.

ML -> I mean, node has nothing here, nothing, while pytorch has a total of six dependencies. In node, left-pad alone might have that many.

Python doesn't need left-pad, isNumber, isInteger, isOdd, isPromise... take your pick.

> In fact, one of my biggest issues with Python is the cross-platform incompatibility of many packages which makes it a terrible choice for my deployment. (Even worse if the project has Cython components!)

But Python has high-performance libraries written in C; can you even use node for any of the cases where Python has platform-compat issues?

It is a tradeoff, and there is no comparison: Python needs far, far fewer dependencies than node. E.g., Flask has 2 total dependencies while Express has 48 direct dependencies, and even then Flask comes out ahead on features, so much so that you would need many more packages to do the same stuff with Express.


I'm not comparing functionality of Node and Python. They are different beasts. I was pointing out problems inherent with Python packaging, which you didn't even address in your fanboy rant.


Last time I used create-react-app, it installed 30,000 files. That number alone is a problem by itself.


[flagged]


This is nothing but an attempt at brigading. Get out of here with this.


[flagged]


Agreed. Libraries and tools often don't work in a straightforward manner. Lots of tools reach below the surface and do their own tampering and monkey-patching of the runtime, module system or environment. Layer upon layer gets deposited over time. It's like doing construction on topsoil riddled with unmarked gas, water, and electrical lines.


You had one job...


mrw you're using a package to do a type-check:

    return !!obj && (typeof obj === 'object' || typeof obj === 'function') && typeof obj.then === 'function'


Is this news? Happened so many times before. NPM is broken. Yarn 2.0 is never gonna take off. These problems have been fixed long before. Waste of life.



