
Everyone I know uses some form of lock file, and most modern programming languages support them.

As for upgrading only when absolutely necessary, let's be honest, nothing is absolutely necessary. If the software is old, or slow, or buggy, well, dear users, you'll just have to deal with it.

In my experience, however, it's easier to keep dependencies relatively up to date all the time, and do the occasional change that goes along with each upgrade, than to wait five years until it's absolutely necessary, at which point upgrading will be a nightmare.

I'd much rather spend 10 minutes each week reading through the short changelogs of 5 dependencies to check that yes, the changes are simple enough that they can be merged without fear, and with the confidence that they're compatible with all the other up-to-date dependencies.
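
For a Python project, that weekly check can be as small as a wrapper around pip's own outdated-package report. A minimal sketch (the weekly cadence is just how I'd run it, not anything pip enforces):

    import json
    import subprocess

    # Ask pip which installed packages have newer releases; each JSON entry
    # has "name", "version" (installed) and "latest_version".
    result = subprocess.run(
        ["pip", "list", "--outdated", "--format=json"],
        capture_output=True, text=True, check=True,
    )
    for pkg in json.loads(result.stdout):
        print(f"{pkg['name']}: {pkg['version']} -> {pkg['latest_version']}")

The output is the short list whose changelogs get the 10 minutes.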




Both extremes are bad. If you never change anything, you fall behind on a lot of security updates and bug fixes. The longer you wait, the harder it is to move.

The “stay current” model comes with risks too. It’s just a matter of figuring out which approach has the better value-to-risk trade-off, and how to mitigate the risks.


One company I worked for had a bot that would periodically go and try to upgrade each individual app dependency, then see if everything built and passed tests.

If it got a green build, it would make a PR with the upgrades, which you could then either choose to merge, or tell the bot to STFU about that dependency (or optionally, STFU until $SOME_NEWER_VERSION or higher is available, or there's a security issue with the current version).

If not, it would send a complain-y email to the dev team about it, which we could either silence or address by manually doing the upgrade.

This worked out rather well for us. I think the net effect of having the bot was to make sure we devs actually paid attention to what versions of our dependencies we were using.
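
The bot itself was internal, so here is only a minimal sketch of the same loop, assuming a Python project tested with pytest; open_pr and notify_team are hypothetical stubs standing in for whatever your code host and mail setup would need:

    import subprocess

    def open_pr(package: str) -> None:
        # Hypothetical stub: the real bot pushed a branch and opened a PR here.
        print(f"would open a PR upgrading {package}")

    def notify_team(package: str) -> None:
        # Hypothetical stub: the real bot sent the complain-y email here.
        print(f"would email the team: upgrading {package} broke the build/tests")

    def try_upgrade(package: str) -> bool:
        """Bump one dependency in the current environment and run the tests."""
        subprocess.run(["pip", "install", "--upgrade", package], check=True)
        return subprocess.run(["pytest", "-q"]).returncode == 0

    def run_bot(dependencies: list[str]) -> None:
        # One dependency at a time, so a red build points at a single upgrade.
        # A real bot would also reset the environment between attempts.
        for package in dependencies:
            if try_upgrade(package):
                open_pr(package)
            else:
                notify_team(package)

    if __name__ == "__main__":
        run_bot(["requests", "redis"])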


The solution was, and has always been, to have someone review every commit to the libraries your app uses, and raise red flags for security vulnerabilities and breaking changes to your application.

Oh boy, I would love to work for a company that has someone like that. Know of any? At the very least, I would love for a company to just give me time to review library changes in any sort of detail.


> It’s just a matter of figuring out which approach has the better value-to-risk trade-off

This is also bad.

If you want both stability/reliability and security updates, you need somebody to track security issues and selectively backport patches.

This is what some Linux distributions do: mainly Debian, Ubuntu, the paid versions of SuSE and Red Hat, and so on.


The sweet spot would be using a lock file for reproducible builds, plus scheduled dependency upgrades. Before each upgrade you can check whether the new version breaks something and plan for it.

I've used this in Python, where I try to keep dependencies to a minimum. I don't know if that would work in JavaScript; the dependency tree is huge there.
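
In Python, one way to get both halves is pip-tools: a requirements.in of top-level dependencies, a fully pinned and hashed requirements.txt as the lock file, and a scheduled job that re-resolves it so the upgrade lands as a reviewable diff. A minimal sketch (the file names and cadence are assumptions, not anything pip-tools requires):

    import subprocess

    # Re-resolve everything in requirements.in to the newest allowed versions
    # and write a fully pinned, hashed lock file. Run this from cron/CI on
    # whatever cadence you've planned, then open the resulting diff for review.
    subprocess.run(
        ["pip-compile", "--upgrade", "--generate-hashes",
         "--output-file", "requirements.txt", "requirements.in"],
        check=True,
    )

    # Builds elsewhere install strictly from the lock file, so they stay
    # reproducible: pip install --require-hashes -r requirements.txt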


How do you deal with regressions?

For example, we once upgraded the redis client. One brief revision of the parser submodule had an apparent resource leak (I can't imagine how...), causing all of our services to ABEND after a few hours.

Because everything is updated aggressively, and there are so many dependencies, we couldn't easily back out changes.

--

FWIW, Gilt's "Test Into Production" strategy is the first and only sane regime I've heard of for "Agile". Potentially a reasonable successor to the era when teams did actual QA & Test.

Sorry, I don't have a handy cite. I advocated for Test Into Prod. Alas, we didn't get very far before our ant farm got another good shake.


When I see a regression, I look at what was recently updated (in the past hours or days), and it's usually one of those packages. Because frequent updates tend to be independent, it's usually not difficult to revert that change (e.g. if I update react and lodash at the same time, chances are really good that I can revert one of those changes independently without any issues).


This is the way.

Also, you should try to update as few things as possible at once, and let your changes "soak" in production for a while before going all "upgrade ALL the things!"

Why? Well, sometimes things that fail take a while to actually start showing you symptoms. Sometimes, the failure itself takes a while to propagate, simply due to the number of servers you have.

And one of these days, a cosmic ray or some other random memory/disk error is going to flip a critical bit in some crucial file in that neato library you depend on. And, oh, the fouled-up build is only going to go out to a subset of your machines.

You'll be glad you didn't "upgrade ALL the things!" when any of these things ends up happening to you.


Totally agree. That's why I'm asking if compulsively updating modules is a JavaScript, nodejs, whatever pathology.

This team pushed multiple changes to prod per day. And the load balancer with autoscaling was bouncing instances on its own. And, and, and...

So resource leaks were being masked. The problem was only noticed during a lull in work.

And then, because so many releases had passed, delta debugging was a lot tougher. And then, because nodejs & npm have a culture of one module per line of source (joking!), figuring out which module to blame took even more time.

I think continuous deploys are kinda cool. But not without a safety net of some sort.


I'm gonna toss this grenade out here, just because I don't see a better place to do it lol...

One of the companies I worked at had an incident a couple years before I started, where there were multiple malicious Python libraries being used in the code. For 3 months.

Luckily, the libraries didn't do anything significant except ping an IP address in China, actually did perform their advertised functionality, didn't exfiltrate any data other than the source IP address on the ping packets, and were easy to replace once the situation was found out. But for months, they had servers pinging or attempting to ping somewhere in China.

Oh, and those libraries made it into the internal repos we were using to pin versions, too.

Beyond being a speecy spicy meatball of a story, the moral here is that you have to be constantly on your guard and somehow verify every single line of code that goes into your application.
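
Reviewing every line is the ideal; a much weaker but automatable layer is checking that the artifacts you install still match the hashes you recorded when you reviewed them. A minimal sketch, assuming a hypothetical JSON manifest of expected sha256 digests; note this only proves you got the same artifact you reviewed, and can't catch a package that was malicious from day one, as in the story above:

    import hashlib
    import json
    from pathlib import Path

    def sha256_of(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    def verify(manifest_path: Path, artifact_dir: Path) -> list[str]:
        # Hypothetical manifest: {"some_lib-1.2.3-py3-none-any.whl": "<sha256>", ...}
        expected = json.loads(manifest_path.read_text())
        problems = []
        for name, digest in expected.items():
            artifact = artifact_dir / name
            if not artifact.exists():
                problems.append(f"missing: {name}")
            elif sha256_of(artifact) != digest:
                problems.append(f"hash mismatch: {name}")
        return problems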


Also, you really ought to not allow egress from your servers except to your load balancer.

Just attempting it should set off alarms.
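
Enforcement belongs in the firewall or security groups, but the detection side is easy to sketch. This assumes psutil is installed, and the allowlist is a hypothetical load-balancer address:

    import psutil

    ALLOWED_EGRESS = {"10.0.0.5"}  # hypothetical load balancer / proxy address

    # Needs enough privileges to see other processes' sockets on most systems.
    for conn in psutil.net_connections(kind="inet"):
        # raddr is empty for listening sockets; also skip loopback traffic.
        if not conn.raddr or conn.raddr.ip in ("127.0.0.1", "::1"):
            continue
        if conn.raddr.ip not in ALLOWED_EGRESS:
            # In production you'd page/alert here instead of printing.
            print(f"unexpected egress: pid={conn.pid} -> {conn.raddr.ip}:{conn.raddr.port}")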


Correct. That's probably actually why it took so long to detect the malicious packages: when they got installed on machines running internal services, nothing much unexpected happened. Come to think of it, I actually don't remember how the packages were detected and identified to begin with.


In my experience, although lock files are widely used in Node development, using them as part of a reproducible build system is far less prevalent in the wild. In fact, the majority of Node development I've seen eschews reproducible builds on the basis that things are moving too fast for that anyway, as if it were somehow a DRY violation, but for CI/CD. I would love to hear from Node shops that have established a well-followed reproducible CI/CD setup.



