Regardless, can we talk about the conduct in this GitHub thread? I know every community is different but is it common to have memes and jokes posted this quickly and often in a GitHub issue? It makes it really hard to follow and discourages genuinely useful discussion of workarounds or progress.
Some people who entered the workforce in the last few years have trouble making the distinction between work and play contexts. It's extremely visible on GitHub, Slack, etc., which increasingly look like Discord or Reddit (GIFs, memes, random jokes in the middle of serious discussions).
Eternal September or the September that never ended is Usenet slang for a period beginning in September 1993, the month that Internet service provider America Online began offering Usenet access to its many users, overwhelming the existing culture for online forums. -- Wikipedia
In my observation, this demographic is most densely concentrated in the JavaScript community. Not sure what that says about the language itself, but there’s definitely a glut of unprofessional JS devs out there.
Ooh, back in my coding days it was PHP. And before that, in my undergrad years, it was VB6 ... which provided me and several other programmers with lots of frustration and consulting hours fixing the mess the owner's kid made while "programming" their new CRUD system.
There will always be a "cool new" language attracting newcomers and kids.
Rather: if I'm looking for the solution to some problem, I don't want to scroll through memes, because I want to be productive and get things done. Spam and fog in the resolution thread obviously doesn't help with that; quite the opposite.
Basic moderation in some form would be useful there. It's interesting: there's not (that I've seen, anyway) a tremendous amount of spam or, for the most part, off-topic conversation in GitHub issues. I've occasionally seen phishing attempts (I think) along the lines of "hey, just go download and run curl ____.sh | bash and you'll be good to go", but that sort of banter isn't typical (at least in the circles I run in).
I mean there is a difference between having fun and conducting yourself in a professional manner where appropriate.
With the increase in adoption of certain technologies (Node etc.) I have noticed a correlation between the people employed to work with them and a poor level of professionalism in the workplace: in their work, in how they deal with other people (in the UK), and even in how they dress. Similar age groups in other areas don't display these traits.
A number of these people work as contractors as well, so at least it was easy to get rid of them.
While there are many points in your post that I agree with, we really should stop worrying about how people dress.
If it's a developer sitting behind a computer screen all day long with zero customer contact whatsoever, they really shouldn't be forced to follow a certain dress code just to satisfy someone's standard of professionalism. Such feelings are in a similar spirit to "women should not wear trousers" a few decades ago: there is just no rationale behind it other than social conditioning.
There is a limit. I'm sure you have one too if you really think about it. Is it tattoos all over the face? What about sandals? Or shorts? Or going barefoot around the office?
I don't know... It's about people giving me money for work. I don't feel like I can go to work dressed like any given Sunday.
To me (and of course, it's a personal opinion) it shows that someone cares. It's like when you do your resume, you want it clean and presentable, without typos, to show that you care.
We all like good user interfaces and experiences in our programs. To me, the look is like the presentation layer of our own UX. Is it required for our work? Most of the time it is not, because our work depends on the logic layers, but it doesn't hurt to have a nice presentation layer.
And I'm not talking about going to work in a D&G suit.
EDIT: and I'm not saying that I would discriminate or assume right away that someone doesn't care! I don't do that. My attitude about looking presentable is one-directional: from me towards others.
They care by being there when the team needs them.
They care by being honest, authentic and candid.
They care by making things simpler and smoother for themselves and those around them.
They care by striving to avoid office politics and back stabbing.
They care by being the team mate we all wish we had and strive to be.
They care by making our common working space a place we are all eager to come to in the morning.
And I would pick a reliable, self-aware, well-rounded, face tattooed, green haired and barefoot colleague that's comfortable in their own skin any day, over any well-dressed, high maintenance, drama king / queen.
What's interesting, though, is that even knowing and thinking that, I couldn't help but be spontaneously biased towards whoever made the so-called effort of dressing to look the part. And that's a problem.
Also, I am nowhere near being that ideal team mate. More like a work in progress >.<'
> biased towards he or she who made the so-called effort of dressing to look the part
Anecdote: my wife is a psychologist. They are taught to observe how patients present themselves at an appointment. Are they presentable? What about personal hygiene? Do they take care of themselves? These are characteristics that say a lot, and should be taken into account when meeting someone. There is a difference between taking care of oneself and trying to appear like something else by wearing branded clothes, or by dressing like a 60's hippie (or whatever).
> And I would pick a reliable, self-aware, well-rounded, face tattooed, green haired and barefoot colleague
Me too. But the first three are characteristics you wouldn't know for a while. In the meantime, during hiring, you have to work with what you see. Unfortunately, it's like that.
I also believe that the worst and most dangerous people in the world walk around in expensive suits, behave correctly in public and never say a bad word.
Tattoos, someone's skin color, or moles on people's faces at work don't bother me. Someone complaining about them would.
Sandals/shorts/barefeet? It's winter here but in those brief moments of summer please do that. Wearing a three piece suit means you won't be able to work as hard during the heat wave.
It's an interesting discussion, no doubt. It's sometimes easier to have a "dress code/standard" in an office, as it sets an (externalised) baseline.
I think the key thing is balance: not everyone is comfortable wearing a three-piece suit, but few people are comfortable working with someone who has poor hygiene (e.g. doesn't shower after the lunchtime gym).
I think as long as someone is clean and presentable (context dependent) there shouldn't be an issue.
It helps focus the culture on the business over other social cues and aspects, and it removes the anxiety that different social groups might have with that workplace.
If that makes the office more diverse, then the real issue is in the hiring bias, not in how you dress. For sure, if you only hire people like you, which can happen in both a business environment and a social one, you will have a less diverse office, by definition.
I disagree entirely that we can't be social with people different from us. I also disagree that business shouldn't be social, but that's mostly an opinion.
It's the same argument people make for uniforms in school. It seems logical; I just wish there were some studies that confirmed the results, though it seems hard to quantify.
I'm not saying I want memes everywhere, but since I'm spending 1/3 of my life at work I'd rather it be a good time.
> Go post memes on the LKML and see what happens
I think the seriousness (almost angry tone) in certain communities is also a disservice to attracting new members. If I make my first PR in some project, it's easy to feel attacked when the reply is a negative one, albeit strictly professional. If it instead has a more light-hearted tone, with a bit more of a fun-serious-fun sandwich, one feels more welcome.
The serious tone is a huge draw to the LKML and that way of working. They really do care about not having left-pad type meltdowns. They really do care about being able to find information easily, even years after the fact.
There's a big difference between shooting the shit in your private communications and cluttering up places where people are looking for solutions.
Contrasting an opinion you don't like with the other extreme is usually called a strawman and is not a good way to argue.
As others have said, I too am not against a light-hearted tone. But when you have a legitimate problem that might be blocking you from doing your work and your only source of information is a GitHub issue then it should go without saying that the information should be dense and compressed and it should absolutely lack memes, yes.
Honestly, I work badly with people who invoke professionalism as a form of conduct too much. That said, my professional conduct on the technical side is always to vendor that shit if the project is of a nature that allows doing so (mostly restricted to proprietary software). Even for builds. There are harsh disadvantages to doing that if maintenance is disregarded, but those are problems you can tackle yourself instead of being dependent on third-party tools working.
The sheer number of NPM users means this can happen from time to time, and the stability they provide is nothing short of exceptional.
Otherwise professional conduct mainly comes down to creating an atmosphere of noncommittal distance to deal with difficult personalities. I think being subjected to it for too long should disintegrate any personality, because it is completely unnatural. Hard to imagine people longing for more of that.
It should get quarantined into a different channel or server. If not, and you don't like a bunch of shitty meme spam, it's difficult to filter out while keeping your coworkers unblocked.
I think it's fine to joke, but even if I try, I really have to make an effort to not get angry when someone at work or an external replies with something written like this:
"yeah, u know, i could do it better, lol".
Edit: to add that I agree there is a problem with people failing to make a distinction between work and play contexts.
Keypresses are a pretty poor metric to base communication upon. It annoys me to see "w/" and "w/o" anywhere except Twitter. The meaning isn't clear to many non-native speakers, and it's jarring: we recognise the most common words by the shape of the whole word, so it's easier to read "with" and "without".
"&" is a ligature for the letters "et". A traditional way to handwrite it, other than as &, is a "crossed epsilon", something like Ɛ̸, which looks more like "et". There's a Unicode character closer to this form: 🙲. I have no preference between 🙲, &, etc. and et cetera — none is common enough that I would recognize it by shape while reading prose.
How about as a stylistic choice? There are lots of little choices like that one makes when writing that together form a person's individual writing style.
You can also blame current corporate practices and culture in the development industry for encouraging this sort of behavior and tailoring environments to attract fresh grads.
It's certainly acceptable to blow off steam but there are times and places for that. The official issue thread is not one of them. It looked like a Reddit post more so than the official NPM repo. I have nothing against Reddit just pointing out it's a reasonable venue for this sort of commentary (or even HN).
The thread looked like folks celebrating like the power went out and they'll be going home from school for the day instead of "oh crap, this is my job and I have to fix this issue or my livelihood is at risk because I decided to have no backup plans in place for such a situation where NPM is down."
Agreed. Github isn't a purely professional tool, and NPM isn't a purely professional tool, and Javascript isn't a purely professional tool.
I'm seeing comments on here that some of these people are acting like high school kids. Some of them probably are high school kids.
And that's fine. One of the biggest goals of Open Source in the first place is that software shouldn't only be developed and used by professionals. If you want that reality, you have to tolerate that unprofessional people are sometimes going to be part of the surrounding conversation for some projects.
In short, if the NPM maintainers hate this, they can modify community rules or remove spammy posts. And if they don't hate it, then I don't really get what the big deal is about allowing people to be unprofessional when talking about code -- other than maybe arguing that Github should have better filtering tools for readers.
Indeed, Github intrinsically caters for both hobbyist coders (some of whom may be quite young) and professional devs. Some degree of culture shock is inevitable.
It's not about employment status. Injecting noise into a channel that is currently being used to diagnose a severe bug affecting many people disrespects the time of those working to fix the issue. Other, more appropriate venues are available for people who want to blow off some steam.
Yep, I've noticed this in our local Teams groups too. The deluge of animations can be incredibly distracting, and it fills space, making it more of a faff to go back and check details from earlier in the discussion.
In GH in particular, I'm not sure why images are allowed to be rendered in comments at all. They are almost always harmful; external links could be used instead.
> I think the introduction of emoji reactions was a bad idea, it gamifies the issues system.
It depends on the community. I've seen many where the reactions are genuine and actually serve a purpose. From what I've seen, it's almost always a popular js/node project that attracts the kind of behaviour seen in this thread (based on my own observations).
The latest example is vscode's santa hat[1]. That thread seems mostly empty now, but I remember a huge meme thread going on in that issue, or on reddit/HN, while it was happening.
Your comparison of today's js/node to PHP from 20 years ago is interesting.
I wonder if javascript will be in a similar place in 10-20 years to where PHP is today (i.e. "it's not like it used to be and things are actually quite good now"). The language itself (javascript) might be there already, but is the community anywhere close yet? SQL injections used to be the thing PHP was known for, but not so much these days (the community is more experienced?). Meanwhile, problems with NPM and the whole packaging situation are what javascript is known for these days, but will that still be so in 20 years?
Javascript is a far better designed language than PHP. Its identifiers don't have their lengths picked because the maintainer used string length as a hash function, to name one notorious early-PHP feature.
That's just one data point, and you could counter that javascript has not one but two null values: null and undefined.
PHP got (or is getting?) type hinting, while javascript doesn't have such a thing (typescript and flow don't count). Javascript is event-loop based, while PHP is (afaik) request/response based. These days javascript is usually compiled for the web, while PHP just sits there on the server and you can swap out the source files when you upgrade. You could go on and on.
I actually noticed your comments as a voice of reason and upvoted. I have a feeling that the issue in question was probably linked from some Discord/Slack/subreddit and everyone just jumped on the bandwagon.
The countless "+1" and "same here" comments without additional info are a huge problem on GitHub as well. A simple click on a +1 reaction would suffice instead of a full comment. It's especially irritating when you follow a lot of issues hoping to get notified when there is a resolution, but instead you get spammed by people making these pointless comments.
It's a social problem rather than a technical one at this point. When Github had no emojis people had no choice but to leave '+1' comments. Now even with that choice some communities still comment in this way for some reason. Commenting is open-ended and moderation would be needed to filter out the repetition. People can also post GIFs, but they can also use the same image upload functionality to post screenshots of bug repros. It hinges on what the user believes is the better choice for getting the point across.
About all you can do is change the mindset of the community making the comments, which is hard as they're numerous.
Not related to GitHub but the same issue is a pet peeve of mine in Whatsapp and Facebook messenger groups as well. Everything's a message so you end up getting notifications for people sending thumbs up or what have you[1].
It really clutters the conversation so when you're trying to find, e.g., the flight information, or the link to $IMPORTANT_THING, that somebody posted a few days ago you end up scrolling through screens and screens of total guff.
I wish there was a quick way to collapse messages that aren't relevant or useful, and that goes for GitHub issues as well.
[1] Apart from the fact this is distracting, and potentially a nuisance if you're working, driving, whatever, it also drains your phone's battery.
I have no comment on #2, as I don't play in that sandbox.
However, regarding #1... the interface allows people to post images in their comments on issues. This has a valid, useful reason - for showing screenshots of bugs, for example.
The problem is not the interface. The problem is the people using the interface.
Eh... I think 90% of the complaints people have here are related to the interface.
1. When I sign up for updates I get spammed with notifications -- you should be able to listen only to updates from the project maintainers.
2. I can't find the official status -- again, this is a problem because the official status is buried inside a conversation thread. This could be pretty easily solved by allowing maintainers to pin comments, or by (again) allowing filtering comments by project maintainers.
3. Replies to comments get lost in the memes -- because Github issues don't support threading, and (again) there's no filter by replies or mentions.
Or...
4. Memes just make me mad because they're unprofessional -- and okay, this one is the community. But I just can't muster the energy to care about this, or to feel sympathy for the people who care about it. If we were talking about abuse, or 3rd-party unwanted advertisements, maybe I could get on board. But a gif is not a real problem.
The fact that Github has literally no searching at all on issue threads is the fault of the tool, not the community. The fact that NPM doesn't have an official status page is the fault of NPM, not the community.
The other day I needed to use it for the first time in a year. A package that is used in many other packages hosted on npm was returning a 404. Slowly, Stack Overflow started to fill up with similar questions.
Apparently four important packages were unavailable in my region because they were deleted by accident. Some people were VPNing into Europe; I found an Italian mirror.
Not sure if things are always that crazy but I don't have a reliable feel.
From the outside, it seems like Javascript developers are all in high school, participating in a popularity contest, feel the need to sprinkle emoji everywhere and communicate solely with memes and gifs.
This almost entirely explains the javascript ecosystem and the seemingly ever-increasing number, and frequently changing popularity, of the various frontend frameworks. It's a hype-driven ecosystem, so it's not surprising it's also full of people posting memes and such.
> From the outside, it seems like Javascript developers are all in high school
I wonder what the reason is for this kind of behavior to exist only in the Javascript community? Could it be that a vast majority of Javascript developers are really in high school? Are there any good stats sources for it?
Idk, it wasn't much more annoying than the 400th person chiming in with a "me too" before that. I might already have completely missed a comment explaining the situation among all of those.
Actually, those animated gifs might have been better than all the "me too"s, since a text-only post explaining the situation would have stuck out much better against them.
Most companies try to stay professional, and many people carry that behavior onto github. However, I have worked at a major tech company that trended closer to "millennial" in age. There, memes were an everyday part of the corporate culture; professional emails quite often included a meme. I can see someone coming from a corporate culture like that thinking it would be perfectly normal to carry this behavior into a public github issue like this.
I was at Amazon and meme wars were common. I get that it may seem absurd but when done at the right time, humor can help people defuse a tense situation.
I am honestly surprised by the number of very upset people in this thread and the trashing of millennials. Yes, they're the butt of every workplace generational joke, yet most millennials have kids going to college now.
If you think millennials like memes, wait until you work with some zoomers. A lot of people here are screaming "get off my lawn" right now.
I do not get this. My first job was at a company which had very unprofessional communication internally (for good and bad) but we always made a clear distinction between internal and external communication. There is a time and place for everything.
Sure, it's an opinion, but I've been keeping an eye on this issue since very early on, and I have seen some mentions of workarounds. Best of luck finding them, however, as they're buried somewhere in the middle.
Given that the comments in question add absolutely no value to the issue and instead insert a bunch of visual noise and fog, I'd argue it's not just an opinion.
Honestly, don't depend on central repositories for daily availability. Especially if you are doing CI that redownloads everything from scratch. Use something like artifactory to cache the repository you are using: https://www.jfrog.com/confluence/display/RTF/npm+Registry
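As a concrete illustration, pointing npm at such a caching proxy is a one-line client-side change; the hostname below is hypothetical, and the repo path follows Artifactory's usual npm layout:

```ini
; .npmrc -- send all installs through a local caching proxy
; (hostname is hypothetical; use your own Artifactory/Nexus npm remote repo URL)
registry=https://artifactory.internal.example.com/api/npm/npm-remote/
```

With that in place, the proxy fetches from the public registry on a cache miss and serves everything else locally, so a registry outage mostly just stops you from pulling brand-new versions.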
I think that's an issue of cost vs. reward. The cost is:
- N developers can't work for X hours
- or the company can't release new versions due to CI dependency on the registry.
- or the registry removes a package you were using
- or the existing package contents change to something malicious
BUT you pay this price very occasionally and if you're a small shop, the cost is often negligible.
On the other hand, maintaining your own mirror has very real costs even though they can be small. One time setup, hardware, sometimes license or hosted service fee, security upgrades. When there's a sponsor maintaining the central repository, having very good uptime and offering it for free, the marginal utility of a local mirror is quite small.
If you're using Yarn, using the offline cache and checking it into source control is a great way not only to avoid relying on NPM always being online, but also to make sure everyone is using the exact same versions of dependencies.
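For reference, Yarn v1's offline mirror is a couple of `.yarnrc` lines (the directory name here is just a common example, not required):

```
# .yarnrc -- store fetched tarballs in-repo and prune ones no longer referenced
yarn-offline-mirror "./npm-packages-offline-cache"
yarn-offline-mirror-pruning true
```

After an install the tarballs land in that directory and can be committed; `yarn install --offline` then resolves everything from the mirror without touching the network.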
I would not recommend using the Taobao registry, though. It is operated by a Chinese company, and aside from the cybersecurity concerns, if it's hosted in China you'll be getting bad latency.
Well, how do you know it's "just" a mirror service, and it is not using a zero-day to exploit your system, by installing a root-kit or copying your code to their servers?
If you're concerned about injection into a third-party package, you should be using `package-lock.json` (or equiv) and integrity hashing your dependencies at install time.
I'll admit I don't know the specifics of how NPM works or if it's even a valid concern. But cybersecurity is becoming much more about a power grab than actual hacking these days. And if you depend on things in China for your American company, you can bet that will be on the table for any future attacks.
I dealt with this problem at a previous job where we had a build pipeline and apps that were very dependent on NPM. To fix it, I used nginx to build a set of two-tiered caching servers within our CI Kubernetes cluster. One used a ramdisk to provide a low-latency cache for the NPM client fetches; the second was a disk-backed cache providing persistence behind the ramdisk-backed one. I had a script that altered the package deps so they were pulled from the ramdisk-backed service, which used the disk-backed service as its backend. The disk-backed service used the actual NPM URLs as its backend.
The result was lightning-fast fetches and no rate limiting.
We needed the two-tiered system because this was Kube and occasionally we would have to rebuild/restart nodes and we didn't want to completely lose the cache when that happened.
You can easily extend this system to handle any package artifacts used in your build process: .deb's, .rpm's, etc.
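A minimal single-tier sketch of the disk-backed layer in nginx might look like this (paths, sizes, and TTLs are illustrative; in practice the registry's own cache headers may also need overriding):

```nginx
# Disk-backed cache in front of the npm registry
proxy_cache_path /var/cache/npm levels=1:2 keys_zone=npm_cache:10m
                 max_size=50g inactive=30d use_temp_path=off;

server {
    listen 80;

    location / {
        proxy_pass https://registry.npmjs.org;
        proxy_ssl_server_name on;                      # SNI for the upstream TLS handshake
        proxy_set_header Host registry.npmjs.org;
        proxy_cache npm_cache;
        proxy_cache_valid 200 302 7d;                  # serve cached hits for a week
        proxy_cache_use_stale error timeout updating;  # ride out registry outages
    }
}
```

The ramdisk tier is the same idea with `proxy_cache_path` pointed at a tmpfs mount and `proxy_pass` pointed at the disk-backed server instead of the registry.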
Here's an explanation from CloudFlare as to the root cause. [0]
> I am the engineering manager for the DDoS protection team and this morning at 11:06 UTC we tweaked a rule that affected one of our signals. The signal relates to the HTTP referer header, and we have a piece of code that looks at invalid referer headers. In this case we tweaked it to include not just "obvious garbage" but "anything that does not conform to the HTTP specification"... i.e. is the referer a URI? If not then it contributes to knowledge about bad traffic.
> So... why did this impact npmjs.org? It turns out that a lot of NPM traffic sends the referer as "install" which is invalid according to the HTTP specification. As NPM is also a heavily trafficked site this resulted in the DDoS systems picking this up and treating the traffic as a HTTP flood and determining that a rate-limit should be applied.
> When we noticed that NPM was seeing an increase in HTTP 429s (as seen on Twitter) we contacted NPM and started an internal investigation. As soon as we identified the root cause we reverted the change, which was at 13:00 UTC.
> We'll note that NPM and 1 other site use the referer for purposes outside the HTTP spec and we'll update our systems to ensure that this does not happen again. Additionally we'll improve our monitoring around changes of this nature so that we can discover impact sooner and roll back automatically.
I came here to see if anyone had insights about what happened to see if I could apply any lessons learned. But all the comments seem to be "get off my lawn" style comments annoyed about people posting jokes in the issue.
Unfortunately, I am unable to resist the urge to add to the noise by complaining about people complaining.
Is it common to rely on a free service like npm for your company's core business? It seems like you would be taking a huge risk by not mirroring anything you need internally.
I believe (based on lots of anecdata) that it's not just common, it's absolutely overwhelmingly often the case at companies of pretty much every size, be it a data scientist using stuff from CRAN for mission-critical modelling or an OS package repo or the like. It appears that few shops have this fully under control.
I strongly recommend keeping a local mirror of your dependencies... however, I've spent years maintaining such mirrors for fairly large projects (incl. Artifactory, Nexus, and one-off OSS setups like Docker registry server), and I think it's easy to underestimate how much work it is.
Whether you use expensive 'turnkey' solutions like Artifactory or keep things simple, there's just a surprising number of ways for a local mirror to go wrong, especially if you depend on it for any kind of third-party dependency compliance control.
Some repository mirrors will also become very large, which means that if you're e.g. running them in a cloud provider the bill can add up. Not really a problem on local hardware but the up-front cost of hardware can be substantial and a lot of startups have little to no in-house IT capability (e.g. the org I work with right now has reached hundreds of employees without having a single system administrator on staff, so as devops person I end up having to do the care and feeding of our recently purchased local hardware as well).
In general I think this is an important and often overlooked issue in modern tech businesses - it is amazing how many technology-centric firms like software startups get to appreciable size relying entirely on outside SaaS/PaaS providers with no real in-house IT operation. This reduces up-front and staffing cost but has a way of coming back to bite you when you hit a certain point. A conversation I've been in before, in reasonably large software outfits, is "we want actual real office phones now, but telephony-as-a-service is real expensive and the on-prem products use scary words like VLAN and QoS in their setup documentation". As someone with an IT rather than software background it's a little baffling to me how this happens, I feel like a combo sysadmin/network engineer would be an early hire. But here I am working for a company instead of running one...
Yes, it is very common. Setting up local caches for package repositories is rarely prioritized high enough to ever get done by IT or the developers. There is almost always something else which is more important to the business.
Anecdata time: I'm in a 300 person (~50 dev) company serving the enterprise space (we have SOC audits). All our NPM and Maven needs are handled through a local Artifactory instance.
I wonder how many requests to npm are an utter waste, given that dependencies often don't change thanks to the lockfile.
In Travis there are some (not too obvious) caching mechanisms that in many cases avoid this and speed builds up a ton.
I wonder if we would benefit from a review of how commonly the cache is actually configured, and from educating people further on its use. I'm sure other CI systems have similar capabilities.
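For Travis specifically, the documented shortcut for Node.js projects is a one-liner; other CI systems have equivalent cache keys:

```yaml
# .travis.yml -- reuse the npm cache directory between builds
language: node_js
cache: npm
```

This caches `~/.npm`, so packages pinned by the lockfile are restored from the cache instead of being re-downloaded from the registry on every build.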
I've been considering checking node_modules into source control for some time now; has anyone else done that successfully? There would be a variety of benefits:
1. Eliminate redownload of packages on every CI build
2. Reduce the number of gigantic IO operations from unpacking the tens of thousands of files sitting in node_modules.
3. Better security: checked-in code can be audited more easily than code downloaded fresh on every single CI build.
yarn’s PnP system is promising for the zero-install paradigm, but it doesn’t seem quite ready yet (so many packages don’t seem to get their dependencies right).
Checking node_modules into git was the preferred way of working with dependencies in the Node community in the early days. Way before lockfiles, way before shrinkwrap and friends, this way one could use `git diff` and `git bisect` to find out which dependency upgrade broke their application code. Several prominent community members and early adopters of Node advocated for this idea: they loved treating dependencies as an integral part of your app, having good familiarity with the third-party code you're using, etc.
However, early adopters of npm in the frontend world (back in Browserify and Require.js days) didn't like the practice (notably, because many parts of the dependencies contained node-only code, tests and scripts that were needed for building dependencies, etc.), and started putting node_modules in .gitignore. At the same time, Node people started to use other means to manage dependencies for reproducible builds: namely, private npm registries, dockerfiles, etc.
Over time both frontend and Node communities recognized the need for lockfiles, which we eventually got with Yarn and later versions of npm.
Yarn v1's "offline mirror" feature is explicitly meant for this use case. I wrote about it a few years ago, and have been successfully using it since then:
Did that for a small Ivy-based project (Ivy is a simpler Maven replacement) that had security implications.
We had a monthly task for one developer to manually upgrade one or two dependencies and commit the changes after testing (Java libraries tend to move much more slowly than Node ones).
Helps if you only have one platform you're developing on and deploying to (e.g. x86-64 Linux). If you develop on macOS, some packages install Mac-specific binaries.
That's why npm has `npm install --ignore-scripts`. It downloads the dependencies but doesn't run the postinstall scripts (which either download pre-built binaries or run a compiler locally).
In the early days of Node (circa 2011-2013) we used to do the following:
1. Run `npm install --ignore-scripts` first.
2. Check the node_modules folder into source control.
3. Run `npm install` again, this time without the flag.
4. Put all extra files generated by install scripts into .gitignore.
This way the third-party code (at least the JS part of it) was in the repository, and every developer / server got binaries built for their own architecture.
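The steps above, as a shell sketch (the ignore pattern is only an example — what the install scripts actually generate varies per package, so you'd inspect `git status` after step 3 to see what to ignore):

```
npm install --ignore-scripts          # 1. fetch JS sources only, no postinstall
git add node_modules
git commit -m "vendor dependencies"   # 2. third-party JS is now in history
npm install                            # 3. re-run with scripts: builds local binaries
echo "node_modules/**/build/" >> .gitignore   # 4. example pattern for compiled output
```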
It wasn't bulletproof, though, since:
1. The scripts could do different things anyway
2. More importantly: one could upload a new version of a library to npm under the same version number.
These days, lockfiles and stricter npm publishing rules have largely eliminated both issues, and updating dependencies no longer produces 10k-line diffs in git history.
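The "same version, different content" problem in particular is what the lockfile's `integrity` field addresses: `npm ci` verifies every downloaded tarball against a pinned hash, so a republished artifact fails the install. An abbreviated (and deliberately truncated) package-lock.json entry:

```
"node_modules/left-pad": {
  "version": "1.3.0",
  "resolved": "https://registry.npmjs.org/left-pad/-/left-pad-1.3.0.tgz",
  "integrity": "sha512-..."
}
```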
And if you do have more platforms, why not just check in one node_modules directory for each?
This idea of redownloading all packages all the time from external sources (without even having a fallback plan) seems completely brain-dead to me. Didn't people learn from leftpad-gate?
> And if you do have more platforms, why not just check in one node_modules-directory for each?
Now you have to keep them in sync or risk unreproducible build failures. Also, if you update the binary dependencies on, say, macOS, you still need an x86-64 Linux machine to rebuild the Linux dependencies.
Not saying it's impossible, but without a proper process (e.g. a build server being the only place that updates dependencies) this is going to be painful.
I was (incidentally) just looking for a way of running an npm repository. Apparently it is not as simple as running a web server with a manifest (which is the case for basically every other package manager out there). Is there a reason for that? Is npm's approach somehow better?
The thing with using web technology for distribution is that it’s easily accessible and, crucially, cacheable in-line.
NPM started off as a CouchDB app, and you used to be able to keep a local mirror of the full repository by running a copy of CouchDB and setting up one-way replication (not sure if this is still possible).
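For the record, that replication setup was a single call to CouchDB's `_replicate` endpoint; a sketch (the `skimdb` source URL was the public replication endpoint at the time, and as noted above it may no longer work):

```
# One-way, continuous replication of the npm registry into a local
# CouchDB database named "registry" (assumes CouchDB on localhost:5984).
curl -X POST http://localhost:5984/_replicate \
  -H 'Content-Type: application/json' \
  -d '{"source": "https://skimdb.npmjs.com/registry",
       "target": "registry",
       "continuous": true}'
```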
This is one of the reasons why, in Go, my team vendors our dependencies. For any service that seeks stability and the ability to deploy any time, removing networked build dependencies is important.
Vendoring is checking your dependencies into source control alongside your code. You can definitely consider that a local cache. The next step up is running your own proxy server (essentially a package mirror). Beyond that, use a service like Artifactory that does the same.
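In Go modules, vendoring is built into the toolchain; a minimal sketch:

```
go mod vendor          # copy all dependencies into ./vendor
git add vendor
git commit -m "vendor dependencies"
go build -mod=vendor   # build using only the checked-in copies
```

With a vendor directory present, recent Go versions use it automatically; passing `-mod=vendor` just makes the choice explicit.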
We should call it a backup. Calling it a cache puts people in mind of size/hit ratio tradeoffs; what's needed is zero tolerance for loss of mission critical code.
In the last company I worked for, we had a pretty standard build pipeline for a web application. I was a bit shocked to see how many different package repositories we depended on for every build: NPM, PyPI, Alpine packages, Docker Hub...
A single failure in any of those centralized systems (which we don't even pay for) and builds fail.
It 100% is, unfortunately. I don't recall this happening in recent history, but 3rd-party services have broken CI/CD pipelines and production pushes before (e.g. pip broke a few weeks ago, and their own pipeline for deploying changes was blocked by the bug).
It's very easy for these kinds of dependencies to creep into the build process. If the worst-case cost of not being able to create a new build outweighs the cost of rearchitecting your build process, then it's something you should seriously consider. On the plus side, it also brings additional benefits like faster builds and resilience against packages being unilaterally removed.
I'm always surprised that npm, a for-profit company with a lame business model, is graciously serving redundant package requests millions of times per day to everyone's CI/CD flows.
One day this is going to happen for real, and it will be because the npm org decided to charge for the API requests made by `npm ci`.
Isn't mirroring still a thing? I am not saying this as the old creepy guy (well, a bit), but seriously wondering why a registry like npm doesn't have tons of geographically spread mirrors. Packages can easily be signed and mirrored; that shouldn't be complicated.
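On the client side, at least, switching to a mirror or proxy is just a registry setting; a sketch with a hypothetical mirror URL:

```
# Point all installs at a mirror (URL is made up for illustration):
npm config set registry https://npm-mirror.example.com/

# Or override per invocation:
npm install --registry https://npm-mirror.example.com/
```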
1) Glad we have a Nexus proxy in front of npm... [a]
2) Cloudflare strikes again; this one company is simultaneously making the internet better and worse. I'm constantly blocked by Cloudflare for things that should not be blocked, which is extremely frustrating. Then when you complain to them they throw their hands up and say "the owner hasn't configured their site for that". Ugh.