I'm not going to say where and I'm not asking for interested parties (we're not currently hiring anyway), but where I work (as a kernel/OS developer), package maintenance and management has been an ever-increasing burden to the point that we've tried hiring somebody specifically for the role. Unfortunately, it seems like this is a very specialized skill set that the vast majority of developers seem to find boring, uninteresting, and drudgerous, and finding someone qualified and interested has been exceedingly difficult. How would someone/a company go about finding people that are actually interested in this kind of work? There are clearly people who take on this task (some may even enjoy it!), and many who even do it for free for various open source projects, but I've no clue how to recruit or connect with any that would be willing to do this for a job. Any tips for finding or connecting with such folks would be much appreciated.
Attending in person won't be possible, but I'll have to keep an eye on this conference and see if they provide a way to post/share such opportunities.
I think you might have more success advertising to sysadmins or devops engineers rather than developers. It's precisely the negative attitude, or at least apathy, towards packaging among developers that makes it difficult to find employment where packaging is valued. On the other hand, a lot of packaging-related work has traditionally been the remit of the UNIX/Linux system administrator, and that profession is slowly moving towards automation as 'devops' - but it's still fundamentally the same concepts.
Also, you could sprinkle in some 'optional but desired' skills and experience relating to packaging into your advertisements, anything from CMake and GNU Autotools to Nix and Podman. That would be enough to pique a packager's interest and differentiate it from the 'write-only' coding jobs!
I will second this advice; both of the engineers I have worked with who actively enjoyed being tasked with packaging software identified as systems operations / devops / site reliability engineering.
These are both very useful comments; thank you! In your experience, do the devops/sys ops engineers typically have useful coding experience? In our environment, porting a package to our OS may require a little, or sometimes a fair bit of modification to make it work and we often need to be able to read/understand the upstream code in whatever language it's in and change/add to it. Hence our (or at least my) assumption that we'd need a "developer."
A good sysadmin may not be able to do graphics programming, but they have plenty of practice in the sort of programming used in package management, as well as a fairly good understanding of operating systems.
They're not going to grasp everything as well as somebody with a development background - things like learning NEW programming languages on the fly and upstreaming changes - but I don't think it's totally outside of what they can learn, and hiring a developer would present just as many challenges.
I might look more for somebody who bills themselves as “devops” or “sre” than sysadmin though since those job descriptions tend to effectively filter out those who can’t program at all. If anything I see those people as being software developers that specialize in systems administration.
In my experience, a lot of people who started as developers 20+ years ago moved on to system administration, and have retitled themselves SRE/DevOps over recent years.
I don't want to talk about ageism, or hiring biases, but I'd imagine if you found anyone who had an SRE-role and a long history they would be extremely likely to have had serious coding experience.
I've run into a few "young" devops engineers, and they tend to be more familiar with k8s, CI, and similar, rather than actually developing. But I guess that could well come down to the random people I've met; I wouldn't like to generalize.
> In my experience, a lot of people who started as developers 20+ years ago moved on to system administration, and have retitled themselves SRE/DevOps over recent years.
Usually they don't retitle themselves, companies do.
It seems the whole IT industry just collectively decided to rename the sysadmin role to devops when using cloud and container technologies, regardless of whether they actually use the DevOps methodology or not.
In my experience creating packages for an operating system package manager or Docker images, a DevOps or sysadmin mindset is more beneficial than a developer mindset. For those packages, the goal is to follow operating system or deployment best practices.
I’ve not created a Python or Rust package or something like that. If you’re creating that type of package, my guess is that it may help a lot to hire a developer who loves to think about the most “idiomatic” way to package your code.
Edit: if your package will be published and used by customers, then you might look for DevRel skill as a bonus.
> In our environment, porting a package to our OS may require a little, or sometimes a fair bit of modification to make it work and we often need to be able to read/understand the upstream code in whatever language it's in and change/add to it.
What one needs to understand will vary so much depending on the upstream package and how it's normally built (what it expects) that I think even someone with experience in packaging will sometimes get stuck or need to seek help or advice. The more substantive the porting work is, the more that will be true.
That said, this kind of thing is pretty common for Linux distro packagers, and the weirder the Linux distro is, the more true that is. It's not at all unusual for Nix folks to add special build params, patch upstream software, or mangle binaries with sed in a post-build step to get them to understand NixOS' unusual filesystem structure, or to build offline in the restricted sandbox.
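To give a rough flavor of what that looks like in practice (this is a made-up sketch, not from any real package - the package and script names are hypothetical), an override that patches an upstream script to use store paths might look something like:

    # hypothetical sketch of the kind of fix-up described above
    pkgs.somePackage.overrideAttrs (old: {
      postPatch = (old.postPatch or "") + ''
        # upstream hard-codes FHS paths; point them at Nix store paths instead
        substituteInPlace scripts/launcher.sh \
          --replace "/usr/bin/env bash" "${pkgs.bash}/bin/bash"
      '';
    })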
So I'd reach out to contributors to BSD ports systems, especially if they added ports for Linux-centric software, and contributors to really weird Linux distros (NixOS, GuixSD, GoboLinux, Void Linux, etc.).
I've never been in charge of hiring decisions anywhere, so idk what the overall landscape of DevOps job candidates looks like. But I've done DevOps, and I and my friends in roles associated with that title can definitely do at least some coding and scripting in a way that I think suits most packaging tasks well. If a DevOps person has a CS degree or previously worked as a developer, they can probably figure out what they need to when it comes to patching upstream software to get it to build. Even if they don't, if they do some recreational programming, they can probably push through.
As a sysadmin I used to do package management for different OSes, but this is not necessarily the most rewarding job. Atm, most engineers doing that are juniors, or started doing it as juniors and never cared to explore anything else / move companies.
I'm not 100% convinced that profile works either. I tried multiple times to hire for my team, which works on a combination of library maintenance, build tool development, and package management. I even tried a junior developer with an open mind. The skill set needed for this job is sadly too broad. One needs to know the platform/language and the tools to package software. And don't get me started with CI etc. Sysadmins, and what most people understand DevOps to be, are not the right kind of people - not to say there are no such people among that group.
The most appropriate group of folks may be those that already maintain software for the OSS community. The skillset more resembles trades, where time and experience matter substantially more than any intrinsic talent or overall intellect for being effective at the job.
Honestly, sometimes I wonder if this is perhaps an under-considered area for Generative AI applications given how much drudgery is involved yet precision tends to matter.
Maybe… I mean, yes, at the bottom most package systems are a zip with some kind of manifest, or a manifest which describes where and how the lib can be built and from which sources. If an AI could deal with the fine details of these systems… I mean, Debian packages are harder than Arch packages, or at least that is my experience. There are packages which are super easy to work with, like native Python, Ruby, and Java. But mix in C extensions and boom. The CI can get complicated.
There's a lot of thankless, mundane work for the majority of projects that seems rather routine from what I've seen described thus far across multiple decades, so this seems quite ripe for some form of automation, setting aside projects that keep requiring changes constantly. C extensions, interop across languages, and ABI considerations are certainly much more complicated and require a lot of involved work, but 90% of the issues even power users run into are things like "the lxml library I installed won't get detected by package X when I run autoconf, please help."
There's nothing sustainable as an industry about something like software dependency management and versioning being so critical for things "just working" while people stick their heads in the sand saying "not my problem" as a rule. I'm so tired of solving the same problems that were already solved in the past because developers want to work on "more interesting things" and do the bare minimum to get stuff compiling and running again, regardless of the package format. In fact, Docker may have made this situation even worse, with so many Dockerfiles out there that will basically never build again. This is part of why I'm interested in the Nix ecosystem: there's so much work being done to make software installations repeatable and easier to manage. Unfortunately, it's squarely aimed at developers and is way, way too complex for most sysadmins to put the time into understanding, especially when they're as overwhelmed as everyone else with so many tasks.
I'm 100% with you there. I also think that the problems described are solvable on some levels, if not all. I remember the time when brew took off on macOS. I used MacPorts at the time. The reason people preferred brew was that installs were way faster. The reason was simple: brew based everything on system libs, whereas MacPorts, similar to Nix, maintained packages for all libraries etc. It took way longer to install a package with MacPorts. I'm running NixOS on my private machine (I need to work with macOS at work, as the only other option would be Windows) and it is sadly way too complicated. Nix flakes make this harder and easier at the same time, since nix channels had a way of breaking as well. I switched to NixOS because I had issues with running the Nix package manager on other distros (including macOS). It either behaved wrong when installing GUI packages or, in the case of a Rust package I maintain, had issues with glibc, which nearly drove me nuts.
As to Docker: yes, I also think it made stuff worse in the majority of cases. What people fail to grasp is that Linux, be it a VM or a container, is just the kernel, and that some programs and libraries have dependencies on the kernel - the reason we have distros in the first place. So not all containers will run the same on each host, because Docker runs on the same kernel as the host machine, or as the VM it's installed in (on macOS). Issues are rare but can happen. Also, the number of base images: "Let's use Alpine because that means I build a small container image"… No, it means you build on top of an Alpine release with all its installed system libs etc., and that may or may not work with your software. And stuff like that.
> One needs to know the platform/language and tools to package software
How long is acceptable for your packagers to struggle with learning a build toolchain and its quirks? How many language ecosystems are you working with at your company?
> And don‘t get me started with CI etc.
Respectfully, I'd like to get you started on CI. I kinda feel like the landscape of CI platforms right now sucks, and a lot of my daily work is integrating with them. I'm curious what your gripes are!
I meant not to get started on the issues with CI, but on the complexity of it and how people solve it. There is the simple shell-script approach, which mostly runs on this one CI system and nowhere else. Or the "we use a build system which abstracts all kinds of things away" approach, which needs a good chunk of knowledge to understand why it's there in the first place. Developers from camp shell script, with the desire to keep it easy, ask: "why do this, etc. etc." It's not that they're wrong; it's mainly the ignorance of other concerns, like reproducible builds and platform and system tool abstractions, that makes these setups more complex. To expect some developer who never dealt with a multi-arch/platform product to just willingly jump into that complexity is unfair.
I also don't like the current CI landscape. I still run a Jenkins at work because it gives me the most freedom and I know it best (that's sadly what keeps me off other systems - at least I know what parts not to touch). We use Gradle, which I really liked 5 years ago. We also use it for other non-Java systems since it is pretty adaptable. But the recent versions bring more and more weird changes, which make it harder and harder to maintain custom plugins for it. The rules for writing build plugins are more strictly enforced by the compiler, which, yes, is good for Gradle, but hard to keep up with, as one needs to read up on and understand why one can no longer access a file input outside of a build execution phase, for example, even though it worked fine for the last 4 versions. And then there is more.
> Unfortunately, it seems like this is a very specialized skill set that the vast majority of developers seem to find boring, uninteresting, and drudgerous, and finding someone qualified and interested has been exceedingly difficult.
The reason most find it boring, uninteresting, and drudgerous (I would also add "frustrating" to this list) is because one has to build on top of decades of bad decisions and quick hacks, over and over again. Which also suggests a way to make it interesting: make it about fixing the underlying problem (even if in a limited context) instead of just going along with whatever is available. For example, you can attempt to automate producing various packages from some "sane" common metadata.
I'd be interested in the other direction: I accidentally stumbled into a position where I get to do packaging work, which is one of my favorite parts of the job. But like... that was an accident. As much as I would love to do more work in the same vein, I have no clue how I would intentionally seek it out if I were job hunting.
The large open source companies do a lot of packaging, probably some of the smaller open source consultancies too, maybe try some of the employers listed here:
My packaging experience is limited, but I found the docs were quite poor. It took perseverance & researching install error messages to create an RPM that worked. So you might consider hiring for people willing to do that kind of work.
The good news is that I learned a tremendous amount about where programs and files should actually be installed through packaging. It was surprising to me how many conventions package managers enforce. I was also surprised how poorly software followed those conventions when installed without package managers. Packaging improves user experience, developer experience, and quality. Can’t recommend it enough!
> It took perseverance & researching install error messages to create an RPM that worked. So you might consider hiring for people willing to do that kind of work.
At its toughest, packaging is a long series of exchanges: one build error for another. It is a stubborn, patient, rabbithole diving, well-I'll-be-damned, yak-be-shorn kind of work.
> The good news is that I learned a tremendous amount[.] [...] Can’t recommend it enough!
And it's totally worth it, both for the packager and their users. :D
At my last employer, we were called Systems Engineers. I created tooling to switch us off of a custom packaging solution to using Debian packages. The other part of my project was tooling around Ansible for configuration management across a fleet of over 48,000 VMs deployed at over 6,000 locations, with different locations requiring slightly different configurations.
I also helped build a web based system to track approvals and schedules for all the changes.
I think Dev Ops or Systems Engineers would be the most used title for this type of work.
Go to where the packagers are; the open source distros and related companies. Some of the distros like Debian have lists of consultants, or a jobs list:
> package maintenance and management has been an ever-increasing burden to the point that we've tried hiring somebody specifically for the role
I do packaging work at my current job, and it's one of my favorite tasks there. I currently maintain a small collection of private Nix packages for macOS and Linux, and last week spun up a private Homebrew tap (for Casks only!) on a lark. I have some experience packaging things in other formats, too, and I'd be happy to pick up a new one if I had a use case.
But it has literally never occurred to me to look for a packaging job specifically! I didn't know roles dedicated to packaging were available anywhere.
> How would someone/a company go about finding people that are actually interested in this kind of work?
Even when I'm happy at work, I'm generally interested in keeping an eye on jobs where Nix skills are desired. 'Nix' itself is hard to search for (because it's ambiguous with *nix as an abbreviation for Unix-likes), so I also search for 'NixOS' and 'Nixpkgs' on job search sites sometimes. I don't think I've ever once seen a result for 'Nixpkgs', so any listing that mentioned that would stand out, and a job explicitly titled something like 'software packager', 'software distribution engineer', 'package management engineer', etc., would definitely catch my eye.
Job boards and development mailing lists for large package collections (Linux distros, BSD ports collections, and other collections like Conda) would also likely have some interested people! (In the case of the latter, just make sure that job posts are in line with the norms of the list.)
Communities surrounding advanced build systems (Bazel, Buck, Pants, etc.) would also probably be good places to look.
I'd not seen that project; definitely something I'll be looking at, thanks. Unfortunately, for what we're doing, it doesn't look like this will work as is... certainly worth some investigation though.
Good lord. I'll never understand why people promote this project, since its main feature is that he's doing it wrong. The complexity he's trying to avoid exists for a reason, not because people like complexity.
It depends on what you want as a result. Do you want the shortest path to "dpkg -i the-result.deb" working on your server, or do you want a package complying with debian standards around behaviour and file locations, with packaging/source changelog split, potentially upstreamable? Because the second one is a lot more work, but few people actually need that.
The second one is more work and it's what you want. Because it gets stuff into the main archive that everybody benefits from in the future. Otherwise your project is a series of hacks that create an unmaintainable mess.
No, sometimes you really don't care about that. Some software will always stay internal. Some is so specific that it will never get to a distro. Some is just experimental and providing standards-compliant packaging for it is a waste of time. And that's when FPM can be useful if you want to go that way.
Yeah, FPM is great not for official/public packages (or libraries) but for "for some reason our $org installs internal (or patched, or just self-built OSS) stuff via package manager" and most often even all the versions, like /opt/foo-1.0, /opt/foo-1.1 and so on. It solves this problem very well.
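For the curious, that versioned /opt layout is basically a one-liner with fpm (the name, version, and paths here are made up):

    # sketch: turn a build output directory into a deb installed under /opt/foo-1.1
    fpm -s dir -t deb -n foo -v 1.1 --prefix /opt/foo-1.1 -C ./build .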
I'd say it solves it passably, in the simplest and most minimalistic way possible to still call them packages, technically.
I question how one gets here, nay, the 'problem' we're solving.
Presumably there's config management involved already capable of shipping bits, i.e. the repo definitions to find these hackjob packages, or delivering them directly.
If this is what you're willing to invest, go Slackware at it - use a tarball. You aren't gaining anything notable by throwing an archive at a packaging format.
The meta that FPM ignores is what provides packages their value! If changelogs were at least a part of it, I'd be a bit more accepting.
Otherwise, I see it mainly as misappropriation. Best case, naive and well served; worst case, giving the impression of better distribution than actually exists.
An organization that does packages, but leaves this as the answer, fails itself and its members. Over 90% of the purpose is dutifully ignored/not standardized.
It is also a branding problem, imo. Part of the reason I prefer software from distro repositories is because those conventions the distro maintainers enforce help ensure that packages won't do stupid things and break my system.
Especially with DEB and RPM, where the packaging format supports arbitrary hooks that run as root, this is a big deal. High quality packages that meet distro standards will inspire confidence in customers' sysadmins. Substandard packaging may do disservice to your core software's brand, if your actual software is more solid and thoughtful than the packaging.
A thousand times this. As someone who has been fascinated with package management for a long time, my first reaction to FPM was horror. FPM is by and for people who resent package management and want to avoid actual packaging work, not people who are passionate about doing it well.
You want people who, when tasked with creating a package for a distribution format that is new to them, look not to tools like this, but to the conventions and standards of successful distros which use that format as examples.
You want 'native' engagement with those packaging formats, not hacks like FPM. The parent commenter's suggestion is an excellent one.
Please heed this - FPM is basically versioned tarballs. It's only useful for the most simple/contrived scenarios.
Actual package specs require effort for good reason. It's not just an archive... but interdependence, steps to perform on (un)installation, changelogs, and so on.
There are some helpers already provided, e.g. RPM macros.
Sure, they're esoteric, but show me a specialization that isn't. Refer to the Fedora packaging guidelines and enjoy life.
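To make that concrete, here's a deliberately skeletal spec sketch - nothing here is from a real package, and the build sections are omitted - just to show where dependencies, install-time scriptlets, and the changelog live:

    Name:           foo
    Version:        1.0
    Release:        1%{?dist}
    Summary:        Example internal tool
    License:        MIT
    Requires:       openssl

    %description
    Hypothetical example package.

    %post
    # steps performed on installation, not just file extraction
    /sbin/ldconfig || :

    %files
    %{_bindir}/foo

    %changelog
    * Mon Jan 02 2023 Jane Packager <jane@example.com> - 1.0-1
    - Initial package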
It's for sure a ton harder to use (IMHO) because it mandates the creation of a manifest YAML, versus "fpm -s dir -t deb my-directory && echo tada", but not having to deal with Ruby (or Docker) can make it a better fit for several circumstances.
It might be helpful to know that this niche is mostly organizing itself under the "release engineering" term. So explicitly putting up job offers for a release engineer might help.
Like you said, there are many people who do this for free for open source projects. Why not try to figure out the reason they do so - and for free at that? With that reasoning, I believe you may be able to find some changes to the "job", the "company culture", the "expectations from management", or something else that could help you attract more candidates.
I've tried building the basics of a package manager before for fun, and found it a much harder problem than I anticipated (as with most things! that's why I experiment by trying to build them, to learn! And sometimes I actually build them successfully [1]). So I do agree with the sentiment of finding someone qualified to actually build it and not just a generic dev.
Depending on your needs, I'd first try to not need the expertise in the first place (a generic public package manager is much more complex than, say, a tightly controlled plugin system), or try to leverage existing package managers/tools.
If that fails or isn't possible, and you cannot find a proper full-time dev, many open source devs working on these topics might be up for consulting, so I'd reach out to them to try to set the general guidelines/direction and then use in-house talent to fill in the gaps.
I find it's a mildly common skill among system administrators/SREs. I personally picked this up, and developed it, in these circles across many companies.
Good ones are a bit rare... as this tends to go
They tend to write automation/utilities that help keep the lights on - packages are an important part in distributing that.
Literally go to nixpkgs/nixos areas and ask for who's looking for a job? Well, those folks are going to want to do Nix packaging, so... ymmv. Also, my other comment about Nix in this thread has gone through a rollercoaster so uh, that's sure interesting. Hard to not raise an eyebrow at.
I’m a maintainer at Arch Linux and have been in FOSS for long enough to know the landscape and who’s skilled looking for work. Send me an email with your needs and we can talk.
As others have said, what you’re hiring for is more of a specialised sysadmin role. I agree with leaving the software engineering label behind.
Some large tech companies dedicate multiple teams to package management and installation. It's a unique enough problem that you may not want developers to focus on it, better to roll it up to a dedicated team. They usually work closely with release operations.
> There are clearly people who take on this task (some may even enjoy it!), and many who even do it for free for various open source projects,
I feel like I missed something obvious here. Doesn't this answer your own question? I've worked for companies before that found great talent for very unique skills by contributing to similar projects and reaching out to frequent commit authors.
This attitude is exactly why hiring sucks. You and everyone else want to find your purple squirrel, your Python developer with more years of experience in Python than Guido van Rossum.
Why is it so hard to accept that the best candidate you will ever get may be someone with relevant but not _exactly_ the experience that you want?
Having been on both sides of this equation, I don't think I'm being unreasonable, and am well aware of the fact that good candidates and the "right" hire will rarely be the ones that check off a list of random attributes plucked from the thin air between the pointy-haired boss's ears. I'm not looking for a purple squirrel, and I'm not looking for someone with 25 years of experience in Rust or the like. I'm looking for someone who has both an interest in, and a capacity to do, a somewhat unique job we need done. We've found plenty of qualified people, and hired some, many of whom tend to be fresh graduates with zero non-academic experience. What I've had a difficult time finding, or even figuring out how to look for, are the people that have the interest or desire to work on this particular type of niche problem. Few developers out there seem to, and we've found even fewer, so the package porting/maintenance/management part of our job continues to be a burden for the whole development team. Thus my question. And thankfully, some insightful replies suggest that I'm mainly looking in the wrong category of candidates. All of which has nothing to do with matching an overly strict list of experience requirements.
Given an upstream project with good practices, packaging is fairly easy these days, since most of it is automated. Most of the work is around reviewing upstream changes and fixing any issues found.
This might be the most mundane topic I've found myself naturally extremely excited about.
If you're a developer working full time in only one or two languages, you may never experience just how good/bad you have it.
When you do, it's really eye-opening.
Every time I transition to a new language professionally it can be like opening a bag of Bertie Bott's Every Flavor Beans when you look into the packaging story.
* Go's binary release story is great but the GOPATH method for dependencies is annoying
* Elixir has lockfiles and built-in package docs but the release story deviates too much
* JavaScript, now that everything has settled on npm, is a delight, but the lack of a stdlib, painful local aliasing, and extremely heavy node_modules folders can be off-putting
* Python just sucks (let's hope Poetry can bring the promised land of deterministic builds)
As an old school server side Java dev, I boggle at the hoops folks jump through to get stuff done when it comes to packaging.
We had two large dependencies: the JDK, and the app server (our container, like Tomcat). Maven, that we’ve had forever, for library dependency. The resulting War files were effectively self contained.
JDKs were trivially installed, explode it somewhere (anywhere) and set JAVA_HOME. App servers were the same. Self contained directory trees. Install as many as you like, just change the port they used. Or just add another War to the one you have already. Pros and cons.
War files were essentially static linked, carrying all of their libraries. Self contained blob you could drag and drop into a directory and watch the server shutdown your old version and fire up the new one.
Sure, we had our bit of DLL Hell, rarely when building the app. I’m certainly not going to suggest we never had class loader issues.
And enterprise Java has a notoriety all its own, but packaging was pretty low on the list. But we didn’t need Docker, or dedicated VMs or anything like that. Our OS dependencies were all handled by the JDK. Java didn’t have any shared library concerns. I honestly never questioned it, either it was statically linked, or just entirely self contained outside of the C library. Everything else was in Java. OpenSSL, for example, was never an issue.
I’m on board though, I loathe packaging. I’m not a big fan of arcane parameters passed on over the council fire and tea. Just never been my drive.
It definitely does have such concerns, but the community has just accepted that the JDK will never help with this and so every JAR that uses a native library hacks around the lack of proper support in its own unique and special way. Usually some project-specific ad-hoc code that extracts native libraries into some directory in the user's home directory.
Is the "cache" versioned properly? Maybe.
Can you control where it is? Maybe.
Code signing? Probably not.
Can you ship only native code for the machines you actually care about? Maybe.
Does it work when you run as a dedicated server UNIX users that doesn't have a home directory? Nope.
Hydraulic Conveyor (https://hydraulic.dev/ - disclosure, my company) fixes a lot of these problems automatically. But it's not like there are no problems to fix, and of course writing a library that uses native code is still a big pain. It's a major blind spot of the JDK unfortunately, and the community has never risen to the challenge of fixing it.
I'm currently working on Java bindings for my own native library. (I'd like to use the Panama FFM API, but unfortunately, my first user is stuck on JDK 11 for now, so I'm stuck with JNI.) Do you have any recommendations on how I should handle the packaging and library loading problem, so I don't make things worse for Conveyor (though my first user isn't using that)? Any reference implementation that I should look at and copy?
So it's really easy. Just run System.loadLibrary("foo") before doing anything else. On developer setups that line will throw because the JVM won't find the shared library, so then you can go down the road of extracting things to the homedir or whatever else you want to do (or do it manually and check in the results). Deployed installs will find the library in the right platform specific directories inside the app package and pass.
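A rough sketch of that pattern - class, library, and resource names are hypothetical, and error handling is deliberately minimal:

    import java.io.IOException;
    import java.io.InputStream;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;

    final class NativeLoader {
        static void load() throws IOException {
            try {
                // a packaged install finds the library via java.library.path
                System.loadLibrary("foo");
            } catch (UnsatisfiedLinkError e) {
                // dev fallback: extract the bundled copy into a cache dir and load it by path
                String libName = System.mapLibraryName("foo"); // libfoo.so / foo.dll / libfoo.dylib
                Path cacheDir = Path.of(System.getProperty("user.home"), ".cache", "myapp");
                Files.createDirectories(cacheDir);
                Path target = cacheDir.resolve(libName);
                try (InputStream in = NativeLoader.class.getResourceAsStream("/native/" + libName)) {
                    Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
                }
                System.load(target.toAbsolutePath().toString());
            }
        }
    }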
* Rust is awesome, and nothing can beat it. Cargo (the package manager that ships with Rust) is the best thing since sliced bread! Every other language can suck a bag of burritos.
I'm sure we can all agree that Cargo is the best thing since sliced arrays, but I've finally reached the point where we have enough different Rust projects at work that rebuilding the world from scratch all the time and having ~40 GiB `target/` directories scattered around the place is getting old.
I'm not meaning to criticize Rust or Cargo here; they're both doing their jobs just fine. But I do find myself craving a different compromise for much of what I/we use Rust for today. And I'm really hoping that Wasm Components (and WASI) will be that different compromise — e.g.:
- Don't rebuild an entire HTTP stack from scratch for every tool that happens to use HTTP.
- Therefore most projects won't have enormous `target/` dirs.
- Reuse components built in CI for Linux directly on a Mac dev machine (cross platform).
- Mix and match new processor architectures available in AWS without building two versions of everything.
- Also reuse some components in the web browser (yes, we actually have real, boring use cases for this).
- Wait, I'll be able to do all of this and still be writing Rust? Shut up and take my money.
I do wonder how much sharing could be done at a project level for libraries. Would it be possible to have at least debug builds with matching compilation settings be compiled down into a shared place?
Definitely. Much of this can be mitigated by using a shared target directory, which is at least somewhat supported in Cargo. I should probably start doing that and promote it at work if I don't run into any problems.
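For anyone who wants to try it, the knobs I'm aware of are the CARGO_TARGET_DIR environment variable or the build.target-dir setting (the path here is just a placeholder):

    # ~/.cargo/config.toml (or a per-project .cargo/config.toml)
    [build]
    target-dir = "/home/me/.cache/shared-cargo-target"

    # or, equivalently, for a single invocation:
    #   CARGO_TARGET_DIR=/home/me/.cache/shared-cargo-target cargo build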
Some clever garbage collection would help, too, but I imagine different people would have different and very strong opinions about how that should work.
The fact that two projects that use the same library may enable different features also complicates things. Again, a lot of this can be mitigated.
I'm looking for a step change for application development, though — i.e. not 5 minute build reduced to 2 minutes, but 5 minutes reduced to 5 seconds (and a correspondingly tiny target directory). That's what excites me about WASI in this context at least.
Leaving WASI aside for a minute, I do wonder how much more could be saved in local disk space and compilation time across projects (and hosts, a la sccache) if this was a high priority goal for the Rust project. E.g. even if the MIR for a crate with two different sets of feature flags enabled ends up substantially different, would they still compress well against each other if a lot remained common?
Once upon a time I briefly looked at symlinking rubygem install dirs from global to project-specific directories (because a project-specific $GEM_HOME avoided functionally all the problems with just about any other approach, and is still what I use today). Functionally it worked, it just needed some tooling to make it easy.
It sounds like what you might want is a shared global dir that uses hashes of feature flags to separate crate installs in the same target directory, then some after-the-fact GC to hardlink matching files across different builds of the same crate. Then you can symlink from there into your local project target.
FYI - Go no longer uses GOPATH for managing dependencies since the official module system was released in Go 1.11. There's still plenty of tutorials and the like out there that mention GOPATH, but it shouldn't be needed any more for basic scenarios.
All that old information is terribly confusing, but it's not entirely useless. GOPATH is still critical, as it's how you can run fully-offline builds of Go programs and override dependencies without changing the root project itself.
This isn't accurate: `go mod vendor` will dump the dependencies into a vendor directory these days, so you can build totally offline. There's also a mechanism for renaming dependencies (so you can replace them, basically).
Not quite: vendoring and the 'replace' directive in go.mod files (which I presume is the mechanism for renaming dependencies you are referring to) both require modifying the root package.
With GOPATH, I can package a single library and use that package to fulfil the dependency on the library of any other Go application offline. Conversely, with the vendoring mechanism, the Go tooling will need to download (or copy from cache) the library when you originally create the vendor directory or when you add a new dependency to the root project.
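For reference, the module-world equivalents being compared here look roughly like this (module paths are made up):

    # fully offline builds via a checked-in vendor directory
    go mod vendor
    go build -mod=vendor ./...

    # go.mod: point a dependency at a local fork (note this edits the root project)
    replace example.com/somelib => ../local-fork/somelib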
The Python world won't improve as long as programmers add dependencies on libraries written in other languages and (here's the important part) attempt to compile those packages themselves within a Python build process. Poetry is a nice chapter in the Python package definition story, but it is only a tentative step to fixing the wider problem.
I firmly believe that languages should not manage packages. While it makes the simple cases easy for beginners, the trade-off is that mixing languages becomes harder. There is no perfect language, and often mixing should be the right answer, and we don't want any more friction there.
I hope that fad dies. It assumes all the world is x86-64 Linux. Maybe you get a few who acknowledge the Raspberry Pi. However, there is a whole world of other new processors, the *BSDs, and more that the fad makes difficult to use.
Nobody that currently exists - at least to my knowledge. This is a hard, mostly thankless problem. Distros solved a similar problem, but they have different motivations and so didn't fully solve everything. Languages like rust have a partial solution, but they don't play well with other languages and the full complexity that can result.
I quite agree! And would it even be harder for beginners? When trying a new program written in a language I don't use frequently, I usually spend a good half-hour working out what command to run to install the dependencies and going through the logs working out what implicit requirement wasn't in the README. A holistic package, even one written for a different software distribution than the one I use (Debian at present), would immediately get me 99% of the way there.
PS. If you find yourself in the same situation I do, Repology is your friend: https://repology.org/
I do believe this is an issue of not having explicit dependencies. Julia takes the approach of "we build and ship everything for every OS", which means Pkg (the package manager) knows about binary dependencies as well, making things more reproducible in-language.
Linux distros often do things to force packages to declare all their dependencies: Nix and Guix use unique prefixes and sandboxed builds, openSUSE builds all their packages in blank slate VMs that only install what is declared, standard Fedora tools for running builds in minimal chroot environments, etc.
I'm not aware of any language ecosystem package managers taking similar measures to ensure that dependency declarations in their packages are complete.
The problem with system packaging is there are so many systems. For example if you only package something for Debian, how should a Fedora, Arch, Gentoo or even Mac and Windows user use that?
The problem with systems packages is they solve a slightly different problem from language packages, and so while they are closer to what is needed, they are not right either.
Debian has by far the most rigorous packaging standards of that list, so subsequent packaging for the other distributions should not be very difficult. Bundling the dependencies into an OCI container or AppImage to run them on non-Debian systems is also trivial once you have a Debian package, but of course that comes with disadvantages. Neither Mac nor Windows have a proper first-party package manager (although winget makes some inroads, thanks of course to Keivan Beigi), so comparison becomes rather a moot point for those platforms.
Agreed. It's unbelievable to see all these languages inventing packaging over and over again. It's just an archive with some metadata and a hash/signature and a transport mechanism.
One package management feature that not even the really good languages seem to have is synchronizing dependency versions across multiple packages, usually in the context of a monorepo.
Like if my codebase has webapp A, library A and library B rather than separately defining that they all use third party library foo v3.1.4, it would be really nice to have a single source of truth.
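For what it's worth, Cargo's workspace dependency inheritance covers at least the single-source-of-truth part of this; a rough sketch (names are placeholders):

    # workspace-level Cargo.toml
    [workspace]
    members = ["webapp-a", "lib-a", "lib-b"]

    [workspace.dependencies]
    foo = "3.1.4"

    # lib-a/Cargo.toml: inherit the version pinned at the workspace level
    [dependencies]
    foo = { workspace = true }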
It's one part of the problem. Reliably managing those environments at scale can be tricky. Plus dealing with hybrid environments that include pip and conda packages.
I have a corollary about that: it's not open source if I can't build it
Now there's for sure a spectrum there, since "can't" could mean a lot of different things to different people, but a good straight-face test is whether they have a CI build specification in the public repo, since building in CI and a newcomer trying to build the repo often have very overlapping concerns.
I think that's a really narrow view of what librarians do. It's not just having stock of all the books, it's knowing the books that are relevant to particular fields of study, it's having a mastery of research and source-finding, and so much more.
Would be fun of them to get a keynote speech from someone involved with EPS[1] (no, not that EPS[2]). I do wonder what parallels the two kinds of packaging have in common.
As an expert in the software kind of packaging but only casually knowledgable about the electronics kind, I suspect rather little. However, the former is probably going to be much more prominent in the microelectronics world soonish, as dynamically configurable chips based on FPGAs are becoming very popular, and highlight more than ever how inadequate the conventional IDEs and SDKs for embedded software are. Proper software packaging is almost non-existent in embedded software engineering, but the current work-around of just using one company's SDK for everything won't scale to support all the FPGA cores that people want to use.
Local dev, cloud dev, CI, production – all with the same config file. Fingers crossed my talk submission for PackagingCon gets accepted. It'd be awesome to share this new way of working with a wider audience.
I see this in almost every thread on this topic, but feel it buries the lede, since its dependency upon Nix makes it more complicated than it seems, at least every time I've revisited it. Maybe it's an artifact of its 0.x version, or the fact that I have improper expectations.
With regard to your "CI" use case, this is part of why I said I must have improper expectations because
At first I thought it was a joke conference site, but if I think about it, it's actually a great idea!
Just goes to show how we are (or at least, I am) used to $PROG_LANG conferences, or Agile/SysAdmin/SRE/Mobile confs. Could be cool to have linting conf, code editor summit, etc.
I am a beginner in this space but I have some interest in trying to find solutions to the pain of package usage.
* A new programming language and ecosystem could try to solve package management from day 0. ScrapScript is an example of this. I've heard good things about Go and Rust.
* You can make package management fun by thinking of it as a data flow factory, like Factorio (which I've not played, but I do get the feeling of that game). As it stands it's just lots of tedious, boring busy work.
* If only dependency usage was as enjoyable and straightforward as shopping and arranging bought things in a room.
* I am investigating the modelling of packages as bundles of types foremost and state machines that can be traversed by the package manager to determine state interactions and compatibility automatically.
* Changes to packages break everything. You could diff ASTs to see what's different.
* I don't enjoy breaking changes. I have some old projects where I never pinned versions that cannot be built because I don't know what versions they work against.
Simply having a sane standard library (as packages) goes a long way to making package management nice. Because then most packages are level 2 depending only on core libraries whose API rarely changes.
Even JS/NPM, if it has this, would be a lot nicer to use. I find Python nicer than NPM just for this reason even though people grumble. And .NET even better. Elm is perhaps the best package manager because of the focus on developer experience there.
Another good tip: have just one package manager for your ecosystem!
> You can make package management fun by thinking of it as a data flow factory and like Factorio (which I've not played but I do get the feeling of that game)
having played the game, Factorio is way more fun than package management
They have a YouTube channel and I hope they publish all of the talks after the fact. Should anyone from the conference read this it would be great if paid corporate packages were offered that made the recorded sessions available for download.
For me, the packaging mechanism itself is a less important step than the idea of distributions: can I set up a collection of packages that is known to be mutually compatible? Or is the approach "one repository to rule them all"? This is where I always (when I was poking at it, a while back) ran into trouble with Cabal and Stack. It was always possible (likely, in fact) that I would select some set of libraries that managed to be mutually incompatible, so I had to go version-chasing to get something that worked.
This is something that Linux distros had to solve right from the start, so it's built into the concept, but I only very rarely see it done at all, let alone done well, in language package managers.
Will they announce a package manager to manage package managers? Every language having a package manager is getting out of hand. Having a unified way to install packages across RPM, DEB, and Gentoo is also enticing. Maybe even Mac and Windows.
Oh man, missed the deadline to submit a talk but we're working on some really cool packaging related to conda environments. Maybe for next year's conference.
Excited to attend-- this is a topic that's becoming extremely important especially in the ML world where dealing with dependencies is a total nightmare and most of the solutions we've seen don't scale well to large orgs.
Cool! We're also working on improving conda environments and the interaction with them. If you would like to have a talk find us through https://prefix.dev
I suppose there might be others, but I just wanted to mention http://pkgsrc.org/pkgsrcCon/, a conference on the pkgsrc packaging system. This had been going on for years, but seems to have stopped in 2019 due to the pandemic and not resumed...
there is still a lot of room to improve packaging.
i really miss the conary packaging and build system. it was not perfect, but it essentially put packages into a revision control system, so that, for one, version numbers of packages didn't matter any more. the whole set of packages for a release was locked into place, so you could have a new release of the distribution with an older version of a package. and you could switch distribution versions like you can switch branches in git.
at one point my system was so messed up that it wasn't really usable any more. even installing or removing packages didn't work. but i was able to run a command that would switch to the latest stable release version. conary then shuffled around downgrading several packages that i had installed to the right release version and getting me to a clean release state, so my system was workable again. neither rpm nor deb systems are capable of doing that, and i am not aware of any others either.
Nix is capable of that, and I consider it the golden standard right now; it also allows you to do multiple things not related to "package management", such as configuration management, creating VMs / Docker images, cross-compiling, reproducible dev envs, deploying to AWS, etc...
NixOS and Guix are capable of that; Fedora Silverblue and Fedora Kinoite (the same technology, just with different default desktop environments) are also similar conceptually, but are a little less flexible to manage through their management CLIs than Nix and Guix are. I think all of them would recover to a consistent and fully-functional state, but Silverblue/Kinoite would not necessarily remove the 'ghost' packages automatically.
I like a package management system integrated with project management, such as JS's package.json or Python's pyproject.toml. I want to manage project scripts, tool configs, and dependencies in one place. It's sometimes annoying, especially for larger projects, but overall, I hope more languages adopt that style.
I really don't. It puts one tool in the position of having to be good at all those things; and implicitly puts one ecosystem in a privileged position compared to everything else in the project by default, rather than by choice. It also means the file format of that one tool needs to be hammered into shape for every use case, and if it's something lobotomised like JSON, everyone suffers.
Give me a Makefile any day. Yes, that's a specific choice of top-level tool, but it's a better choice than most for that specific top-level job because you just shell out to whatever else you need. Not having to rebuild dependency management is a win, too, where you can take advantage of it.
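Something like this is what I have in mind - the targets and tools are just placeholders, and note that recipe lines must be tab-indented:

    # top-level Makefile that just shells out to each ecosystem's own tooling
    .PHONY: deps build test
    deps:
    	cd frontend && npm ci
    	pip install -r requirements.txt
    build: deps
    	go build ./...
    test: build
    	go test ./... && (cd frontend && npm test)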
It's not an unpopular opinion! Perhaps ironically, it is for the package management that software distributions provide (Debian, Fedora etc.) where GOPATH is most used - the Go modules system interferes with the guarantees that those distributions make about the builds, whereas GOPATH lets the distributions calculate dependencies themselves.
Bindle seems interesting but I'm not quite sure how to use it or whether it will take off anytime soon. Maybe cargo has some of its interesting features.
I don't understand yet what Bindle is trying to do. It seems to be half archive format, half package manager, yet not innovating in either area nor adding anything by their combination. Additionally, it explicitly doesn't support 'latest' tags or branches, which exist in package ecosystems for a reason.
It seems the client/server is necessary. Not an issue for cloud projects composed of many servers and SDKs, like those from Deis Labs or Fermyon, but an issue for others wanting to adopt it.
Not exactly. Most Docker containers rely on distro package managers. You're usually running apt or apk inside your Dockerfile. And the container system rootfs needs to be laid out. It's a non-trivial amount of work to do that and keep it up to date.
I am a fan of building a traditional native package in a multi-stage Docker build, where the final container artifact is a simple `RUN dpkg -i my-pkg.deb`
I find that targeting traditional system packages has benefits: 1) it's not really that hard, and 2) it forces you to lay things out consistently, or, at the very least, the distro's conventions are helpful.
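A rough outline of that multi-stage idea (package and image names are purely illustrative, and it assumes the source tree already carries debian/ packaging):

    # build stage: produce a proper .deb using the distro's own tooling
    FROM debian:bookworm AS build
    COPY . /src
    WORKDIR /src
    RUN apt-get update && apt-get install -y --no-install-recommends build-essential debhelper \
     && dpkg-buildpackage -us -uc -b

    # runtime stage: install only the package
    FROM debian:bookworm-slim
    COPY --from=build /my-pkg_1.0-1_amd64.deb /tmp/
    RUN dpkg -i /tmp/my-pkg_1.0-1_amd64.deb && rm /tmp/my-pkg_1.0-1_amd64.deb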
NOPE. Docker neatly encapsulates the problem and allows you to somewhat ship a reproducible deployment... until something needs to be updated. Now either you rebuild your image (which may not be reproducible) or patch it (which comes with its own share of problems). Caching can also be a nightmare if your image is built from common stages. Dealing with vulnerabilities is also a pain especially for things already in production.
Docker (or container images in general) are great but they solve a limited set of problems well and tend to hide others.
No, I don't understand why this myth persists. Docker fetches tarballs, runs commands, and tars up directories. Often, Docker is used to run package management commands (e.g. `apt`, `dpkg`, `yum`, `cargo`, `mvn`, `nix`, `cabal`, `sbt`, `pip`, `npm`, `gradle`, `stack`, `guix`, etc.); the latter are the actual package managers.
Docker "solves" package management in the same way Bash scripts "solve" package management: you can use them to run actual package managers; but also, you probably shouldn't (e.g. Nix is better at creating Docker images than Docker is, for example).
The question is a bit short, so I'm inclined to say: no, the things they solve is only mildly related. But maybe you have a specific thing in mind that Docker solves, so feel free to share what you think so someone or me can say something more useful about this!