
When you make a new library for C/C++, you generally just install it locally at first (pkg-config and PKG_CONFIG_PATH make this trivial). It doesn't (need to) get packaged by a distribution until a program using it is packaged. You don't need to do this yourself, unless you yourself use a distribution and want it packaged there.
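
Roughly, the local workflow looks like this (the prefix and the library name "mylib" here are just placeholders):

    # install the library under a private prefix
    make install PREFIX=$HOME/.local
    # tell pkg-config where to find mylib.pc
    export PKG_CONFIG_PATH=$HOME/.local/lib/pkgconfig
    # programs can now build against it without any distro packaging
    cc -o app main.c $(pkg-config --cflags --libs mylib)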

As for the amount of dependencies, you have to wonder if it's better to have tonnes of small dependencies or whether it's better to have a few bigger ones. When I need something added to a library, I don't make a new library for it: I first try contributing upstream, and failing that I vendor and patch it. I feel this is less common in the brave new world of everyone just doing whatever and pushing it to a largely unmoderated repository...




I suppose I could manually traverse my whole dependency tree and install all those packages myself. Set up cmake and pkg-config for all of them. If anyone else wants to try out my software, they’d have to manually look up all the dependencies in my github (?) and install those too. And hope the versions all line up nicely - which is a big if during development. Maybe I could script all of that dependency tracing - if only there were some standard software to do that. Guess not.

If C/C++ is all you know, you’re used to how painful all of that is. But what you’re proposing is a significantly worse experience in every way compared to what I can do with cargo or npm today. None of the claimed benefits seem remotely worth the cost of wading back into C dependency management hell. And cargo and npm show just how unnecessary all that pain is in C!

Just make good tooling. We can have nice things. I didn’t stop using C because I decided to write assembler. I stopped when better languages came along. You want me to stop using cargo? Nobody wants to go back to manual dependency management. Make something better.


> And cargo and npm show just how unnecessary all that pain is in C!

Leftpad would like a word.

Every dependency is a liability. It's another thing to vet, another thing which can disappear. Having few dependencies in C is a feature, not a bug. The approach to dependencies in rust was a major turn-off for me. I recall pulling in 15 crates just to get random numbers in the intro tutorial. No thanks.


Just in case you don’t know, packages can no longer be pulled from npm by the developer. At least not without emailing someone. And you need a very good reason - like that version is ransomware.

The particular pulled package version identifier (foo@1.0.1) can also never be reused. If you audit 1.0.1, it will never change out from under you.

The leftpad fiasco was hilarious and embarrassing. But it can’t happen again because of changes in npm policy.


Keeping track of C/C++ dependencies is basically the point of Linux distros...

Cargo's main problem is that it's (mostly) rust only. Same problem that pip/npm have, relegating you to (mostly) single-language silos.


> Keeping track of C/C++ dependencies is basically the point of Linux distros...

Judging by the proliferation of Docker, I don’t think they’re doing a great job of even that. Docker shouldn’t be needed if apt were up to the task. NixOS is a much better design imo.

> Cargo's main problem is that it's (mostly) rust only.

True. But apt’s main problem is that it only works on a few distributions of Linux. Limiting the users of my rust library to rust developers is usually not a problem. Limiting the users of my rust library to the people who use homebrew or arch or something is not fine. And I don’t want to redo all the packaging work on every operating system and major Linux distribution. Life is too short for this make-work BS. Especially given it could be automated away anyway. Rust packages already explicitly define all their dependencies. Why can’t apt just generate .debs from my Cargo.toml?
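
To illustrate what I mean (a hypothetical crate; the specific dependencies are just examples), everything such a generator would need is already machine-readable in the manifest:

    [package]
    name = "mytool"
    version = "1.2.0"
    edition = "2021"
    license = "MIT"

    [dependencies]
    rand = "0.8"
    serde = { version = "1", features = ["derive"] }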

Apt’s other problems are that packages need to be installed globally. And it makes a mess of versions. And there are no feature flags. Run `apt search vim` some time and see how many packages they have! Or the same for llvm. It’s a dog’s breakfast.

Where cargo’s rust focus does hurt is with binary packages. It’s not enough for ripgrep to live in cargo, since it’s designed to be useful for people who aren’t rust developers. And I don’t have a good answer for how to solve that. I wish I did.


Well, this also works for rust libraries, as they're usually useless for other languages, whereas libraries like gdal, opencv, fftw, etc. can be used by many languages. (Though to be fair, until rust gets a stable ABI it's not going to be a good choice for this sort of thing.)

I personally consider docker/podman just completely punting on dependency management, and try to avoid them, but I'm probably in the minority...


Rust has a stable C export ABI.
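
For example, a minimal sketch of a C-callable function exported from Rust (built with crate-type = ["cdylib"] in Cargo.toml):

    /// Callable from C as: int32_t add(int32_t a, int32_t b);
    #[no_mangle]
    pub extern "C" fn add(a: i32, b: i32) -> i32 {
        a + b
    }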


Funnily enough, the most notable Rust-implemented shared library with a C ABI that I'm aware of (librsvg) just became available on crates.io a few months ago...


...except that pip/npm often end up building C or C++ during package install, at least in some cases.

And Rust has the same issues, at least when shipping or consuming libraries with C APIs is in the picture.


Yes, sometimes, but it won't resolve C/C++ dependencies in general.


meson actually solves this problem at the build-system level: its wrap subsystem allows you to use the system-provided lib if found, or fetch it from a database (WrapDB) during the build process if not.

best of both worlds, imo.
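
For reference, a sketch of what that looks like in a meson.build (using zlib as the example dependency):

    # use the system zlib if pkg-config finds it; otherwise build the
    # subproject fetched via subprojects/zlib.wrap (e.g. from WrapDB)
    zlib_dep = dependency('zlib', fallback : ['zlib', 'zlib_dep'])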


If you're smart about it, you're not gonna have a very deep or wide dependency tree for your own deps. If you need more than a couple of your own deps, maybe you need to reconsider how you structure your libraries, or you can join multiple libraries into the same project/build system umbrella (e.g. glib/gobject or the whole of QtBase)


Why would I want to join unrelated modules together into a kitchen sink library? The only benefit of that is to work around bad dependency management systems that make dependencies a hassle to work with. But otherwise, they’re a bad idea. Packages should do one thing and do it well. If I want a good random number library, I don’t want to import Bob’s junk drawer of random code. I want a small, reusable random number library. I want one that I chose. And my users want one they chose. And if that choice is bad, I want to be able to replace it without throwing out half my ecosystem or breaking downstream compatibility.

QtBase is a symptom of C++’s bad package management. 8 unrelated utilities should be in 8 small packages that users can pick and choose based on their needs.


It's less a workaround for bad dependency management, and more about the fact that it's simply easier to reason about a smaller set of dependencies regardless of the ecosystem. This includes vetting, ensuring version compatibility, checking license compatibility, and reducing the SBOM, as well as governance over the project and integration between different modules.

Too often I've seen the equivalents of frameworks in other languages split across sometimes hundreds of packages that don't always make it clear that they're meant to be updated in tandem, what their exact relationship is, or that the same organization manages all of them.

As for QtBase, it's a superproject, but that doesn't mean that you can't use its individual modules separately, and depending on the distro (e.g. debian) install them separately as well. A single project installing multiple related libraries makes a lot of sense.


> This includes vetting, ensuring version compatibility, checking license compatibility, and reducing the SBOM, as well as governance over the project and integration between different modules.

Auditing large code bases takes disproportionately longer than auditing small ones, so I don’t think that’s a win. License compatibility is trivial to check. The SBOM is strictly larger if I pull in a kitchen-sink package, because I’m probably not using most of the stuff inside. Better to just pick out the components I want.

The one thing I’ll grant you is that shared ownership and visibility means it’s less likely that one rogue person will sneak ransomware into the dependencies.

Personally I’d love a capability-based package manager that lets me pass a package exactly the capabilities it needs to do what I downloaded it for. Why does every package need access to my files when I install or run it? That’s ridiculous. Totally unnecessary, and a massive security risk. We could solve that C++-style by hobbling package managers and using fewer, jumbo packages. But that doesn’t solve the root problem. I want a package system in a language which lets me pass capabilities to each library I use, when I need to use them. E.g. “Please open your web server on the port associated with this capability token.” This needs language support too, but it’d be so much better from a security point of view than anything that came before.
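
A rough sketch of the shape I mean, in Rust (the ListenCap type is entirely hypothetical, and today's Rust doesn't stop a library from ignoring it and calling the OS directly - that's the missing language support):

    use std::net::TcpListener;

    /// A capability token: holding one is the only way to serve on a port.
    struct ListenCap {
        listener: TcpListener,
    }

    /// The web-server dependency never touches the OS itself; it can only
    /// use the socket we explicitly hand it.
    fn run_server(cap: ListenCap) {
        for stream in cap.listener.incoming() {
            // ... handle connections using only the granted resource
            let _ = stream;
        }
    }

    fn main() -> std::io::Result<()> {
        // main() holds ambient authority and grants exactly one port
        let cap = ListenCap { listener: TcpListener::bind("127.0.0.1:8080")? };
        run_server(cap);
        Ok(())
    }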


> or you can join multiple libraries into the same project/build system umbrella (e.g. glib/gobject or the whole of QtBase)

This also seems worse than the status quo. So instead of having 3 or 4 dependencies, I now have only one that combines 20 different ones I don't need.


This is an example where your tooling limits your solution space. There’s nothing inherently wrong with a deep or wide dep tree.


Why is this inherently better than better tooling for dependency management?


> It doesn't (need to) get packaged by a distribution until a program using it is packaged.

I think this is a key difference in approach. In languages with their own packaging systems, you routinely package libraries before there are programs using them. Publishing them on Crates/PyPI/npm/whatever is the bare minimum if you expect anyone to use them!

The number of tiny dependencies can go too far - I don't think I need to mention left-pad. But the difficulty of using dependencies in C/C++, and the results of reinventing stuff, vendoring, or using kitchen-sink dependencies like Qt, don't seem optimal either. There must be a happy medium somewhere.


As I said in another thread, I think the happy medium is lots of small packages (people want that). And a capability security model within programming languages so small dependencies are limited to interacting with their parameters (and any resources their parameters provide them) and can’t speak to the OS directly. That would solve 98% of the supply chain problem.

Leftpad has already been solved by an npm policy change forbidding packages from being unpublished.


> As for the amount of dependencies, you have to wonder if it's better to have tonnes of small dependencies or whether it's better to have a few bigger ones.

You don't have to wonder; we've done the experiment. Anyone working in an ecosystem with non-joke package management knows that tonnes of small dependencies are a lot better.



