People here focus on Python, but to me, a bioinformatician, conda is much more: it provides 99.99% of the tools I need, like bwa, samtools, rsem, salmon, fastqc, and R. And many, many obscure tools.
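For example, a typical setup might look like this (the environment name `rnaseq` is just an illustration; the packages are all published recipes on the bioconda channel):

```bash
# One environment with a common short-read toolkit, all from bioconda
conda create -n rnaseq -c bioconda -c conda-forge bwa samtools salmon fastqc
conda activate rnaseq
```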
I wish you luck with tracking down versions of software used when you're writing papers... especially if you're using multiple conda environments. This is pretty much the example used in the article -- version mismatches.
But I think this illustrates the problem very well.
Conda isn't just used for Python. It's used for general tools and libraries that Python scripts depend on. It could be C/C++ code that needs to be compiled. It could be a Cython library. It could be...
When you're trying to be a package manager that operates on top of the operating system's package manager, you're always going to have issues. And that is why Conda is such a mess: it's trying to do too much. Installation issues are one of the reasons why I stopped writing so many projects in Python. For now, I'm only doing smaller scripts in Python. Anything larger than a module gets written in something else.
People here have mentioned Rust as an example of a language with a solid dependency toolchain. I've used Go more, which has similarly had dependency management tooling from the beginning. By and large, these languages aren't trying to bring in C libraries that need to be compiled and linked into Python-accessible code (it's probably possible, but it's not the main use case).
For Python code, though, when I do need to import a package, I always start with a fresh venv virtual environment, install whatever libraries are needed in that venv, and then always run Python via the venv's own path (e.g., `venv/bin/python3 script.py`). This has solved 99% of my dependency issues. If you can separate yourself from the system Python as much as possible, you're 90% of the way there.
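A minimal sketch of that workflow (the package names here are just placeholders for whatever the project actually needs):

```bash
# Create a per-project environment; never install into the system Python
python3 -m venv venv

# Install dependencies by calling the venv's pip directly
venv/bin/pip install requests numpy   # placeholder packages

# Run the script with the venv's interpreter -- no activation required
venv/bin/python3 script.py
```

Invoking `venv/bin/python3` directly also sidesteps the classic failure mode of forgetting to activate the environment first.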
Side rant: this is why I think there is a problem with Python to begin with -- *nix OSes all include a system-level Python install. Dependencies only become a problem when you're installing libraries into a global path. If you can have separate dependency trees for individual projects, you're largely safe. It's not very storage efficient, but that's a different issue.
> I wish you luck with tracking down versions of software used when you're writing papers... especially if you're using multiple conda environments.
How would you do this otherwise? I find `conda list` to be terribly helpful.
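For a paper's methods section, a couple of commands cover it (assuming the environment is named `myenv`; the name is illustrative):

```bash
# Human-readable list of every package and version in the environment
conda list -n myenv

# Pinned, machine-readable spec that can rebuild the environment later
conda env export -n myenv > environment.yml
```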
As a tool developer for bioinformaticians, I can't imagine trying to work with OS package managers, so the alternative would be vendoring multiple languages and libraries in a home-grown scheme that's slightly worse and more brittle than conda.
I also don't think it's realistic to imagine that any single language (and thus any language-specific build tool or package manager) is sufficient. We're still using Fortran deep in the guts of many higher-level libraries (recent tensor stuff is disrupting this a bit, but openBLAS is still there as a default backend).
> home-grown scheme slightly worse and more brittle than conda
I think you might be surprised at how long this has been going on (or maybe you already know...). When I started with HPC and bioinformatics, Environment Modules were already well established as a mechanism for keeping track of versioning and multiple libraries and tools. And that was over 20 years ago.
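For anyone who hasn't worked on a cluster, the Modules workflow looks roughly like this (which versions exist depends entirely on what your site's admins have installed):

```bash
# Environment Modules on a typical HPC cluster
module avail samtools      # list the versions the site provides
module load samtools/1.9   # put one specific version on PATH
module list                # show everything currently loaded (handy for methods sections)
```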
The trick to all of this is to be meticulous in how data and programs are organized. If you're organized, then all of the tracking and trails are easy. It's just soooo easy to be disorganized. This is especially true with non-devs who are trying to use a Conda-installed tool. You certainly can be organized and use Conda, but more often than not, for me, tools published with Conda have been a $WORKSFORME situation. If it works, great. If it doesn't... well, good luck trying to figure out what went wrong.
I generally try to keep my dependency trees light, and if I need to install a tool, I'll manually install the version I need. If I need multiple versions, modules are still a thing. I'm generally hesitant to trust most academic code and pipelines, so blindly installing with Conda is usually my last resort.
I'm far more comfortable with Docker-ized pipelines, though. At least then you know that when the dev says $WORKSFORME, it will also $WORKSFORYOU.
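A quick sketch of why that holds (the image tag below is illustrative -- check the Biocontainers registry for real tags -- and `aligned.bam` is a placeholder input):

```bash
# The pinned image carries the exact tool build, so the result doesn't
# depend on anything installed on the host
docker run --rm -v "$PWD":/data \
    quay.io/biocontainers/samtools:1.17--h00cdaf9_0 \
    samtools view -c /data/aligned.bam   # count alignment records
```

Because the tag pins both the tool version and its build, the same command gives every collaborator the same samtools.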