No native Docker support, no headless management options (enterprise strength), limited QoS management, lack of robust Python support (out of the box), and an interactive-user-focused security model.
There is no such thing. Tell me, which combination of the 15+ virtual environment, dependency management, and Python version managers would you use? And how would you prevent "project collision" (where one Python project bumps into another and one just stops working)? Example: SSL library differences across projects are a notorious culprit.
Python is garbage and I don't understand why people put up with this crap unless you seriously only run ONE SINGLE Python project at a time and do not care what else silently breaks. Having to run every Python app in its own Docker image (which is the only real solution to this, if you don't want to learn Nix, which you really should, because it is better thanks to determinism... but entails its own set of issues) is not a reasonable compromise.
This is incoherent to me. Your complaints are about packaging, but the Elixir wrapper doesn't deal with that in any way -- it just wraps uv, which you could use without Elixir.
What am I missing?
Also, typically when people say things like
> Tell me, which combination of the 15+ virtual environments, dependency management and Python version managers
It means they have been trapped in a cycle of thinking "just one more tool will surely solve my problem", instead of realising that the tools _are_ the problem, and if you just use the official methods (virtualenv and pip from a stock python install), things mostly just work.
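To make "official methods" concrete, here's a minimal sketch of that per-project workflow, driven through the stdlib venv module instead of the usual shell commands (the directory name and requirements file are just placeholders):

```python
# One isolated environment per project, stdlib venv + pip only.
import subprocess
import venv

env_dir = ".venv"  # lives inside the project; the system Python stays untouched
venv.EnvBuilder(with_pip=True).create(env_dir)

# Install this project's deps into that environment only
# (POSIX layout shown; on Windows it would be .venv\Scripts\pip.exe).
subprocess.run([f"{env_dir}/bin/pip", "install", "-r", "requirements.txt"], check=True)
```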
I agree. Python certainly had its speedbumps, but it's utterly manageable today and has been for years and years. It seems like people get hung up on there not being 1 official way to do things, but I think that's been great, too: the competition gave us nice things like Poetry and UV. The odds are slim that a Rust tool would've been accepted as the official Python.org-supplied system, but now we have it.
There are reasons to want something more featureful than plain pip. Even without them, pip+virtualenv has been completely usable for, what, 15 years now?
I've seen issues with pip + virtualenv (ssl lib issues, IIRC). I've always used those at minimum and have still run into problems. (I like to download random projects to try them out.) I've also seen issues with python projects silently becoming stale and not working, or python projects walking over other python projects, because pip + virtualenv does NOT encompass all Python deps down to the metal. It also means you can't have 2 command-line Python apps available in the same shell environment, because PATH would have to prefer one or the other at some point.
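To illustrate the PATH point (the project paths here are hypothetical): whichever environment's bin/ directory comes first wins, and the other project's copy of the same script is shadowed.

```python
# Illustrative only: PATH ordering decides which project's console script runs.
import os
import shutil

os.environ["PATH"] = "/proj-a/.venv/bin:/proj-b/.venv/bin:" + os.environ["PATH"]

# If both projects ship a script named "some-cli", this resolves to
# /proj-a/.venv/bin/some-cli and proj-b's copy is shadowed
# (it prints None here, since these hypothetical paths don't exist).
print(shutil.which("some-cli"))
```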
Here's a question: if you don't touch a project for a year, do you expect it to still work, or not? If your answer is the latter, then we simply won't see eye to eye on this.
That's not good enough. If I'm in the business of writing Python code, I (ideally) don't want to _also_ be in the business of working around Python design deficiencies. Either solve the problem definitively, or do not try to solve it at all, because the middle road just leads to endless headaches for people WHILE ALSO disincentivizing a better solution.
Node has better dependency management than Python, and that's really saying something.
I don't see why it should be so binary. I said it "mostly" just works because there are no packaging systems which do exactly what you want 100% of the time.
I've had plenty of problems with node, for example. You mentioned nix, which is much better, but also comes with tons of hard trade-offs.
If a packaging tool doesn't do what I wanted, but I can understand why and ultimately the tool is not to blame, that's fine by me. The issues I can think of fit reasonably well within this scope:
- requirement version conflicts: packages are updated by different developers, so sometimes their requirements might not be compatible with each other. That's not pip's fault, and it tells you what the problem is so you can resolve it.
- code that's not compatible with updated packages: this is mainly down to requirement versions which are specified too loosely, and not the fault of pip. If you want to lock dependencies to exact versions (like node does by default), you can do this too with requirements.txt (see the sketch after this list). It's a bit harsh to blame pip for not doing this for you; it's like blaming npm for not committing your package-lock.json. It would be better if your average python developer was better at this.
- native library issues: some packages depend on you having specific libraries (and versions thereof) installed, and there's not much that pip can do about that. This is where your "ssl issues" come from. This is pretty common in python because it's used so much as "glue" between native libraries -- all the most fun packages are wrappers around native code. This has got a lot better in the past few years with manylinux wheels (which include native libraries). These require a lot of non-python-specific work to build, so I don't blame pip where they don't exist.
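For the version-locking point in the second bullet, here's a hedged sketch of what `pip freeze > requirements.txt` amounts to, driven from Python (the pinned version shown in the comment is purely illustrative):

```python
# Lock whatever is currently installed in the active environment to exact versions,
# equivalent to running `pip freeze > requirements.txt` from the shell.
import subprocess
import sys

frozen = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    check=True, capture_output=True, text=True,
).stdout

# Output is exact pins, e.g. "requests==2.31.0" (version number illustrative).
with open("requirements.txt", "w") as f:
    f.write(frozen)
```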
It's not perfect, but it's not a big enough deal to rant about or reject entirely if you would otherwise get a lot of value out of the ecosystem.
The thing is, most people who are writing python code are not in the business of writing python code. They're students, scientists, people with the word "business" or "analyst" in their title. They have bigger fish to fry than learning a different language ecosystem.
It took 30 years to get them to switch from excel to python. I think it's unrealistic to expect that they're going to switch from python any time soon. So for better or worse, these are problems that we have to solve.
> at least be able to use Python, but in a very controlled, not-insane way
That's funny; about 10 years ago I started my career at a startup that had Python business logic running under Erlang (via a custom connector), which handled supervision and task distribution, and it looked insane to me at the time.
Even today I think it can be useful but is very hard to maintain, and containers are a good enough way to handle Python.
> containers are a good enough way to handle python
I disagree. My take on that is that they are an ugly enough way to handle Python. Among other problems, they don't permit you to easily mess with the code (one of many reasons why this is ugly). Need access to something stateful from the container app? That's another PITA.
I feel you on a lot of this! But out of the box Python support? Does anybody actually want that? It’s pretty darn quick & straightforward to get a Python environment up & running on MacOS. Maybe I’m misunderstanding what you mean here.
>it’ll run reliably on other people’s machines a few years from now
That's optimistic. What if the system Python gets upgraded? For some reason, Python libraries tend to be super picky about the Python versions they support (not just Python 2 vs 3).
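As an illustration of that pickiness (the version range below is made up): plenty of projects effectively enforce a narrow interpreter window, whether via python_requires metadata or an explicit check like this hypothetical one.

```python
# Hypothetical guard showing how narrow the supported-interpreter window often is;
# real libraries usually express this through python_requires metadata instead.
import sys

if not ((3, 10) <= sys.version_info[:2] <= (3, 11)):
    raise SystemExit("this project was only tested on Python 3.10-3.11")
```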
That's using a Linux VM. The idea people are asking about is native process isolation. Yes, you'd have to rebuild Docker containers based on some sort of (small) macOS base layer and Homebrew/MacPorts, but hey. Being able to run even nodejs or php, with their thousands of files, natively would be a game changer in performance.
Also, if it were possible to containerize macOS, or even do an unattended VM installation, then it'd be possible for Apple to automatically regression-test their stuff.
Honest question: why do you want this in MacOS? Do you understand what docker does? (it's fundamentally a linux technology, unless you are asking for user namespaces and chroot w/o SIP on MacOS, but that doesn't make sense since the app sandbox exists).
MacOS doesn't have the fundamental ecosystem problems that beget the need for docker.
If the answer is "I want to run docker containers because I have them" then use orbstack or run linux through the virtualization framework (not Docker desktop). It's remarkably fast.
I have a small rack-mounted rendering farm using Mac minis, which outperform everything in the Intel world, even machines an order of magnitude more expensive.
I've run macOS on my personal and development computers for over a decade, and I've used Linux on the server side since its inception.
My experience: running server-side macOS is such a PITA it's not even funny. It may even pretend it has ssh, while in fact the ssh server is only available on good days and only after Remote Desktop has logged in at least once. launchd makes you crave systemd. Etc., etc.
So, about docker. I would absolutely love to run my app in a containerized environment on a Mac in order to not touch the main OS.
Funny, I ran a bunch of Mac minis in colo for over a decade with no problems. Maybe you have a config problem?
Of course, I had a LOM/KVM and redundant networking etc. They were substantially more reliable than the Dell equipment that I used in my day job for sure.
Software-wise, it's quite different from the expected behavior. For example, macOS won't let you in over SSH until you log in via Remote Desktop. You'll get "connection closed" immediately.
Or sometimes it will.
And that depends not on the number of connection attempts or anything you can do locally, but somehow on the boot process. Sometimes it boots in a way that permits ssh, sometimes not. Same computer, same OS.
Then, after you log in via screen sharing and log out, macOS will let you in over ssh. For a few days. And then it will again force you to log in via the GUI. Or maybe not. I have no idea what triggers it.
I have trouble reading or making sense of macOS logs. It spews a few log messages per second even when idle. If you grep for ssh, the messages contain zero actionable data, just things like "unsuccessful attempt" or similar.
Another complaint is that launchd reports the same "I/O error" in absolutely all error situations, from a syntax error in a plist to a corrupt binary. That makes development and debugging of launch agents very fun.
What would a containerization environment on MacOS give you that you don't already have? Like concretely - what does containerization mean in the context of a MacOS user space?
In Linux, it means something very specific: a user/mount/pid/network namespace, overlayfs to provide a rootfs, chroot to pivot to the new root to do your work, and port forwarding between the host/guest systems.
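A rough sketch of part of that machinery, assuming Linux, root privileges, and Python 3.12+ (where os.unshare and the CLONE_* flags landed); the rootfs path is hypothetical and would normally be assembled from an overlayfs mount, and the user/pid namespace pieces need extra setup (uid maps, a fork) that's omitted here:

```python
# Minimal namespace-plus-chroot sketch of the Linux mechanics described above.
import os

os.unshare(os.CLONE_NEWNS | os.CLONE_NEWUTS | os.CLONE_NEWNET)  # fresh mount/UTS/net namespaces
os.chroot("/srv/rootfs")  # pivot into the prepared (hypothetical) root filesystem
os.chdir("/")
os.execv("/bin/sh", ["/bin/sh"])  # this process now sees only the new root
```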
On MacOS I don't know what containerization means short of virtualization. But you have virtualization on MacOS already, so why not use that?
On macOS I'd probably like chroot and pid/mount namespaces. I'd like to install the OS and dependencies in a container and run my application from there so that it does not interfere with the host OS. My app is GPU-heavy and has lots of dependencies (OpenCV, LAPACK, Armadillo, lots and lots), and I'd like to not pollute the host OS with it.
Also I want to run the latest OS with all security patches on the host while having a stable and known macOS version in a container given how developer-hostile Apple is.
What you want is virtualization, not containerization. And you have this already. Since MacOS doesn't have a stable syscall interface, decoupling the host/guest in a mount namespace and chroot would lead to horrible breakages when the system libraries of your container are out of date with your host OS. So you would have to share the host OS and a big portion of the userspace to begin with.
Or you can package your app as a .app and not worry about it, there's no "pollution" when everything is bundled.
Yeah, it seems like on macOS that level of isolation is achievable only with virtualization, unlike on Linux. We were talking about things missing from macOS, and containerization is one of them.
> MacOS doesn't have the fundamental ecosystem problems that beget the need for docker.
Anyone wanting to run and manage their own suite of Macs to build multiple massive iOS and Mac apps at scale, for dozens or hundreds or thousands of developers deploying their changes.
xcodebuild is by far the most obvious "needs native for max perf" case, but there are a few other tools that require macOS. And obviously, if you have multiple repos and apps, you might require many different versions of the same tools to build everything.
Sounds like a perfect use case for native containers.
Docker Desktop now offers an option to use the virtualization framework, and it works pretty well. But you're still constantly running a VM because "docker is how devs work now, right?". I agree with your comment.