I'm surprised the post title says "with focus on running containers". That would be Colima, AFAIK, which uses Lima but is a separate project. See https://github.com/abiosoft/colima. Colima is meant as a free replacement for Docker Desktop on macOS.
Lima, OTOH, is more of a nice way to run a Linux VM on Mac in a way that integrates the guest and the host systems to a good degree by default. It wraps either QEMU or Apple's VZ framework.
For a more traditional VM GUI, there's https://mac.getutm.app/ which is a completely separate free project that also wraps QEMU or VZ. It will run any OS you want, not just Linux.
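For a feel of it, a typical Lima session is just a couple of commands (shown here with the default instance; adjust names if you use a different template):

```shell
limactl start          # create and boot the default Linux instance
lima uname -a          # run a single command inside the guest
lima                   # open an interactive shell in the guest
limactl stop default   # shut the instance down
```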
followed by "the rest of the owl," since I doubt very seriously that debian-12-nocloud-arm64 boots up docker or containerd, and your cited command does not mount $HOME into qemu, so `docker run -v $HOME/work:/work debian:12` would not do anything reasonable from within the VM
If you were interested, Brew also offers "opt" links to avoid hard-coding version numbers into paths:
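For example, assuming QEMU was installed via brew (the formula name is purely illustrative):

```shell
# version-pinned path (breaks on every upgrade):
#   /opt/homebrew/Cellar/qemu/8.2.1/bin/qemu-system-aarch64
# version-independent "opt" symlink instead:
ls "$(brew --prefix)/opt/qemu/bin"
# or ask brew for the formula's opt prefix directly:
QEMU_PREFIX="$(brew --prefix qemu)"
```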
I recommend checking out OrbStack[0] for containers and VMs on macOS.
It’s really, really, really awesome and performant. And it seems to be a one-man show. Really impressive. I switched from Docker Desktop a year ago, and it’s been a night-and-day difference.
Most people I’ve shown it to have very quickly switched to it as well.
(I’m completely unaffiliated, just a very happy user)
I had a really bad experience with OrbStack. It is fast, but the dev domains worked only half the time and sometimes pointed to the wrong container. The "support" was not very responsive; that's when I figured there are probably only one or two people behind it. A total no-go for anything I have to pay money for and rely on at work. I spent the time building the dev-domain feature I was missing into Colima and never looked back. Performance seems to be on par, especially with vz and virtiofs virtualisation.
Ops here, bored is good. Bored means it's working properly and it's a solved problem. Also, I haven't seen a good alternative outside config management like Puppet/Chef/Ansible, or something like Nix, which has its own set of problems. Containers work well enough, and their positives vastly outweigh the negatives in most cases.
It's also a solvable problem for the most part. If there's a set of packages almost all your software needs, someone should probably create a base image with everything pre-installed, so every Dockerfile starts with FROM companybaseimage:latest
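As a sketch (the image name and the package set are invented for illustration):

```dockerfile
# companybaseimage: built and pushed once by the platform team
FROM debian:12
RUN apt-get update \
    && apt-get install -y --no-install-recommends ca-certificates curl git \
    && rm -rf /var/lib/apt/lists/*
```

Every project's Dockerfile then starts with `FROM companybaseimage:latest` and only adds its own layers.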
> Ops here, bored is good. Bored means it's working properly and it's a solved problem.
It's not a solved problem because build systems keep breaking in random places.
Granted, these problems are in general easy to solve. But we're talking about death from a thousand papercuts here. In other words, boredom from a thousand compile errors.
:shrug: A thousand compile errors sounds like the Dev Team hasn't buckled down and fixed their build script (Dockerfile). Maybe they should take some time and do that. The other beauty of containers is the ability to build in an environment the Devs control, so changes to build servers shouldn't impact them.
Perhaps you have never been in the situation where one package forces you into a particular version, which in turn forces its dependencies to particular versions, and so on, eventually pushing you into an impossible situation.
Anyway, I wish I had a Dev Team like you have to solve these problems for me ...
No, not really, but that is because I use containers as a way to create multi-purpose machines, not single-purpose application hosts. No docker or any of its clones except for those few services which are wholly based around it - e.g. Kasm [1] - but (bottom to top) Debian - Proxmox - Debian containers running multiple services.

This lets me parcel out that biggish server into smaller bits which each play their own role: a database server running anything database-related (Postgres, Mysql, Redis, Rabbitmq, etc.), a mail server running mail-related stuff, a 'serve' server running all user-facing services (dozens of them on a single container), an authentication server running LDAP/Kerberos and a central Letsencrypt certbot instance which serves all other services in my domains, a few build servers for different environments and, yes, a separate 'docker' server where it truly is 'containers all the way down, sonny'.

Using containers this way makes for an easily manageable system where components can be swapped or migrated without significant effort and without overcomplicating things just for the sake of it.
> "a 'serve' server running all user-facing services (dozens of them on a single container)"
I'm no expert, but I've encountered the "stick to one process per container" rule of thumb many many times. Could you please share your perspective? Thanks in advance.
Because it's an excellent rule of thumb. When you need to run more than one process in a container, you are forced to write scripts or a service handler. This adds complexity to the system, and troubleshooting is much more difficult when things go wrong. On a practical level, though, it has to be done occasionally, especially when containerizing third-party applications that were not designed for containers.
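To make the extra complexity concrete, here's a sketch of the mini-init you end up writing once a container has two processes (the `sleep` stands in for a real service; `wait -n` needs bash):

```shell
#!/usr/bin/env bash
# run_both: start two "services" and tear everything down as soon as
# either one exits, so the container fails fast instead of limping along
# half-broken.
run_both() {
    sleep 60 & a=$!              # stand-in for long-lived service A
    "$@" & b=$!                  # service B: whatever command is passed in
    wait -n                      # bash 4.3+: returns when *either* child exits
    kill "$a" "$b" 2>/dev/null   # take the survivor down too
    echo "a service exited; stopping container"
}
```

With one process per container, the runtime does all of this (signal handling, restart-on-exit) for you.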
Ansible is directly your friend here. Puppet/Chef, and others help in other ways too.
Unless your problem/complaint is deeper than that.
Bored sysadmin: All I do is install X, Y, & Z over and over again. Surely there's more to life than this.
or
Resource-minded DevOps: Hey, it seems like we install the same 4 packages on these hundreds of containers! Surely there's something we can do to optimize this!
I haven't really run into this issue that much; we've had the same setup across Ubuntu 18.04, 20.04, and 22.04, and the only thing that has broken across that was the setup of wireless.
Does it have precompiled binaries for embedded systems?
My problem is that embedded systems usually have little compute power, so compiling stuff on them is really tiresome (and cross-compiling is another nightmare); I apt-get what I can and compile as little as possible.
Also, afaiu, Nix is tied to a particular version of libc which has a high probability of not working with the vendor-installed libraries on my systems.
Depends on the embedded system. It has precompiled binaries for aarch64-linux, but idk about others. There's definitely no public binary cache for bespoke, proprietary platforms though.
> Also, afaiu, Nix is tied to a particular version of libc which has a high probability of not working with the vendor-installed libraries on my systems.
Nix ships a whole dependency tree with every package, down to and including a libc. If you have another libc, Nix won't care.
On the other hand, if your hardware isn't supported by the libc Nix ships, the natural path is probably to package the given libc in Nix and build against that. Then you are back to building from source via Nix.
Nix has pretty good support for cross-compilation, multilib, and 'remote builders', though. You can set your embedded systems up so that Nix builds happen on more powerful machines and then get copied over.
Nix evaluation itself requires a lot of RAM, though, so if you use Nix for embedded you probably still want to push packages to the weaker systems from the outside.
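As a sketch, the remote-builder setup is a one-line machine spec in `nix.conf` (host name, key path, and job count here are placeholders):

```ini
# /etc/nix/nix.conf on the weak machine
builders = ssh-ng://builder@big-arm-box aarch64-linux /root/.ssh/id_ed25519 8
builders-use-substitutes = true
```

Builds for aarch64-linux then run on big-arm-box, and `nix copy --to ssh://device ...` pushes the resulting closure over to the embedded system.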
I’m on Mac and I think Lima is terrific. To me it’s as if VirtualBox and Vagrant had a child. Inside the VM I mainly run Docker Engine and sometimes other Linux-specific applications.
Since we're here: what do people use when they need to test that their software installs well on a naked Ubuntu Server of some version? Something that avoids manual setup in VirtualBox; it can be Linux-only.
I've found Multipass https://multipass.run/ by Canonical and I wonder if anyone recommends it.
I work for Canonical, so maybe my opinion is skewed, but I personally use Multipass every day - it's super simple, it's quick, and their reverse SSH filesystem mounting is really handy
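For the naked-Ubuntu test case above, the whole Multipass loop is a handful of commands (instance name and installer script are placeholders):

```shell
multipass launch 22.04 --name clean-test      # fresh Ubuntu Server VM
multipass transfer ./install.sh clean-test:   # copy the installer in
multipass exec clean-test -- bash install.sh  # run it on a pristine system
multipass delete --purge clean-test           # throw the VM away afterwards
```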
This could be what I've been looking for. I'm trying to run SLURM on Mac, but it uses some Linux kernel specific stuff so it can't just run under Docker because the macOS kernel doesn't have those features. I've resorted to using a VM but would really prefer something lighter weight.
I swear that I used to run it on my Mac (I know I ran SGE, but I’m 90% sure I installed SLURM at some point).
Are you trying to run Mac jobs or Linux jobs? Is there a reason you need to run it in Docker (which still runs in a Linux VM, doesn’t it)?
I switched to using a more lightweight scheduler when I need to run a batch job on my Mac. But even then, I’m running Mac jobs (or generic *nix jobs), not Linux specific code/containers.
But, if you need to manage a Linux VM for SLURM (or any other reason), I really like Lima.
All nice if you need a system to try something quickly. But nowadays I just want to use infra as code. With Nix(OS) and docker/podman compose a system is clean and comprehensible. I feel like I don’t really need VMs anymore.
I agree NixOS + docker/podman-compose is a good compromise, but one has to be aware that NixOS still runs podman as root [0], which is very scary and defeats the purpose of rootless containers.
It's the official NixOS module which does that, and the official module isn't a requirement for using Podman on NixOS. Running it rootless is just a matter of defining your own systemd unit.
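Such a unit is short; a hedged sketch (service name and image are placeholders, and `podman generate systemd` can also write one for you):

```ini
# ~/.config/systemd/user/myapp.service
[Unit]
Description=Rootless container for myapp

[Service]
ExecStart=/usr/bin/podman run --rm --name myapp docker.io/library/nginx:stable
ExecStop=/usr/bin/podman stop myapp

[Install]
WantedBy=default.target
```

`systemctl --user enable --now myapp` then runs the container as your user, with no root involved.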
This replaces Vagrant + VirtualBox (VMs are configured via YAML files instead of Vagrant's Ruby DSL). It might be less flexible, but with my basic needs I haven't noticed. Importantly, there's no need to install the VirtualBox Extension Pack, with its non-free license.
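To illustrate, a minimal instance config is plain YAML along these lines (values are illustrative; Lima's bundled templates have current image URLs and defaults):

```yaml
cpus: 2
memory: "4GiB"
images:
  - location: "https://cloud-images.ubuntu.com/releases/22.04/release/ubuntu-22.04-server-cloudimg-arm64.img"
    arch: "aarch64"
mounts:
  - location: "~"
    writable: false
```

`limactl start ./myvm.yaml` boots it; no Ruby DSL involved.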
In my mind, Lima is spiritually similar to https://github.com/89luca89/distrobox, but for a Mac host