Hacker News
Linux virtual machines, with a focus on running containers (lima-vm.io)
162 points by udev4096 7 months ago | 57 comments



I'm surprised the post title says "with a focus on running containers". That would be Colima, AFAIK, which uses Lima but is a separate project. See https://github.com/abiosoft/colima. Colima is meant as a free replacement for Docker Desktop on macOS.

Lima, OTOH, is more of a nice way to run a Linux VM on Mac in a way that integrates the guest and the host systems to a good degree by default. It wraps either QEMU or Apple's VZ framework.

In my mind, Lima is spiritually similar to https://github.com/89luca89/distrobox, but for a Mac host.

For a more traditional VM GUI, there's https://mac.getutm.app/ which is a completely separate free project that also wraps QEMU or VZ. It will run any OS you want, not just Linux.


Being mean here:

Why use either? Aren't they both just "opinionated" wrappers around this basically:

    qemu-system-aarch64 \
      -M virt,accel=hvf \
      -cpu host \
      -smp cpus=8 \
      -m 8192 \
      -boot d \
      -drive if=pflash,format=raw,file=/opt/homebrew/Cellar/qemu/9.0.0/share/qemu/edk2-aarch64-code.fd \
      -drive if=virtio,format=qcow2,file=debian-12-nocloud-arm64.qcow2 \
      -net nic \
      -net user,hostfwd=tcp::2222-:22,hostfwd=tcp::6443-:6443 \
      -device virtio-gpu-pci \
      -device nec-usb-xhci \
      -device usb-kbd \
      -device usb-tablet \
      -nographic


followed by "the rest of the owl," since I doubt very seriously that debian-12-nocloud-arm64 boots up docker or containerd, and your cited command does not mount $HOME into qemu such that $(docker run -v $HOME/work:/work debian:12) would do something reasonable from within the VM.
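For completeness, a hypothetical sketch of that missing piece: sharing $HOME into the guest over virtfs/9p. The mount tag 'home' is made up, and this assumes a qemu built with virtfs support:

```shell
# hypothetical: append a 9p share of $HOME to the qemu invocation above
# ('home' is an arbitrary mount tag; requires qemu built with virtfs)
share="-virtfs local,path=$HOME,mount_tag=home,security_model=mapped-xattr"
echo "$share"
# then, inside the guest:  mount -t 9p -o trans=virtio home /mnt/home
```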

If you were interested, Brew also offers "opt" links to avoid hard-coding version numbers into paths:

      -drive if=pflash,format=raw,file=/opt/homebrew/opt/qemu/share/qemu/edk2-aarch64-code.fd \
and if one happens to already be running with $(brew shellenv) in effect, then

      -drive if=pflash,format=raw,file=${HOMEBREW_PREFIX}/opt/qemu/share/qemu/edk2-aarch64-code.fd \
will allow the script to work on both ARM and x86_64 setups.
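A step further, assuming qemu came from Homebrew: ask brew for the prefix at run time, so nothing is hard-coded at all:

```shell
# resolve the firmware path via brew itself (assumes Homebrew-installed qemu;
# harmlessly yields a bogus path when brew is absent)
FW="$(brew --prefix qemu 2>/dev/null)/share/qemu/edk2-aarch64-code.fd"
echo "$FW"
# then pass:  -drive if=pflash,format=raw,file="$FW"
```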


It seems Lima also wants to run containers, but using containerd:

> The key difference is that Colima launches Docker by default, while Lima launches containerd by default.

https://lima-vm.io/docs/faq/colima/
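In day-to-day use the difference looks roughly like this (a sketch, assuming both tools are installed; each branch is skipped when its tool is missing):

```shell
# the practical day-one difference between the two projects
img=alpine
if command -v limactl >/dev/null 2>&1; then
  limactl start default                  # Lima boots containerd in the VM
  lima nerdctl run --rm "$img" uname -r  # nerdctl is containerd's CLI
fi
if command -v colima >/dev/null 2>&1; then
  colima start                           # Colima boots dockerd instead
  docker run --rm "$img" uname -r        # plain docker CLI from the host
fi
```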


Yes, it took me a minute to realize that, but I'm excited they support Apptainer.


I recommend checking out OrbStack[0] for containers and VMs on macOS.

It's really, really, really awesome and performant. And it seems to be a one-man show. Really impressive. I switched from Docker Desktop a year ago, and it's been a night-and-day difference.

Most people I've shown it to have very quickly switched to it as well.

(I’m completely unaffiliated, just a very happy user)

[0]: https://orbstack.dev/


I had a really bad experience with OrbStack. It is fast, but the dev domains work half of the time and sometimes point to the wrong container, and the "support" was not really responsive; that is when I figured it is probably one or two people behind it. A total no-go for anything I have to pay money for and rely on at work. I spent the time building the dev-domain feature I was missing into Colima and never looked back. Performance seems to be on par, especially with vz and virtiofs virtualization.


+1, the dev is fairly active on here too https://news.ycombinator.com/user?id=kdrag0n


Anyone else getting totally bored from installing and reinstalling software environments over and over again into containers?


Ops here: bored is good. Bored means it's working properly and it's a solved problem. Also, I haven't seen a good alternative outside config management like Puppet/Chef/Ansible, or something like Nix, which has another set of problems. Containers work well enough and their positives vastly outweigh the negatives in most cases.

It's also a solvable problem for the most part. If you have a set of packages almost all the software needs, someone should probably create a base image for everyone with everything pre-installed, so your Dockerfile starts with FROM companybaseimage:latest


> Ops here, bored is good. Bored means it's working properly and it's a solved problem.

It's not a solved problem because build systems keep breaking in random places.

Granted, these problems are in general easy to solve. But we're talking about death from a thousand papercuts here. In other words, boredom from a thousand compile errors.


:shrug: A thousand compile errors sounds like the Dev Team hasn't buckled down and fixed their build script (Dockerfile). Maybe they should take some time and do that. The other beauty of containers is the ability to build in an environment the Devs control, so any changes to build servers shouldn't impact them.


Perhaps you have never been in the situation where one package forces you into a particular version, which in turn forces its dependencies to particular versions, etc., eventually pushing you into an impossible situation.

Anyway, I wish I had a Dev Team like you have to solve these problems for me ...


If you aren't leveraging the namespace abstraction of containers, why use them at all?

To be frank and honest: you don't have a dev problem here, but most likely an organization-level problem.

Namespaces and interface contracts should be dramatically reducing dependency issues, not causing them to explode.


Every time I am dealing with containers, I think about the irony of the Linus vs. Tanenbaum debate.


No, not really, but that is because I use containers as a way to create multi-purpose machines, not single-purpose application hosts. No docker or any of its clones except for those few services which are wholly based around it, e.g. Kasm [1], but (bottom to top) Debian - Proxmox - Debian containers running multiple services.

This lets me parcel out that biggish server into smaller bits which each play their own role:

* a database server running anything database-related (Postgres, Mysql, Redis, Rabbitmq, etc)

* a mail server running mail-related stuff

* a 'serve' server running all user-facing services (dozens of them on a single container)

* an authentication server running LDAP/Kerberos and a central Letsencrypt certbot instance which serves all other services in my domains

* a few build servers for different environments

* and, yes, a separate 'docker' server where it truly is 'containers all the way down, sonny'

Using containers this way makes for an easily manageable system where components can be swapped or migrated without significant effort, without overcomplicating things just for the sake of it.

[1] https://kasmweb.com/docs/latest/index.html


> "a 'serve' server running all user-facing services (dozens of them on a single container)"

I'm no expert, but I've encountered the "stick to one process per container" rule of thumb many many times. Could you please share your perspective? Thanks in advance.


Because it's an excellent rule of thumb. When you need to run more than one process in a container, you are forced to write scripts or a service handler. This introduces complexity to the system, and troubleshooting is much more difficult when things go wrong. However, on a practical level, it has to be done occasionally, especially when containerizing third-party applications that were not designed for containers.
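That hand-rolled service handler is easy to get subtly wrong, which is part of why the rule exists. A minimal sketch of what it has to do, where 'sleep' stands in for the real daemons (real images usually reach for tini, s6, or supervisord instead):

```shell
# tiny entrypoint: start two "services", stop the container when either dies
sleep 30 & pid1=$!   # "service A"
sleep 1  & pid2=$!   # "service B" (dies first in this demo)
while kill -0 "$pid1" 2>/dev/null && kill -0 "$pid2" 2>/dev/null; do
  sleep 1            # poll until either service is gone
done
echo "a service died; taking the container down"
kill "$pid1" "$pid2" 2>/dev/null
wait
echo "entrypoint exiting"
```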


Ansible is directly your friend here. Puppet/Chef and others help in other ways too.

Unless your problem/complaint is deeper than that.

Bored sysadmin: All I do is install X, Y, & Z over and over again. Surely there's more to life than this.

or

Resource-minded DevOps: Hey, it seems like we install the same 4 packages on these hundreds of containers! Surely there's something we can do to optimize this!


Even with scripting it is tedious because the scripts keep breaking with new versions of packages.

And the number of required packages gets bigger and bigger.


Very true.

But with Nix!...Now you have two problems.


I haven't really run into this issue that much; we've had the same setup across Ubuntu 18.04, 20.04, and 22.04, and the only thing that broke across those was the wireless setup.


If I have to do it more than twice, it's getting automated.


It's probably because everybody has been doing that for a decade and we have no good, straight explanation of why.

The most naive thought is that it's good for security reasons.

It kept a lot of people busy during the ZIRP era, when money was abundant and corporations were moving to the cloud.

My guess is either the industry is gonna stagnate or a new, simpler solution to package and run applications is gonna appear sooner or later.


Maybe try nix?


Does it have precompiled binaries for embedded systems?

My problem is that embedded systems usually have little compute power so compiling stuff for them is really tiresome (and cross-compiling is another nightmare), so I apt-get what I can and compile as little as possible.

Also, afaiu, Nix is tied to a particular version of libc which has a high probability of not working with the vendor-installed libraries on my systems.


Depends on the embedded system. It has precompiled binaries for aarch64-linux, but idk about others. There's definitely no public binary cache for bespoke, proprietary platforms though.

> Also, afaiu, Nix is tied to a particular version of libc which has a high probability of not working with the vendor-installed libraries on my systems.

Nix ships a whole dependency tree with every package, down to and including a libc. If you have another libc, Nix won't care.

On the other hand, if your hardware isn't supported by the libc Nix ships, the natural path is probably to package whatever libc you need in Nix and build against that. Then you are back to building from source via Nix.

Nix has pretty good support for cross-compilation, multilib, and 'remote builders', though. You can set your embedded systems up so that Nix builds happen on more powerful machines and then get copied over.

Nix evaluation itself requires a lot of RAM, though, so if you use Nix for embedded you probably still want to push packages to the weaker systems from the outside.
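As a sketch of that workflow ('my-board' is a placeholder hostname, this assumes flakes are enabled, and the build is gated behind an env var so it doesn't kick off a cross toolchain build by accident):

```shell
# cross-compile on the fast machine with nixpkgs' cross package set,
# then push the closure to the device instead of evaluating Nix there
drv="nixpkgs#pkgsCross.aarch64-multiplatform.hello"
if command -v nix >/dev/null 2>&1 && [ "${RUN_NIX_DEMO:-0}" = "1" ]; then
  nix build "$drv"
  nix copy --to ssh://my-board ./result
fi
echo "$drv"
```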


I've been enjoying gokrazy. It makes cross-compiling and updating my rpi server a breeze. Still an early project, but it works well for my needs.


So for you, maybe not Nix? I'd have never known given the original comment.


And now you have two problems...


Isn't it always a matter of choosing which problems you prefer?


The problems that Nix gives you are much nicer to have than the problems of the FHS absurdity.


I genuinely wonder, who is this for? I'm having a hard time telling from the website.


It's basically a building block for Docker Desktop replacements. The "adopters" section is informative: you've got podman-desktop and Finch there.


I’m on Mac and I think Lima is terrific. To me it’s as if VirtualBox and Vagrant had a child. Inside the VM I mainly run Docker Engine and sometimes other Linux-specific applications.


Since we're here: what do people use when they need to test that their software installs well on a naked Ubuntu Server of some version? Something to avoid manual setup in VirtualBox; it can be Linux-only.

I've found Multipass https://multipass.run/ by Canonical and I wonder if anyone recommends it.



I work for Canonical, so maybe my opinion is skewed, but I personally use Multipass every day - it's super simple, it's quick, and their reverse SSH filesystem mounting is really handy
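For the "does it install on a clean image?" case above, the flow looks roughly like this ('mypkg.deb' is a placeholder artifact; the whole block is a no-op when multipass isn't installed):

```shell
# launch a pristine Ubuntu Server VM, try the install, throw the VM away
vm=pkg-test
if command -v multipass >/dev/null 2>&1; then
  multipass launch 22.04 --name "$vm"
  multipass transfer ./mypkg.deb "$vm":/home/ubuntu/mypkg.deb
  multipass exec "$vm" -- sudo apt-get install -y /home/ubuntu/mypkg.deb
  multipass delete --purge "$vm"
fi
```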


Good to hear. I'll definitely check it out.


I used vagrant in the past for this, but that's years ago.


This could be what I've been looking for. I'm trying to run SLURM on Mac, but it uses some Linux kernel specific stuff so it can't just run under Docker because the macOS kernel doesn't have those features. I've resorted to using a VM but would really prefer something lighter weight.


Docker always uses a VM on the Mac given there is nothing in the Mac kernel that can directly emulate Linux system calls.


The batch scheduler?

I swear that I used to run it on my Mac (I know I ran SGE, but I’m 90% sure I installed SLURM at some point).

Are you trying to run Mac jobs or Linux jobs? Is there a reason you need to run it in Docker (which still runs in a Linux VM, doesn’t it)?

I switched to using a more lightweight scheduler when I need to run a batch job on my Mac. But even then, I’m running Mac jobs (or generic *nix jobs), not Linux specific code/containers.

But, if you need to manage a Linux VM for SLURM (or any other reason), I really like Lima.


For macOS, Macpine has a similar premise: https://github.com/beringresearch/macpine

With a view to using lightweight Linux VMs (Alpine) to:

* Easily spin up and manage lightweight Alpine Linux environments.

* Use tiny VMs to take advantage of containerisation technologies, including Incus, LXD and Docker.

* Build and test software on x86_64 and aarch64 systems.


Will this do any gui/X? (Didn’t see any mention on the site faq)

Also can it interact with the host microphone and play sounds thru host speaker etc?


See "Adopters" section of the GitHub README:

https://github.com/lima-vm/lima?tab=readme-ov-file#adopters


Thanks, although I didn't mean a GUI front-end or management of Lima. I meant: can Lima run Linux GUI/X applications?


Yes, using X forwarding and XQuartz for instance. However, there are some bugs. https://github.com/lima-vm/lima/issues/2099


Looks like Incus/LXD/systemd-nspawn.

All nice if you need a system to try something quickly. But nowadays I just want to use infra as code. With Nix(OS) and docker/podman compose a system is clean and comprehensible. I feel like I don’t really need VMs anymore.


I agree NixOS + docker/podman-compose is a good compromise, but one has to be aware that NixOS still runs podman as root [0], which is very scary and defeats the purpose of rootless containers.

- [0] https://github.com/NixOS/nixpkgs/issues/259770


It's the official NixOS module which does that, and the official module isn't a requirement for using Podman on NixOS. Running it rootless is just a matter of defining your own systemd unit.


Lima is very useful on Mac where you can’t launch OCI containers natively.


Vagrant + VirtualBox + Ansible user. No Docker (I can't justify the time it would cost me to migrate).

Does this replace Vagrant, VirtualBox, or something else, and I have the wrong paradigm?


This replaces Vagrant + VirtualBox (VMs are configured via YAML files instead of Vagrant's DSL). It might be less flexible, but with my basic needs I haven't noticed. Importantly, there's no need to install the VirtualBox Extension Pack, with its non-free license.
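Roughly, the mapping looks like this (a sketch; 'ubuntu' is one of Lima's bundled templates, and the block is a no-op when limactl isn't installed):

```shell
# rough Vagrant-to-Lima phrasebook
inst=ubuntu
if command -v limactl >/dev/null 2>&1; then
  limactl start "template://$inst"   # ~ vagrant up (config is YAML, not Ruby)
  limactl shell "$inst" uname -r     # ~ vagrant ssh -c 'uname -r'
  limactl stop "$inst"               # ~ vagrant halt
  limactl delete "$inst"             # ~ vagrant destroy
fi
```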


I prefer Ansible syntax over the Containerfile


Can it become an alternative to Vagrant + VirtualBox?


Indeed!



