I'm excited to see this finally happen—acknowledgement that Docker Desktop has been an essential part of the process of onboarding devs (who often use the command line, but sparingly) into containerized development with a gentle on-ramp.
The big question is whether Podman Desktop will be (a) stable and not a memory-hog, (b) make container workflows on Mac/Windows as simple (conceptually at least) as on Linux, and (c) be a sustainable effort for the community that's still extremely Red Hat-centric.
To that last point, I still see very little adoption outside of the Red Hat ecosystem. It seems like `docker-ce` is still installed on most servers, `docker-compose` for lightweight app orchestration, and when people use Kubernetes, few people know or care what underlying container management daemon is running.
I can write a docker-compose file and run it on my laptop.
Can't do that with K8s without spending two days of pre-study. And getting it ready on a basic laptop is not fun. At all.
I can do it. It's still overly complex for my needs (local development with a db, some services...). I don't need the complexity of k8s for that.
In fact, I don't need a big complex solution for most of my deployments. Containers aren't only for the Cloud, or for Cloud-like infra. They're convenient for creating and isolating some services on one (yes, 1) server without having to fight to install the dependencies. `docker-compose up` and my blog is up, or some random website, or a logger based on Kibana, or whatever.
So yup, I don't care about k8s. I'll care about it when the need arises, knowing full well that there are costs and benefits to it. But not for my current low needs.
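The single-server workflow described above can be sketched as a minimal compose file. This is a hypothetical example, not one from the comment; the image names, ports, and password are all illustrative:

```yaml
# Hypothetical docker-compose.yml: a blog plus its database on one box.
version: "3.8"
services:
  blog:
    image: ghcr.io/example/blog:latest   # illustrative image name
    ports:
      - "8080:80"
    depends_on:
      - db
  db:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: example         # placeholder, not for production
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
```

A single `docker-compose up -d` brings both services up, with all the dependencies isolated inside the containers rather than installed on the host.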
I work on an operations team of about 2.5 people who handle everything for a small SaaS company, and we're slowly containerizing. New systems are built and deployed in Docker, and the legacy stuff is slowly getting there. We're using Swarm, which I'm thinking is probably more like the middle ground.
I've tried to make podman work so many times but one of my biggest issues is the lack of great documentation and support from third parties (that's okay!). I just have a hard time finding good examples well explained to gain anything out of it.
I do think my issues are coming from not fully understanding rootless/rootful stuff combined with SELinux doing its thing as well.
I like docker compose to deploy stuff.
I do use podman+distrobox (Steam Deck & Fedora Kinoite)
It looks like I can finally see my distrobox containers with Podman Desktop. The first time I installed it, months ago, I couldn't see them.
Anyone have any experience running Windows and WSL2 with Podman Desktop? I'm running W11, with fedora in WSL2. What I've been doing with Docker Desktop is running Docker Desktop on Windows startup, and that gives me access to the docker machine in both powershell and inside my wsl2 environment (for the latter, docker-desktop installs a binary inside fedora-wsl2 at `/usr/local/bin/docker` that communicates to the host)
So far, I uninstalled Docker-Desktop and installed Podman-Desktop, and now I can run `podman` from powershell but not from fedora. I'm about to try `sudo dnf install podman` and hope it connects to the podman-machine? I dunno, it's not exactly clear
> now I can run `podman` from powershell but not from fedora. I'm about to try `sudo dnf install podman` and hope it connects to the podman-machine? I dunno, it's not exactly clear
One thing you could do is just symlink or wrap the Windows podman.exe as `podman` on your WSL guests, and rely on WSL interop at the CLI instead of sharing a socket. This is probably what Docker Desktop does, based on your description (I thought it (used to?) share(s) a socket from the dedicated WSL guest it creates to your other WSL guests).
Alternatively, if you install Podman Desktop via Scoop and have WSL interop enabled for your guests (so that your Windows binaries appear on your Linux guests' PATHs), I think you'll get the same `podman` as your Windows PowerShell sessions access onto your WSL guests for free.
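The wrapper approach above can be sketched as a tiny script that relies on WSL interop to invoke the Windows client. This is a hedged sketch: it assumes interop is enabled (Windows `.exe` files are runnable from the guest) and that `podman.exe` is on the Windows PATH; the install location is illustrative:

```shell
# Write a wrapper that forwards all arguments to the Windows Podman
# client via WSL interop. Staged in /tmp first; installing into
# /usr/local/bin needs sudo.
cat > /tmp/podman-wrapper <<'EOF'
#!/bin/sh
# Forward everything to the Windows-side Podman client.
exec podman.exe "$@"
EOF
chmod +x /tmp/podman-wrapper

# Then, inside the WSL guest:
#   sudo install -m755 /tmp/podman-wrapper /usr/local/bin/podman
```

After that, `podman ps` in the guest talks to the same machine your PowerShell sessions use.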
1. In the past, Docker Desktop created a managed WSL VM to run containers.
2. At some point, it included the option to use a WSL distro instead, but you had to tell it explicitly.
3. Nowadays it detects whether a default WSL distro is present and uses it to run containers automatically. Otherwise it creates a managed WSL distro just to run containers.
As far as I understand, Podman Desktop is still at (1). You can't tell it to use your own WSL distro.
> I'm about to try `sudo dnf install podman` and hope it connects to the podman-machine? I dunno, it's not exactly clear
I think podman-machine is meant to be executed on the host, but worth a try anyway
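If the goal is to reach the Windows-side podman machine from inside Fedora, `podman system connection` may be worth a look. A hedged sketch; the user, port, and identity path below are placeholders, and the real values would have to come from the machine's config on the Windows side:

```shell
# Register the podman machine as a remote connection over its forwarded
# SSH port (all values illustrative; inspect the machine on the Windows
# side to find the real port and identity file):
podman system connection add podman-machine \
  --identity ~/.ssh/podman-machine-default \
  ssh://user@localhost:12345/run/user/1000/podman/podman.sock

# Make it the default so plain `podman ps` etc. target the machine:
podman system connection default podman-machine
```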
Docker Desktop provides some compelling options for lowering the barrier of entry for selfhosting. Today you can take a Windows laptop and run just about any Linux service, with pretty fast networking and host disk access. No CLI experience required. The only major missing piece is inbound networking from the internet. Tunneling[0] is my preferred solution to that problem. Or if you don't want to expose your services to the internet there's already a Tailscale extension[1].
It's exciting and important to have an open source alternative in this space.
What value is added by swapping Docker out for an alternative on ECS? It's (at least as I understand it) basically an implementation detail, and well beneath the "covered by AWS support, don't touch" surface
You're right in a way, I really don't care what AWS does under the hood. The bigger issue is that I don't want any avoidable discrepancies between what I'm testing with on local and what I'm deploying. Obviously it's never 1:1 because my local isn't an AWS cloud infrastructure, but up til now the container engine isn't a variable.
I mean, that’s a worthy goal but it’s not necessary: discrepancies like that are uncommon and since there are many other potential differences between your desktop and the server environment you need to have a robust test suite in any case to make sure you aren’t accidentally over-coupling your code to your development system.
Are you saying differences between Podman and Docker are uncommon? My untested assumption is that they're not uncommon just given how new Podman is. Testing is nice but you don't want to have to wait for that to run to find issues that you could make immediately obvious without it.
Yes. I’ve been using Podman for ages and a lot of the Kubernetes world is no longer using dockerd, and it generally just works. You can shoot yourself in the foot with policy choices but there’s a 0% chance that the ECS team would ship something like that without heavy testing.
It’s also not something you need instant feedback on continuously while you work. The kind of things switching container runtimes will expose are unlikely to get caught by your local tests, so you’re most likely to only find them on your servers, with lots of data and higher activity.
Bear in mind that there are alternatives: JavaFX and Compose for Desktop are the ones I know best. They can be used from high level and popular languages. JavaFX is particularly good for desktop apps and can be compiled down to purely native code that starts as fast as an app written in C++ (likewise for Compose but the experiments with that are newer).
There are some downsides: fewer people know them than with HTML. There are a few tweaks like window styles on macOS it could use to be more modern. On the other hand, it's easy to learn and you benefit from a proper reactively bindable widget library, like table and tree views if you need those. For developer tools such widgets can be useful.
CfD uses Material Design of course, but you can customize it.
Having written desktop apps of varying complexity in all these frameworks, I can't say Electron is clearly superior. It is in some cases (e.g. if I was wanting to write a video conferencing app then it makes sense to re-use Google's investment into Hangouts/Meet for that), but it's also worse in some cases. For instance the multi-process model constantly gets in the way, but you can't disable it as otherwise XSS turns into RCE.
You mean developers who not only know 4-5 different language-runtime-GUI stacks well, but also casually maintain 4-5 different versions of the same app? I don’t think they’re “gone”; I think they spend their time in better ways.
Until liability finally becomes a thing in computing like in other industries; then let's see how much everyone cares.
It is slowly happening: lawsuits against failed consulting projects, returns in digital stores, fixes free of charge due to warranties, security regulation changes, ...
I hate Electron apps, too, but I've not heard people complain about security before. What's the security problem with Electron apps?
I understand that you're basically running a Node.js instance for each app, but why is that more insecure than running, say, a GTK app? Since GTK and Node.js are written in C and C++, respectively, my gut instinct would be to assume they're equally likely to have security bugs.
The difference is that no one would load remote-user-supplied C++ code at runtime and expect it not to result in a vulnerability, but people remotely source JS, or allow remote-user-supplied HTML to interact with their JS. And the stakes are higher than they would be in a web browser, because Node (and therefore Electron) has APIs to e.g. read and write your filesystem, while browsers have sandboxes to block that.
(Electron offers sandboxing too, but you're likely to actually want to use the OS features so it's not as simple as always disallowing the access.)
It effectively is if you're ready to ship your dependencies. If you're happy to depend on major versions that come with the system, qt isn't bad either.
Does the podman VM have Rosetta 2 acceleration or does it still use the slow qemu runtime on M series Macbooks? This is the main reason I switched to OrbStack which is a very promising alternative to all of this. Free for now but looks like they intend on making it a paid app in the future. The speed difference is significant.
Qemu is the hypervisor in this situation. Which doesn't necessarily preclude having Rosetta acceleration of AMD64 binaries within the ARM64 Linux guest itself.
That said, as far as I know, the only official way to use Rosetta inside a Linux guest is Virtualization.framework, which allows mounting a Rosetta binfmt handler via virtiofs. So a Qemu-based machine is also going to use Qemu's emulation inside the VM to handle running amd64 images, not Rosetta.
Somewhat unrelated (but maybe it's actually related). I've been trying to setup a local development environment for Kubernetes on mac. On mac, docker does not allow for connecting directly to the pods, you need to either setup a tunnel or service. The problem for my use case is that each pod could have a different port and local address and a load balancer does not make sense for what I'm building which requires connecting directly to a specific pod. Does Podman solve this problem? I think the problem is dockerd on mac. As of now, I just have a completely separate environment in the cloud that I run test on but it's very inefficient.
I tried switching to nerdctl and containerd but the problem is that I have existing workflows that make use of docker_init files and nerdctl breaks those.
>docker does not allow for connecting directly to the pods, you need to either setup a tunnel or service
That isn't docker; Kubernetes is designed to work that way. To be infinitely horizontally scalable and automatically handle pods (and servers) going down, caring about which instance you are talking to is generally a bad thing.
While this may not solve your problem, I use docker-mac-net-connect, which lets me connect directly to containers running in the 172.x.x.x range. By default, macOS does not support routing from the host to containers by IP: https://github.com/chipmk/docker-mac-net-connect.
In the kubernetes case (I am using k3d on my Mac) pods aren't directly routable but with metallb, load balancer IP addresses can be connected to directly and there's no worry about port conflicts as there might be with k3d's default servicelb implementation (this is my bootstrap script https://github.com/andrewmackrodt/k3d-boot).
If you need to connect to specific pods directly and it doesn't make sense to change your pod config, kubectl port-forward may suffice?
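For the "connect to one specific pod" case, the port-forward approach can be sketched as below. Pod names, labels, and ports are placeholders, not taken from the original comment:

```shell
# Forward a local port straight to one specific pod, bypassing Services.
kubectl port-forward pod/my-pod-0 8080:80

# Or pick the pod dynamically by label (label and index illustrative):
POD=$(kubectl get pods -l app=my-app -o jsonpath='{.items[0].metadata.name}')
kubectl port-forward "pod/$POD" 8080:80
```

The pod is then reachable at localhost:8080 for as long as the command runs, which sidesteps both the load balancer and per-pod port conflicts, at the cost of one forwarding process per pod.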
We are a Podman shop. Podman itself works fine, but for Compose we kinda have to use docker-compose (and very soon "docker compose", due to the former getting deprecated in a week). I found several major and minor issues when I fed our Compose files to podman compose; I even created a pull request for one that was easily fixable with a single-line change, but I'm wary of using it for production right now. Hoping Podman compose gets some love from a big player, b/c it seems RH is only focused on Kube.
These days Podman implements the docker API, so it's easier to use docker-compose. I am not sure if there's any compelling reason to use podman-compose any more.
I suppose it would be nice if podman.socket was enabled by default. Socket activation means that it wouldn't actually launch `podman system service` unless something connected to it.
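The socket-activation setup described above is a short sequence. A sketch, assuming a systemd-based distro with rootless Podman; the `systemctl` line is shown as a comment since it only makes sense on such a host:

```shell
# Enable the user-level Podman API socket; `podman system service` is
# only launched when a client actually connects:
#   systemctl --user enable --now podman.socket

# Point Docker-compatible clients (docker-compose, etc.) at that socket:
export DOCKER_HOST="unix://${XDG_RUNTIME_DIR:-/run/user/$(id -u)}/podman/podman.sock"
echo "$DOCKER_HOST"
```

With `DOCKER_HOST` set, an unmodified docker-compose talks to Podman's Docker-compatible API instead of dockerd.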
For me the problem with podman-compose is that it will be continuously chasing the moving target of what docker-compose is able to do. Hence there will forever be developers trying it out, finding out that "it doesn't work" and, in wanting to just get on with their jobs, going back to docker-compose... and in the process filing "podman" away in that little box in the back of their head of stuff that doesn't work.
Implementing the docker API seems to have worked out much better, I suppose it too is a moving target but it seems to move slower; I can at least download some demo docker-compose.yaml files and process them with docker-compose talking to podman and they worked out of the box!
podman-compose was never official, though- was it?
I'm still using it, because the last time I had time to mess with it, docker-compose still required a lot of fiddling to work with podman. From the other replies to your comment, I guess it must have gotten better, so I'll guess I'll try docker-compose again soon.
I’ve been using this on Mac for a few months now. It’s great to have an alternative to Docker Desktop, although it doesn’t feel fully there yet, on Mac at least. What is available in the UI is slick, but so many features are missing that I often just end up back in the shell. I’m also experiencing a lot of strange behaviours in terminal sessions via the UI, particularly hung sessions, and it’s way too easy to accidentally close the session by switching UI tabs.
Earlier today I was browsing the Podman Desktop website, and I'm pretty sure it was at version 0.15; I was telling some people afterwards that Podman Desktop would probably reach stable sooner rather than later. However, I did not expect it to be this soon, and it's now suddenly stable at version 1.01. Apparently container software and its ecosystem move at a blazing speed.
Podman (Desktop) on Windows ate all my RAM when I left it up for a few days. Somehow it leaked wsl.exe / podman.exe process handles every time it ran the "podman machine" commands it uses to keep track of its VM.
I think this is in reference to RedHat prematurely ending support for CentOS 8 and basically abandoning its former model of being a slow-moving, stable RHEL clone[0]