I run apps like that inside a docker container, which is a little bit better, I think:

- security/library updates come in with the base OS image, which I update religiously; my images are rebuilt (via "make", see the sketch after this list) whenever the base updates, too

- the apps are effectively "somewhat" sandboxed

- I only bind-mount the directories I want the app to have access to (e.g. -v "$HOME/Pictures:/home/user/Pictures", -v "$HOME/.config/appname:/home/user/.config/appname", etc.)
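Roughly, the "make" part boils down to something like this (image name is a placeholder, and the recipe line must start with a tab); the --pull flag re-fetches the base image before building, so a rebuild picks up the latest security updates from the base:

  # Makefile
  IMAGE = appname-image

  .PHONY: build
  build:
  	docker build --pull -t $(IMAGE) .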

While this isn't ideal for graphical apps, since you need to share the X11 socket for things to work well, which comes with its own set of problems...

... at least no app can change _my_ bashrc, simply because it can't even see it, let alone edit it.

Going one step further, some bind mounts can also be made read-only (":ro") to ensure the app cannot change their contents.
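Put together, a typical invocation looks roughly like this (image name, container username and the read-only directory are just placeholders, and the X11 part assumes your host's X server accepts local connections, e.g. via xhost):

  docker run --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    -v "$HOME/Pictures:/home/user/Pictures" \
    -v "$HOME/.config/appname:/home/user/.config/appname" \
    -v "$HOME/Templates:/home/user/Templates:ro" \
    appname-image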




I also use certain desktop programs via docker, but like you say, it isn't ideal. It's not just the X11 socket (which is a potential security issue on its own), but also audio (both ALSA and PulseAudio can be made to work, but you have to bind-mount the correct sockets, use the correct user IDs and set the correct environment variables) and video (usually hw-accelerated, so you have to install the correct OpenGL libraries for your hardware and keep them on versions that match the drivers your host is running).
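For PulseAudio specifically, what I do is roughly this (it assumes the usual socket location under /run/user/<uid> and that the container runs as your host user; the image name is a placeholder):

  docker run --rm \
    --user "$(id -u):$(id -g)" \
    -e PULSE_SERVER="unix:/run/user/$(id -u)/pulse/native" \
    -v "/run/user/$(id -u)/pulse/native:/run/user/$(id -u)/pulse/native" \
    appname-image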

It all works, and provides some benefits (it is also fun, if you're into sysadmining!), but it kind of breaks one of the basic promises of "dockerizing" - that the containers are independent of what the host looks like.

e.g. I cannot just take the Dockerfile for a video streaming app container from my desktop with an Nvidia GPU and use it as-is on my laptop with an Intel GPU.
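To illustrate, even the run flags already differ (hypothetical image name; the Nvidia variant assumes the nvidia-container-toolkit is installed on the host), and on top of that the image itself needs different userspace GL libraries (Mesa for Intel vs. the Nvidia ones):

  # desktop, Nvidia GPU:
  docker run --rm --gpus all streaming-app

  # laptop, Intel GPU (pass the DRI render device through instead):
  docker run --rm --device /dev/dri streaming-app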



