Heroku-style deployments with Docker and Git tags (ricardoanderegg.com)
124 points by polyrand on April 25, 2021 | 40 comments



Since it hasn't been mentioned yet, I'll recommend Dokku. It does exactly what this author wants, and I use it to deploy all my side projects.

Dokku has been around for years but still gets significant improvements. It doesn't support scaling to multiple boxes (at least not without some work of your own), but it's great for smaller deployments.

https://dokku.com/


Yes! I have personally used CapRover[0], which is similar. This post also has the objective of teaching some git concepts and showing other ways to do things, but I would also recommend using something like CapRover/Dokku to start with.

[0] https://caprover.com/


+1 for Dokku, had so much pleasure using it for running a lot of smaller projects within one of my previous companies.


I can't praise Dokku enough. I am coming up to two years of running all my infra on it, not one problem that I hadn't caused in that time. Highly recommended.


In the docker-compose.yml file it has:

      # expose port to localhost too
      - "8000:8000"
I've never used ufw-docker, but the normal Docker behavior here would publish 8000 to the outside world, allowing someone to bypass your proxy and visit http://example.com:8000 directly. Does ufw-docker prevent that? The comment hints that they probably want to use "127.0.0.1:8000:8000" instead of "8000:8000" to be explicit, or at the very least the post should call out that ufw-docker is doing something special to block it, because by default "8000:8000" is quite dangerous to use with Docker and iptables.
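For reference, a minimal sketch of the loopback-only binding being suggested (the service name and port are just examples):

```yaml
services:
  web:
    ports:
      # Bind only to loopback: the reverse proxy on the host is the sole
      # public entry point, so port 8000 is never reachable from outside.
      - "127.0.0.1:8000:8000"
```

With this form, Docker's iptables rules only accept connections arriving on 127.0.0.1, regardless of what the firewall is doing.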

I'm also curious about this quote:

> Docker does not play well with iptables, so I use ufw-docker to set up the firewall.

I never had any issues using iptables with Docker. What doesn't play nicely?


There's no problem if you use only basic iptables rules, but ufw uses iptables in a very complicated way. If you want ufw and Docker to play well together, you have to disable Docker's iptables manipulation or use ufw-docker like the author does. More about that in the ufw-docker documentation: https://github.com/chaifeng/ufw-docker


The solution in chaifeng/ufw-docker isn't beautiful either, because now you have to reconfigure UFW every time you deploy a new container, since each container gets a different IP address. And you probably don't want to use an IP range for UFW, because that brings you right back to square one.

A better solution is to use an ingress controller such as Traefik and expose only it to the Internet. Then, when deploying later containers, don't use `-p` at all; instead, ask Traefik to route the traffic to the container.
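A rough sketch of that setup, assuming Traefik v2 and hypothetical image/host names; only Traefik publishes a port, and the app is discovered via labels:

```yaml
services:
  traefik:
    image: traefik:v2.4
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
    ports:
      - "80:80"   # the only published port on the host
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro

  app:
    image: myapp:latest   # hypothetical image
    # no `ports:` at all -- Traefik reaches it over the shared Docker network
    labels:
      - traefik.enable=true
      - traefik.http.routers.app.rule=Host(`example.com`)
      - traefik.http.services.app.loadbalancer.server.port=8000
```

New containers get routed by adding labels, so UFW never needs to know about them.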


I am curious about the strategy of building the prod container on the prod infrastructure.

What level of concern is there about the reproducibility and reliability of such builds? I've had docker containers stop building due to unknown dependencies disappearing or ageing out.

And then, depending on what kind of code base it is, you could have some very intensive tasks and/or large containers involved in doing that build, which may itself cause some degradation of service or require a bigger server than you would need just to run the app.

I'd be interested in seeing a version of this that pulled the image from a CI/CD container registry - or does that defeat most of the purpose?
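A registry-based variant would be a small change to the compose file: reference a tag your CI pushed instead of building on the server (the registry URL and tag variable here are hypothetical):

```yaml
services:
  web:
    # Pulled from a CI/CD registry instead of built on the prod box.
    # registry.example.com and TAG are placeholders for your own setup.
    image: registry.example.com/myapp:${TAG:-latest}
    ports:
      - "127.0.0.1:8000:8000"
```

The deploy hook then becomes roughly `docker compose pull && docker compose up -d`, trading the on-server build cost for a registry dependency.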


I would definitely not do it that way in a more serious project. But for side projects that don't have a ton of traffic I found it's perfectly fine (and faster) to just build the container on the prod server.

The website for which I'm using this method does a multi-stage Docker build, compiling SQLite in the first stage. Everything has worked perfectly so far.


For a simpler option without Docker, but still with Heroku-style git push, check out Polybox[0].

Polybox[0] is an itty-bitty PaaS that uses git push to deploy micro-services and websites on your own servers.

[0] https://github.com/mardix/polybox


The current state of deployments (and dev environments in general) is a hot mess, especially for people who are not active developers, just occasional programmers. I update a few different web apps every year (or two), and each time it is a nightmare. Unless, of course, I'm doing a simple drag and drop in Filezilla; in that case, it's effortless. The difference? The ridiculous tools I need to use just to get the code compiled and ready to deploy, let alone figure out how to actually connect to the server. This can make even the simplest changes not worth it, because I don't want to lose a day debugging the inevitable problems that happen when I try to get the dev environment started. And yes, sadly, they are inevitable, especially after letting things sit for a year.


I have found that investing in writing a README saves a lot of time when revisiting projects. Pinned dependencies ensure I rarely have to deal with returning to a broken or non-reproducible development environment.


The Filezilla/(s)FTP approach still works just fine for simple use cases such as yours. For anything remotely mission-critical, though... I seriously do not long for the days of discovering the hard way that someone, sometime, nobody knows when, fixed something straight in prod over FTP, and that I just overwrote it by pushing the latest file from the repo. I've been burned by this way too often for how young my career is...


The wonderful thing is that all the tools that worked 10 years ago largely still do. It's still absolutely fine to stand up a rails site behind nginx on a single host and expand from there when you need to (or, more accurately, when you can, because redundancy always was a thing).


But the same Ruby version from ten years ago probably won't just work anymore, and that's the problem.


Why would the Ruby interpreter suddenly stop running?


Not a huge fan of Kubernetes myself but Deployments solve that problem out of the box with much less code.


I also use Kubernetes at work and somewhat agree with your point, but there are a few downsides.

The method I explain in this post takes 5 minutes to set up on any VPS, 1 minute if you have a template to copy-paste. You can maybe do the same using k3s, but I still think there's more complexity involved.

This method also builds the Docker containers in the same VPS, so you don't need a container registry or a build server.

I came up with this way of doing things because I was already doing all that by hand. Then I learned about git hooks and custom remotes and I thought it was a handy way to automate it without crossing the k8s line.

I personally find it a lot simpler. If you remove the comments and wrap a couple of things in bash functions it's quite straightforward.


> k3s, but I still think there's more complexity involved.

what complexity did you have with k8s?

> If you remove the comments and wrap a couple of things in bash functions it's quite straightforward.

ah yeah...


Right, and it does it in a way where everything doesn't fall apart if your stack is more involved than a web server.

For example, in the author's script, wanting to run a background worker ramps up the complexity by a lot, but with Kubernetes this would be adding 1 more Deployment and you're done.

For most of my own stuff I just run 1 copy on 1 server and configure nginx to queue up requests that fail due to a 502 and then release them in the order they were received when the back-end is available again. This way you don't have hard down time. While your app restarts during a deploy the user only gets a busy mouse cursor for a few seconds while your app boots up. No load balancer needed. Lua scripts and nginx are a powerful combo.
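Plain nginx can only approximate the hold-and-release queueing described above (the full version needs Lua, e.g. via OpenResty), but a hedged sketch of the retry-on-502 part might look like this; the backend is listed twice so `proxy_next_upstream` has somewhere to retry while the app restarts:

```nginx
upstream app {
    server 127.0.0.1:8000 max_fails=0;
    server 127.0.0.1:8000 max_fails=0 backup;  # same app, allows a retry
}

server {
    listen 80;
    location / {
        proxy_pass http://app;
        # Retry instead of surfacing a 502 while the backend reboots.
        proxy_next_upstream error timeout http_502;
        proxy_next_upstream_timeout 10s;
        proxy_next_upstream_tries 5;
    }
}
```

The true "queue and release in arrival order" behavior the commenter describes would live in a Lua access phase handler rather than in stock nginx directives.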


Hi! Author here. It's true that my method can get complex very fast depending on what you want to do. I also wrote this to explain some concepts about git / docker.

If I were running more than 1 web service I would maybe take the reverse proxy out of docker-compose to manage it separately. Each service would be its own git remote.

If I had even more stuff, I would probably run containers separately and explicitly create docker networks. But yes, now I may have reached a point where a standard k3s deployment is easier. This is just a method I found useful for my use case and I believe that to just run 1 or 2 web services on a VPS it's easier to set up than k8s/k3s.

At the same time, I would argue that running a single monolithic service on a single (powerful) VPS is more than enough for more cases than people believe.

I would also like to learn more about your approach!


> At the same time, I would argue that running a single monolithic service on a single (powerful) VPS is more than enough for more cases than people believe.

This is very true and I'm in the same camp as you in that regard but a lot of popular web framework tech stacks include both a web server and a background worker to process tasks outside the request / response cycle. The background worker isn't exposed over an HTTP port. It's a process that uses the same code base / Dockerfile as your web server but runs a different command. It would need to be up during deployments and also get updated to the new version during your deploy.

Even with a monolithic app a lot of apps using Flask, Django, Rails, Laravel and others will at least end up having a web + worker, and then you have the usual postgres / mysql + redis too.

Then there are also certain frameworks like Rails where you may want to run a separate websocket service that also uses your same code base but runs in a dedicated process to handle broadcasting websocket events. This one would also need to be proxied, since it has to be accessible over the internet to work.

I'd be curious to see what your shell script and overall strategy looks like with the above set of requirements because IMO the above (minus the Rails websocket server) applies to a huge array of apps out there which use the web + worker + db + redis + maybe websocket server combo.
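For concreteness, a minimal compose sketch of that web + worker + db + redis shape (all service names, commands, and images here are hypothetical stand-ins):

```yaml
services:
  web:
    build: .
    command: gunicorn myapp.wsgi          # hypothetical web entrypoint
    ports:
      - "127.0.0.1:8000:8000"
    depends_on: [db, redis]

  worker:
    build: .                              # same image as web...
    command: celery -A myapp worker       # ...different command, no port
    depends_on: [db, redis]

  db:
    image: postgres:13
    volumes:
      - pgdata:/var/lib/postgresql/data   # state survives redeploys

  redis:
    image: redis:6

volumes:
  pgdata:
```

The key point the commenter makes is visible here: `worker` shares the web image and must be rebuilt and restarted on every deploy, even though nothing routes traffic to it.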


I'd love to read more about your approach.


That's because you don't deal with state


If you need persistent state, is there much of a difference between this approach and a StatefulSet with replicas=1?
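For readers unfamiliar with the comparison, a sketch of such a single-replica StatefulSet (all names and sizes are illustrative); the volume claim template gives the one pod a stable persistent volume across restarts:

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: myapp                # hypothetical names throughout
spec:
  replicas: 1
  serviceName: myapp
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: web
          image: myapp:1.0
          volumeMounts:
            - name: data
              mountPath: /var/lib/myapp   # state lives on the PVC
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```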


We're working on TinyStacks to solve this - we've built the fastest way to deploy and maintain your Docker app on AWS.

In one click, TinyStacks takes your app code on GitHub and spins up all the necessary infrastructure with a fully automated pipeline - all on your AWS. Just git push.

We just started onboarding a few customers on Fri and would love to onboard a few more. Email me: safeer@tinystacks.com


Git push, so it's a post-receive hook? I do the same with my deployments. I also like to set up Makefiles, because most newcomers to a project can grok a Makefile without having to learn a new build tool.
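The post-receive mechanism is small enough to demo end to end. This self-contained sketch (all paths are throwaway examples under /tmp) sets up a bare repo whose hook checks out every pushed commit into a deploy directory; a real hook would continue with something like `docker compose up -d --build`:

```shell
#!/bin/sh
# Requires git >= 2.28 for `git init -b`.
set -e
DEPLOY=/tmp/demo-deploy
BARE=/tmp/demo-repo.git
rm -rf "$DEPLOY" "$BARE" /tmp/demo-src
mkdir -p "$DEPLOY"

# The "server side": a bare repo with a post-receive hook.
git init -q --bare "$BARE"
cat > "$BARE/hooks/post-receive" <<EOF
#!/bin/sh
# Check the pushed tree out into the deploy directory.
GIT_WORK_TREE=$DEPLOY git checkout -f main
EOF
chmod +x "$BARE/hooks/post-receive"

# The "developer side": commit a file and push it to the bare repo.
git init -q -b main /tmp/demo-src
cd /tmp/demo-src
echo "hello" > app.txt
git add app.txt
git -c user.email=demo@example.com -c user.name=demo commit -q -m "first"
git push -q "$BARE" main

cat "$DEPLOY/app.txt"   # prints "hello" -- the push deployed the file
```

On a real server you'd add the bare repo as a remote (`git remote add prod ssh://user@server/path/repo.git`) and deploy with `git push prod main`.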


I often wonder if this is what the future of CI/CD will look like. Instead of platform-specific YAML files, you could leverage multi-stage Dockerfiles. What would be cool is if certain stages could run in parallel, and maybe you could skip certain stages. You could use the "gitops" style to deploy. And voilà, you have a build system that builds locally and remotely, independent of the CI/CD platform.


This gives me a couple of ideas for piku (https://github.com/piku)...


This is very interesting! Is it a Dokku alternative?

I wrote a similar thing yesterday:

https://gitlab.com/stavros/harbormaster

Mine is basically Balena, though, so it lets you deploy a bunch of Compose repos and doesn't handle ingress. Maybe I could switch to Piku instead.

EDIT: Ah yeah, it looks like Piku is geared towards web servers and handles ingress, mine is more geared towards consumer stuff and doesn't handle ingress itself.


You don’t really need to do web services on it. I have lots of batch/cron-like stuff on my deployments...


You don't need to, but Dokku, at least, has a lot of ingress setup and Procfiles and things that you need to add, and you pay that cost for nothing if you don't use them.

The other thing that makes Dokku unsuitable for non-web things for me is that it's explicitly geared towards the web setup, with ingress/web server/database, and you can't easily add arbitrary services and link them to your container. I remember having to do quite a bit of hackery to deploy one server for the backend and one for the frontend on the same domain.


Oh, I only add stuff that isn’t overly complicated. The ingress with nginx, for instance, is the simplest thing that could possibly work...


I love piku! I’ve never had the chance to use it, but I’ve read a significant part of the code, and its philosophy has influenced my approach to devops!


Thanks! I tried to keep it concise and straightforward...


I made a tool that does exactly this [0], wrapped up in a tiny little server! The main reason I did that was that I found myself creating "compose apps": self-hosted things I spent a lot of time managing without version control or any deterministic process.

For ingress, nginx-proxy (or traefik or caddy) listens for when a container starts on the network (auto attached by pcompose) but you still have all of the flexibility of docker-compose. Build happens on push if you have that defined in compose, so on and so forth.

I was even able to create some simple "convenience" SSH methods where you can trigger docker-compose commands or tail logs / exec into a container directly over SSH. Definitely super crude and not really for production, but it works great for apps that I just want to run and forget about.

[0] https://github.com/antoniomika/pcompose


I found CapRover to be great for simple web apps. The git push and build-on-prod-server flow feels like a remnant from before proper CI/CD, with Docker images built separately, was widely available.


Friggin' love CapRover! It's definitely helped take the anxiety out of deployments for me. OP's solution is cool and novel but still looks like too many moving parts for my taste. I find CapRover to be a great balance between fully rolling your own with scripts and custom commands and the full-blown automation of Kubernetes.


Check out https://GetPorter.Dev - it's basically Docker on Kubernetes.


This is already not the best solution. If you want "Heroku style", then:

1. Install podman

2. Set up a container registry

3. Set up GitHub Actions to build and push the container

4. Set up podman auto-update

5. Push code and wait...

Much easier than hacking with bash and git. Pushing source code to a remote server is bad!



