
For my API SaaS I run everything non-critical (marketing website, analytics, customer dashboard) on a single server, either as part of a single Django monolith or as an official Docker image with a few command-line arguments. Only the API gets duplicated and load-balanced with Cloudflare.

The hardware is dedicated servers, specced with plenty of headroom for spikes. A 20-node k8s cluster could fit on two beefy dedicated servers for about the same cost. Less infrastructure redundancy, but massively more operational stability through simplicity.

Everything runs in docker-compose projects: one for the API, another for the everything-else monolith. I've worked for a couple of small companies that ran on docker-compose, so I have a good sense of the weaknesses and footguns (breaking your firewall, log rotation, handling secrets, etc.).
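
To make two of those concrete (a rough sketch, not my exact config; the hostname and sizes are placeholders): Docker's published ports bypass ufw because Docker writes its own iptables rules, so anything internal gets bound to loopback, and container logs get capped so the default json-file driver can't fill the disk.

    # Publish internal services on loopback in the compose file,
    # e.g. "127.0.0.1:5432:5432" instead of "5432:5432", then verify
    # from another machine that the port really is unreachable:
    nc -zv my-server.example.com 5432   # should time out, not connect

    # Cap container log growth daemon-wide (sizes are illustrative):
    sudo tee /etc/docker/daemon.json <<'EOF'
    {
      "log-driver": "json-file",
      "log-opts": { "max-size": "50m", "max-file": "3" }
    }
    EOF
    sudo systemctl restart docker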

CI is running `make test` on my dev machine. Deployment is `git pull && docker-compose up --build`. Everything sits behind haproxy or nginx, which is configured to hold and retry requests while the backend is down, so there are no failed requests during the few seconds a deployment takes, just increased latency. I only deploy the API about once a week; that stability reduces headaches.
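
The deploy step is a tiny script, roughly like this (the path, port and health-check URL are placeholders):

    #!/usr/bin/env bash
    set -euo pipefail
    cd /srv/api
    git pull --ff-only
    docker-compose up --build -d
    # haproxy/nginx in front holds and retries requests while the
    # containers restart, so this check is just a final confirmation
    # that the new build came up healthy:
    curl -fsS --retry 10 --retry-connrefused --retry-delay 2 \
      http://127.0.0.1:8000/healthz > /dev/null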

DB backups are done with cron: every hour a pg_dump is encrypted and uploaded to Backblaze. Customer subscription data is mirrored from Stripe anyway, so an out-of-date DB backup isn't the end of the world. The job raises an error if the backup still fails after a few retries.
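
The backup job is a small script on an hourly cron, roughly like this (the bucket name, passphrase file and retry counts are placeholders; it assumes the b2 CLI and gpg are installed):

    #!/usr/bin/env bash
    # Run hourly via cron: 0 * * * * /usr/local/bin/pg_backup.sh
    set -euo pipefail
    STAMP=$(date -u +%Y%m%dT%H%M%SZ)
    DUMP="/tmp/db-${STAMP}.sql.gz.gpg"

    # Dump, compress and symmetrically encrypt in one pipeline.
    pg_dump "$DATABASE_URL" | gzip | \
      gpg --batch --yes --pinentry-mode loopback --symmetric \
          --passphrase-file /root/.backup_passphrase > "$DUMP"

    # Upload to Backblaze B2, retrying a few times before giving up.
    for attempt in 1 2 3; do
      if b2 upload-file my-backup-bucket "$DUMP" "pg/${STAMP}.sql.gz.gpg"; then
        rm -f "$DUMP"
        exit 0
      fi
      sleep $((attempt * 60))
    done

    # Still failing after retries: exit non-zero so it shows up in
    # alerting (cron mail, Sentry, a dead man's switch, whatever).
    echo "backup upload failed: ${STAMP}" >&2
    exit 1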

Sentry on everything for error alerting. All logs go into New Relic.


