
Typically, you want to give the container a mount point that lives outside the container. That way, if the container is replaced, your data isn't impacted.
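A minimal sketch of that setup as a compose file — the service and volume names here are illustrative, not from the thread:

```yaml
# Hypothetical docker-compose.yml: a named volume keeps the data
# outside the container's writable layer, so replacing the container
# (new image, redeploy) leaves the data directory untouched.
services:
  db:
    image: postgres:16
    volumes:
      - pgdata:/var/lib/postgresql/data  # data lives in the volume, not the container
volumes:
  pgdata:
```

The same idea works with a plain bind mount (`-v /host/path:/var/lib/postgresql/data`); the point is only that the data's lifetime is decoupled from the container's.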



So redeploying means switching the database to a different container, and means an interruption (thinking of traditional relational DBs here)?


The traditional high-availability method is to run the database servers in pairs, and redeploy using the failover-failback method. You have DB servers A and B, with A as the primary and B mirroring A.

1. Promote B to primary and switch the clients over so that they write to B.

2. Redeploy A, and wait for A's replication to catch up to B.

3. Promote A back to primary and switch the client writes back to A.

4. Redeploy B and wait for B's replication to catch up to A.

5. Have a drink.
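The steps above can be sketched as a toy state model — no real database involved, and the names are illustrative — just to make the promote/redeploy/catch-up ordering explicit:

```python
# Toy model of failover-failback across a redeploy.
# No real DB here; Server, promote, redeploy, await_catchup are all
# illustrative stand-ins for the real operational steps.

class Server:
    def __init__(self, name):
        self.name = name
        self.primary = False
        self.caught_up = True  # replication-lag state

def promote(new_primary, old_primary):
    """Switch client writes from old_primary to new_primary."""
    assert new_primary.caught_up, "never promote a lagging replica"
    old_primary.primary = False
    new_primary.primary = True

def redeploy(server):
    """Replace the server; it comes back as a lagging replica."""
    server.primary = False
    server.caught_up = False

def await_catchup(server):
    """Block until replication catches up (instant in this toy model)."""
    server.caught_up = True

a, b = Server("A"), Server("B")
a.primary = True          # starting state: A primary, B mirroring A

promote(b, a)             # 1. B becomes primary; clients write to B
redeploy(a)               # 2. redeploy A...
await_catchup(a)          #    ...and wait for A to catch up to B
promote(a, b)             # 3. A becomes primary again
redeploy(b)               # 4. redeploy B...
await_catchup(b)          #    ...and wait for B to catch up to A
# 5. have a drink: both servers redeployed, writes never stopped
```

Note the invariant enforced in `promote`: you only switch writes to a replica that has caught up, which is why each redeploy is followed by a wait.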

Responsible ops practice is to follow this procedure on every deploy. The failover process has presumably been designed, engineered, rehearsed, and tested in production – it has to be, because it might be triggered at any moment during an emergency. The redeployment you're about to do, by contrast, has never been tried in production before, and you can never be certain it isn't going to take down your database server processes for a millisecond or an hour.

Docker doesn't really help or harm this process, though it does subtly encourage it, because the adoption of Docker and the adoption of an immutable-build philosophy often go hand in hand.

If you don't have firm confidence in your database failover procedure, you don't want to host your database in a Docker container.




