
At TransLoc we use a multi-master setup. At its heart it's two nodes replicating from each other. Our data is divided into several databases, and each database "belongs" to only one master at a time. For example, if we have nodes A and B, and databases w, x, y, and z, A would be responsible for writes to w and x, while B would be responsible for y and z.
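Roughly, the ownership mapping looks like this (a simplified Python sketch using the example names above, not our actual code):

    OWNERSHIP = {
        "A": {"w", "x"},  # node A takes writes for w and x
        "B": {"y", "z"},  # node B takes writes for y and z
    }

    def write_master(database):
        """Return the node currently responsible for writes to `database`."""
        for node, databases in OWNERSHIP.items():
            if database in databases:
                return node
        raise KeyError("no master owns database %r" % database)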

If B fails, we have a monitoring system in place to tell A that it is now responsible for w, x, y, and z all at once. The monitor sets read-write permissions for databases y and z on A (through user permissions), and then notifies all of our application servers that things have shifted. The applications, for their part, include a piece of common code that monitors for changes in the cluster and lets the application code cope with those changes. For web requests, if failover happens in the middle of a transaction, the request fails. For long-running processes, the process has to return to the top of its event loop and request a new database connection, etc.
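The failover flip itself is something like this hedged sketch, assuming MySQL (which the GRANT-based permission change suggests); hostnames, credentials, the 'app' user, and the notify helper are simplified stand-ins, not our actual code:

    import pymysql  # assuming MySQL; any driver with execute() would do

    def notify_app_servers(new_owner, databases):
        # Stand-in for the real cluster-change notification; the actual
        # mechanism (pub/sub, config push, ...) lives in our common library.
        print("ownership change: %s -> %s" % (databases, new_owner))

    def fail_over(surviving_host, orphaned_dbs):
        conn = pymysql.connect(host=surviving_host, user="admin",
                               password="placeholder")
        try:
            with conn.cursor() as cur:
                for db in orphaned_dbs:
                    # Flip the application user from read-only to read-write
                    # on each database the dead node owned.
                    cur.execute(
                        f"GRANT INSERT, UPDATE, DELETE ON `{db}`.* TO 'app'@'%'")
            conn.commit()
        finally:
            conn.close()
        notify_app_servers(surviving_host, orphaned_dbs)

    # e.g. if node B dies: fail_over("node-a.example.com", ["y", "z"])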

So far it's worked fairly well. We're able to achieve high availability with it, since our master nodes are in two different data centers. There are definitely issues with this approach in general, but it works for our workload.




You're kind of in an active-passive multi-master setup, which is fine for now, but as replication queries between the two nodes increase, you might start to see load issues. At that point you'll have to drop in a third machine and decide how to replicate your data across it.


Yes. The other big problem with this setup is that a single node must be able to handle all queries if the other node fails. Fortunately, we are nowhere near saturating the capacity of our nodes under normal operation (we are only doing about 1,200 queries/second). We are also able to offload read-only queries to slaves replicating off of the masters, which should help with read capacity, our biggest demand.
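The read/write split amounts to routing like this (another Python sketch; the hostnames are hypothetical):

    import random

    MASTERS = {"w": "node-a", "x": "node-a", "y": "node-b", "z": "node-b"}
    READ_REPLICAS = {
        "node-a": ["replica-a1", "replica-a2"],
        "node-b": ["replica-b1", "replica-b2"],
    }

    def pick_host(database, read_only=False):
        """Send reads to a replica when one exists; writes go to the master."""
        master = MASTERS[database]
        if read_only and READ_REPLICAS.get(master):
            return random.choice(READ_REPLICAS[master])
        return master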



