Hacker News

You scale horizontally for high availability, not just for scalability.

And primary-secondary failover in my experience is rarely without issues.

There is a reason almost every new database aims to be distributed from the beginning.




>> There is a reason almost every new database aims to be distributed from the beginning.

That's partly because you can't compete with the existing RDBMSs if you're single node: they are good enough already. Nobody will buy your database if you don't introduce something more novel than PostgreSQL, whether that novelty is worth it or not.


Primary-secondary is simple and robust. If I had a dollar for every time I saw split-brain clusters....

---

And to respond to the sibling comment about "noticeable" downtime....

Primary-secondary failover in under a minute is very feasible. And each minute of downtime is a mere ~0.002% of the month.
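To put a number on that claim, a quick back-of-the-envelope sketch (the 30-day month is an assumption):

```python
# Back-of-the-envelope: availability cost of failover downtime,
# assuming a 30-day month (43,200 minutes).
MINUTES_PER_MONTH = 30 * 24 * 60

def downtime_pct(minutes_down: float) -> float:
    """Fraction of the month spent down, as a percentage."""
    return 100 * minutes_down / MINUTES_PER_MONTH

print(f"{downtime_pct(1):.4f}% down")      # one 1-minute failover -> 0.0023%
print(f"{100 - downtime_pct(1):.4f}% up")  # i.e. ~99.998% availability
```

So even a monthly failover event stays comfortably inside a "four nines" budget, assuming the failover really does complete in a minute.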

Primary-secondary isn't what is hurting your availability.


The experience for at least some of us is that failover is not robust. At all. And that sub-minute figure is a best-case scenario that still requires a person to be monitoring the process.

And the fact that the entire industry has moved to a distributed model despite its complexity gives you a hint as to which way the wind has been blowing for the last decade.


You don't need to be that arrogant. The number-one reason there are no new single-node (No)SQL databases is that the existing databases are great and you can't monetize them.

Failover is automatic for PG when using e.g. Patroni. Of course you lose active transactions, and that might be a showstopper, but monitoring failover? I'm curious when you'd ever have to do that.
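For context, a minimal sketch of what a Patroni node config looks like (illustrative only: the hostnames, ports, and the choice of etcd as the DCS are assumptions, and a real config needs authentication and replication settings on top of this):

```yaml
# Minimal illustrative Patroni config (not production-ready).
scope: my-pg-cluster          # cluster name, shared by all nodes
name: node1                   # unique per node

etcd:                         # the DCS Patroni uses for leader election
  host: 127.0.0.1:2379

restapi:
  listen: 0.0.0.0:8008
  connect_address: node1:8008

postgresql:
  listen: 0.0.0.0:5432
  connect_address: node1:5432
  data_dir: /var/lib/postgresql/data
```

Each node runs its own Patroni agent with a config like this; leader election through the DCS is what makes failover automatic rather than operator-driven.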





