
I frequently see folks fail to understand that when the unicorn rocketship spends a month and ten of its hundreds of engineers replacing the sharded MySQL that has been catching fire daily under overwhelming load, that is actually pretty close to the correct time for that work. Sure, it may have been stressful, and customers may have been impacted, but it's a good problem to have. Conversely, not having that problem may not mean anything at all, but there's a good chance it means you were solving those scaling problems prematurely.

It's a balancing act, but putting out fires before they've even started is often the wrong approach. A little fire is often good for growth.




You really have to be doing huge levels of throughput before you start to struggle with scaling MySQL or Postgres, and there aren’t many workloads that actually require strict ACID guarantees _and_ produce that level of throughput. 10-20 years ago I was running hundreds to thousands of transactions per second on beefy Oracle and Postgres instances; the workloads had to be especially big before we’d even consider fancy scaling strategies necessary, and there was never some magic tipping point where we suddenly decided an instance had to go distributed.
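
For a rough sense of what a single box can do, here's a minimal sketch (psycopg2 as the driver and a throwaway table named bench_kv are both my own placeholders, not from the comment above) that times simple committed writes against one Postgres instance:

    import time
    import psycopg2  # assumed driver; any DB-API driver works the same way

    # Placeholder DSN -- point it at a scratch database you control.
    conn = psycopg2.connect("dbname=scratch user=postgres")
    conn.autocommit = False

    with conn.cursor() as cur:
        cur.execute("CREATE TABLE IF NOT EXISTS bench_kv (k bigint PRIMARY KEY, v text)")
        conn.commit()

    N = 10_000
    start = time.perf_counter()
    with conn.cursor() as cur:
        for i in range(N):
            cur.execute(
                "INSERT INTO bench_kv (k, v) VALUES (%s, %s) "
                "ON CONFLICT (k) DO UPDATE SET v = EXCLUDED.v",
                (i, "payload"),
            )
            conn.commit()  # one commit per transaction, like a real OLTP write
    elapsed = time.perf_counter() - start
    print(f"{N / elapsed:.0f} single-row transactions/sec")
    conn.close()

Even this naive single-connection loop will usually land somewhere in the hundreds-to-thousands-per-second range on ordinary hardware, and concurrent connections (or a proper tool like pgbench) push the numbers far higher still.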

Most of the distributed architectures I’ve seen have been driven by engineers’ needs (to do something popular or interesting) rather than an actual product need, and most of them have had issues stemming from poor attempts to replicate ACID functionality. If you’re really at the scale where you’d benefit from a distributed architecture, chances are eventual consistency will do just fine.
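
To make "eventual consistency will do just fine" concrete, here's a minimal sketch (names and structure entirely my own, not taken from any particular system) of a last-write-wins register, the kind of merge rule many distributed stores use instead of cross-node transactions:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Versioned:
        """A value tagged with a logical timestamp (e.g. a Lamport clock)."""
        value: str
        timestamp: int
        node_id: str  # tie-breaker so concurrent writes resolve deterministically

    def merge(a: Versioned, b: Versioned) -> Versioned:
        """Last-write-wins: every replica applying this rule converges to the same value."""
        return max(a, b, key=lambda v: (v.timestamp, v.node_id))

    # Two replicas accept conflicting writes while partitioned...
    replica_1 = Versioned("shipped", timestamp=42, node_id="a")
    replica_2 = Versioned("cancelled", timestamp=41, node_id="b")

    # ...and both converge once they exchange state, no coordinator required.
    assert merge(replica_1, replica_2) == merge(replica_2, replica_1) == replica_1

The trade-off is that the "losing" write silently disappears, which is exactly the sort of thing a bolted-on attempt at ACID tends to paper over badly.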



