
Yes, the idea is to scale up as far as it makes economic sense and only then scale out, but thanks to Moore's law we're still scaling up.

Not everyone has Google-like problems that are better solved by a battery of cheap boxes.




Actually, most websites today are better served by a battery of cheap boxes.

You simply put a load balancer (Nginx/ELB/HAProxy, etc.) in front of a fleet of smaller web/application servers that scale dynamically with traffic. That way it is cost-effective, far more reliable, and easier to scale, and you can tolerate DC outages better.
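As a rough illustration (not from the article; the upstream hostnames and port are made up), the nginx side of such a setup can be as small as:

    upstream app_pool {
        least_conn;                  # send each request to the least-busy app server
        server app1.internal:8080;   # hypothetical app servers; an autoscaling group
        server app2.internal:8080;   # would add and remove these dynamically
        server app3.internal:8080;
    }

    server {
        listen 80;
        location / {
            proxy_pass http://app_pool;                     # forward to the pool
            proxy_set_header Host $host;                    # preserve the Host header
            proxy_set_header X-Forwarded-For $remote_addr;  # keep the client IP
        }
    }

A real deployment would also add health checks, TLS, and whatever the autoscaler needs to register servers.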


As with most systems, the data store is the hard thing to scale out. That requires intelligence at the application level, and that might include CAP issues.

So for us, the SQL data store is the real “up” part of the equation. We have a fair amount of headroom there, so if we can keep sharding (and other data strategies) out of the codebase, so much the better.
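To illustrate what that application-level intelligence ends up looking like (a hypothetical sketch, not how we actually do it; the shard names and key are made up), hash-based sharding pushes routing logic into every data access:

    import hashlib

    # Hypothetical shard map: each entry is a separate SQL server.
    SHARDS = [
        "Server=sql-shard-0;Database=app",
        "Server=sql-shard-1;Database=app",
        "Server=sql-shard-2;Database=app",
    ]

    def shard_for(user_id: int) -> str:
        """Pick a shard by hashing the sharding key."""
        digest = hashlib.md5(str(user_id).encode()).hexdigest()
        return SHARDS[int(digest, 16) % len(SHARDS)]

    # Every query now has to be routed through the shard map, and
    # cross-shard joins, transactions, and rebalancing become
    # application concerns.
    conn_str = shard_for(42)

That is the kind of complexity having headroom on a single SQL box lets you postpone.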

(Load-balancing HTTP requests “out” is not a big deal and we are doing that.)


Marco/Stack Overflow here.

Simply put, this is not true at all. Most websites today run happily on a single box, and the choice of architecture is frankly irrelevant given the abysmal amount of traffic they get.

Large sites, and by large I mean the top 100, are typically complex, bespoke systems. Each one is different, so making broad generalizations doesn't really work: they usually have parts that need to be scaled up and parts that need to be scaled out.

Finally, if you had read the article, you would know that we already use a load balancer with multiple web servers. What we are discussing here is whether it's better to have hundreds of cheap/cloud boxes or a few big ones.

In our use case (we want fast response times) the second solution is demonstrably better.



