
Discovery mechanisms can help with location coupling to some extent, but of course they can't solve it entirely. Protocols and queueing systems (like ZeroMQ) help with time coupling to some extent.
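To illustrate what queueing buys you for time coupling, here's a minimal sketch using only Python's stdlib queue (not ZeroMQ, but the same idea): the producer can finish before the consumer even starts, because the queue buffers messages across the time gap.

```python
# Temporal decoupling via a queue: producer and consumer never
# need to be running at the same time.
import queue
import threading

buf = queue.Queue()

def producer():
    for i in range(5):
        buf.put(f"msg-{i}")  # does not wait for any consumer

received = []

def consumer():
    while len(received) < 5:
        received.append(buf.get())

p = threading.Thread(target=producer)
p.start()
p.join()          # producer has completely finished here

c = threading.Thread(target=consumer)
c.start()         # consumer starts only after the producer is gone
c.join()

print(received)   # all five messages still arrive, in order
```

With a broker or a ZeroMQ socket in place of the in-process queue, the same decoupling works across processes and machines.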

Split brain issues are harder to solve, but of course there are protocols to deal with that too.

Small nitpick:

> And since CPU clock speed won't really be getting any faster, we have to scale out if we're going to scale at all.

What does CPU clock speed have to do with all this? It certainly doesn't affect communication latency and is a rather poor indicator of computing performance.

While clock speeds have stagnated, single-core (single-thread) performance has been steadily increasing. Compilers are just not fully exploiting the additional computing power yet.

Then there's always NUMA (non-uniform memory access: multiple CPUs and memory subsystems networked together at the hardware level) and, at a larger scale, RDMA (remote DMA).




My intention with the CPU speed comment was to illustrate that we can't just throw more work at single cores. Even distributing work across cores comes at a latency and efficiency penalty. And even that isn't enough for some of the web-scale applications that have to be spread across thousands of computers distributed across the world in order to handle the load while keeping latency reasonable.

So my point was mostly that distributed applications are unavoidable because you just can't scale up past a certain point.
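One way to put numbers on those diminishing returns (my own illustration, not from the comment above) is Amdahl's law: if only a fraction p of the work parallelizes, speedup on n workers is 1 / ((1 - p) + p / n), so the serial fraction caps the maximum speedup no matter how many cores you add.

```python
# Amdahl's law: the serial fraction of a workload bounds its speedup.
def amdahl_speedup(p: float, n: int) -> float:
    """Speedup for parallel fraction p of the work on n processors."""
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelizable, adding cores tops out near 20x:
print(round(amdahl_speedup(0.95, 8), 2))      # ~5.93
print(round(amdahl_speedup(0.95, 1000), 2))   # ~19.63
print(round(amdahl_speedup(0.95, 10**6), 2))  # ~20.0
```

That cap is per workload, which is part of why web-scale systems shard the work across many machines rather than relying on one ever-larger box.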



