> I don't think that Facebook/Google developers are foolish or incompetent.
Nobody in this thread is saying that. Parent to you said:
> they could just stop being absurd instead [of building more DCs]
implying FB could build fewer DCs by scaling down some of their per-page complexity/"absurdity". Basically saying their needs are artificial, or borne of requirements that aren't really requirements.
> conglomerate entities are fundamentally opposed to my right to privacy
That's a common view, but it's off topic for this thread. This thread is mostly about the tech itself and how WikiMedia scales versus how the bigger techs scale. It has an interesting diversion into some of the reasons why their scaling needs are different.
You could instead continue the thread by arguing that they could save a lot of money and complexity, and shed some of their reputation for being slow and privacy-hostile, by removing some of the very features these DCs support, perhaps without hurting the net bottom line.
That keeps the thread going and lets the conversation move on to what the ROI actually is on the sort of complexity that benefits the company but not the user.
I was the one saying “absurdity”, and I think you’re missing the context. Work out how much processing power even just another 1 cent per thousand page loads is worth, and perfectly rational behavior starts to look crazy to the little guys.
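As a rough illustration of that framing, here is a back-of-envelope sketch; the daily page-load count below is a made-up order-of-magnitude assumption, not a figure from this thread, and only the "+1 cent per thousand page loads" number comes from the comment above.

```python
# Back-of-envelope: what "+1 cent per 1,000 page loads" is worth at hyperscale.
# ASSUMPTION: page_loads_per_day is an illustrative order-of-magnitude guess.
page_loads_per_day = 10_000_000_000        # assumed ~1e10 page loads/day
extra_revenue_per_1000 = 0.01              # +1 cent per thousand page loads (from this comment)

extra_per_day = page_loads_per_day / 1000 * extra_revenue_per_1000
print(f"extra revenue/day:  ${extra_per_day:,.0f}")        # -> $100,000
print(f"extra revenue/year: ${extra_per_day * 365:,.0f}")  # -> $36,500,000
```

On those assumptions, a single extra cent per thousand page loads is worth tens of millions of dollars a year, which is the scale at which “perfectly rational” spending looks crazy to the little guys.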
Let’s suppose the Facebook cluster spends the equivalent of 1 full second of 1 full CPU core per request. That’s a lot of processing power, and for most small-scale architectures it would likely add wildly unacceptable latency per page load. Further, because small-scale traffic is very spiky, even low-traffic sites would be expensive to host, making it a ludicrous amount of processing power.
However, Google has enough traffic to smooth things out, it splits that second across multiple computers (and much of the work happens after the request, so latency isn’t an issue), and it isn’t paying retail, so processing power costs little more than hardware and electricity. Estimate the rough order of magnitude they’re paying for 1 second of 1 core per request and it’s cheap enough to be a rounding error.
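A minimal sketch of that order-of-magnitude estimate; the server price, amortization period, power draw, and electricity rate below are all illustrative assumptions, not figures from the thread.

```python
# Back-of-envelope: cost of 1 CPU-core-second at roughly wholesale prices.
# ASSUMPTION: every input below is an illustrative guess, not a real price sheet.
server_cost = 10_000.0        # assumed price of a 64-core server, USD
cores = 64
amortize_years = 4
server_watts = 250.0          # assumed whole-server draw under load
usd_per_kwh = 0.07            # assumed wholesale-ish electricity rate

core_seconds = cores * amortize_years * 365 * 24 * 3600
hardware = server_cost / core_seconds                 # hardware cost per core-second
joules = server_watts / cores                         # energy used by one core in one second
electricity = joules / 3.6e6 * usd_per_kwh            # joules -> kWh -> USD

cost_per_request = hardware + electricity
value_per_request = 0.01 / 1000                       # the "+1 cent per thousand page loads" figure

print(f"cost of 1 core-second: ${cost_per_request:.7f}")                        # -> ~$0.0000013
print(f"value vs. cost ratio:  ~{value_per_request / cost_per_request:.0f}x")   # -> ~8x
```

On those made-up inputs, burning a full core-second per request costs well under what even a 1-cent-per-thousand improvement brings in, which is why it rounds to nothing on their books.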