
You can do quite a bit of processing per page load without issue. Facebook and Google just take it well past that point, into near absurdity, while still being highly profitable.


To be fair, there's a bit of a combinatoric effect of scale * features going on there. I'm sure you could build most of a Facebook equiv. 100x-1000x cheaper if it only served one city instead of the whole planet.


The effects of scale are less combinatoric than you might think. Most people on my Facebook feed are from the same city anyway, even though Facebook is global.


The effects and scale of sales (ads) are very combinatoric, though.


Yeah why do they keep spending billions to build new datacenters when they could just stop being absurd instead?

The contempt on here is crazy sometimes.


The idea of marginal value/marginal cost is that companies will generally continue spending one billion dollars to add size and complexity, as long as they get back a bit more than a billion dollars in revenue.

So it wouldn't necessarily be contradictory if most of their core functionality could be replicated very simply, yet the actual product is immensely complicated. I forget where I first read this point, but probably on HN.
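
As a rough illustration of that logic, here's a toy sketch in Python. The cost and revenue numbers are invented purely for illustration, not anyone's real figures:

    # Toy model of the marginal-value argument: keep buying the next
    # datacenter as long as the (diminishing) marginal revenue it returns
    # exceeds its cost. All numbers here are made up for illustration.

    DC_COST = 1.0  # assumed cost of one more datacenter, in $ billions

    def marginal_revenue(n: int) -> float:
        """Hypothetical diminishing returns: the n-th datacenter earns 3/n $B."""
        return 3.0 / n

    n = 0
    while marginal_revenue(n + 1) > DC_COST:
        n += 1

    print(f"Rational to build {n} datacenters")  # stops once MR <= cost

Under those made-up numbers the firm stops at two datacenters, but the point is the stopping rule, not the count: each additional billion is spent only while it returns a bit more than a billion.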


Or maybe you're just reading too much into "absurd", which can just be a colorful word for "an extremely huge amount".


I don't think that Facebook/Google developers are foolish or incompetent. That would be contempt. Instead, I think that Facebook and Google as conglomerate entities are fundamentally opposed to my right to privacy. That they make decisions to rationally follow their self-interest does not excuse the absurd lengths to which they go to stalk the general population's activities.


> I don't think that Facebook/Google developers are foolish or incompetent.

Nobody in this thread is saying that. Parent to you said:

> they could just stop being absurd instead [of building more DCs]

implying FB could build fewer DCs by scaling down some of their per-page complexity/"absurdity". Basically saying their needs are artificial, or born of requirements that aren't really requirements.

> conglomerate entities are fundamentally opposed to my right to privacy

That's a common view, but it's not on topic for this thread. This thread is mostly about the tech itself and how Wikimedia scales versus how the bigger techs scale. It has an interesting diversion into some of the reasons why their scaling needs are different.

You could instead continue the thread by arguing that they could save a lot of money and complexity, and shed some of their reputation for being slow and privacy-hostile, by removing some of the very features these DCs support, perhaps without hurting the net bottom line.

This keeps the thread going and moves the conversation toward what the ROI actually is on the sort of complexity that benefits the company but not the user.


I was the one saying absurdity, and I think you're missing the context. Work out how much processing power even just another 1 cent per thousand page loads can pay for, and perfectly rational behavior starts to look crazy to the little guys.

Let's suppose the Facebook cluster spends the equivalent of 1 full second of 1 full CPU core per request. That's a lot of processing power, and for most small-scale architectures it would likely add wildly unacceptable latency per page load. Further, since small-scale traffic is very spiky, even low-traffic sites would be expensive to host, making it a ludicrous amount of processing power at that scale.

However, Google has enough traffic to smooth things out; it's splitting that work across multiple computers, much of it happens after the request is served so latency isn't an issue, and it isn't paying retail, so processing power costs little more than hardware and electricity. Estimate the rough order of magnitude they're paying for 1 second of 1 core per request and it's cheap enough to be a rounding error.
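
To put rough numbers on both halves of that, here's a back-of-envelope sketch. The $0.02/core-hour figure is my assumption for at-cost compute (hardware plus electricity), not a real figure from either company:

    # Back-of-envelope for the numbers above. The $0.02/core-hour figure
    # is an assumed at-cost compute price (hardware + electricity), not a
    # real Facebook or Google number.

    CORE_HOUR_COST = 0.02                      # assumed $/core-hour at cost
    core_second_cost = CORE_HOUR_COST / 3600   # ~$0.0000056 per core-second

    extra_revenue_per_load = 0.01 / 1000       # the "+1 cent per thousand page loads"
    compute_cost_per_load = 1.0 * core_second_cost  # 1 full core-second per request

    print(f"cost of 1 core-second per request: ${compute_cost_per_load:.7f}")
    print(f"extra revenue per page load:       ${extra_revenue_per_load:.7f}")
    print(f"revenue/cost ratio:                {extra_revenue_per_load / compute_cost_per_load:.1f}x")

Under those assumptions, even a 1-cent-per-thousand revenue bump covers nearly two full core-seconds of compute per request, which is why a full core-second per request rounds to nothing at their scale.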


Every request at FB is handled in a new container. This isn’t absurd, it’s actually pretty neat :)

Edit: I don’t know what I’m talking about. Happy Monday!


What? Are you calling the context of an HHVM request a container just to confuse people?

Also, there's way more than just the web tier out there.


Wasn’t my intention to confuse, just repeating something I’ve been told by FB folks.

Everyone, please listen to Rachel and never ever me.


Wow, that sounds interesting. Does anyone know if this is true?


I'm not on the team that handles this, but I highly doubt that this is the case.


It's not neat... it's freakish.



