
In one sense yes, but in another sense, even though Google is large, its threshold for something being worth fixing on efficiency grounds is correspondingly high.



Frequently, though, you're dealing with death by a thousand paper cuts. Perhaps that call comes from some common library whose implementation requires reflection.
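
A tiny Python sketch of that kind of paper cut (names made up): a library that dispatches via reflection (getattr) pays a small lookup cost on every call that a direct call wouldn't, and nobody ever profiles it in isolation.

    import timeit

    class Handler:
        def process(self, x):
            return x + 1

    handler = Handler()

    def direct_call():
        return handler.process(41)

    def reflective_call():
        # Look the method up by name on every call, the way a generic
        # plugin or serialization layer might.
        return getattr(handler, "process")(41)

    n = 1_000_000
    print("direct:    ", timeit.timeit(direct_call, number=n))
    print("reflective:", timeit.timeit(reflective_call, number=n))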


If you're measuring in terms of time, electricity, or hardware, then the threshold would tend to be lower for Google, not higher. A 0.1% savings on a million machines is ~1 million times more valuable than a 0.1% savings on a single machine.
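
Back-of-envelope, with assumed numbers (fleet size and per-machine cost are illustrative, not actual Google figures):

    machines = 1_000_000
    cost_per_machine_per_year = 3_000.0  # assumed $/machine/year, hardware + power
    savings_fraction = 0.001             # a 0.1% efficiency win

    # Same 0.1% win, wildly different absolute value:
    print(machines * cost_per_machine_per_year * savings_fraction)  # 3000000.0 -> $3M/year
    print(1 * cost_per_machine_per_year * savings_fraction)         # 3.0      -> $3/year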

OTOH if the measurement is risk of breaking something, then it's not nearly so clear, since larger organizations tend to have more software and hardware, and therefore more opportunities for things to get broken.


You're saying the opposite of what the other guy said, though. They said that at scale, small things are big. I'm saying that at scale, 0.001% is still 0.001%.


It isn't, though. A typical developer will never care about 0.1% of unnecessary CPU usage in their application, but at Google, depending on which component it's in, that can easily be a million-dollar find.


> In one sense yes, but in another sense, even though Google is large, its threshold for something being worth fixing on efficiency grounds is correspondingly high.

Not everything is a linear relationship.




