I used to do data-center and virtualization consulting - we always designed our enterprise clients' systems to handle the baseload in-house and burst to the public cloud. Not rocket science. We even automatically live-migrated load off onsite hypervisors and shut them down at night when the system had surplus capacity...
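
For anyone curious what that scheduling logic looks like, here is a minimal sketch of the burst/consolidate loop. The thresholds and every helper in it (cluster_utilization, idle_hypervisors, live_migrate_off, power_down, provision_cloud_burst) are hypothetical stand-ins, not any real hypervisor or cloud API - in practice you would wire in libvirt, vCenter, or your cloud's SDK.

    # Rough sketch of the nightly burst/consolidate decision described above.
    # Every helper below is a hypothetical stand-in so the sketch runs on its
    # own; swap in real calls (libvirt, vCenter, a cloud SDK) for real use.
    import datetime

    BURST_THRESHOLD = 0.85        # rent public-cloud capacity above this utilization
    CONSOLIDATE_THRESHOLD = 0.40  # evacuate and power off hosts below this at night

    # --- stand-in helpers (placeholders, not a real API) ---
    def cluster_utilization() -> float:
        return 0.30  # pretend the onsite cluster is 30% utilized

    def idle_hypervisors() -> list[str]:
        return ["hv-07", "hv-08"]  # hosts whose VMs can be packed elsewhere

    def live_migrate_off(host: str) -> None:
        print(f"live-migrating VMs off {host}")

    def power_down(host: str) -> None:
        print(f"shutting down {host}")

    def provision_cloud_burst(extra_fraction: float) -> None:
        print(f"bursting {extra_fraction:.0%} of capacity to the public cloud")

    def rebalance() -> None:
        util = cluster_utilization()           # 0.0 - 1.0 across onsite hosts
        hour = datetime.datetime.now().hour
        if util > BURST_THRESHOLD:
            # Baseload stays in-house; only the overflow goes to the public cloud.
            provision_cloud_burst(util - BURST_THRESHOLD)
        elif util < CONSOLIDATE_THRESHOLD and (hour >= 22 or hour < 6):
            # Nightly surplus: pack VMs onto fewer hosts and shut the rest down.
            for host in idle_hypervisors():
                live_migrate_off(host)
                power_down(host)

    if __name__ == "__main__":
        rebalance()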

I think the real issue is that people assume all workloads are public cloud workloads. The bigger and less dynamic your workload, the less true that is.




This is a great approach. A company I worked at ran mostly on dedicated hardware, and because moving jobs around and getting new hardware was painful, we focused hard on optimizing our code to run within our existing capacity whenever we ran into problems. After a couple of years of customer growth, I'm sure we would have been running on at least 10x as many machines if we had never done that. We then set up cloud spot instances for when we just needed extra short-term capacity.
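
As a rough illustration of the spot-burst part (not the poster's actual setup), here is one way to grab short-lived overflow capacity with boto3; the AMI ID, instance type, and count are placeholders.

    # Minimal sketch: request N short-lived EC2 spot instances for overflow work.
    # Assumes boto3 plus AWS credentials; the AMI ID and instance type below are
    # placeholders, not anything from the comment above.
    import boto3

    def burst_spot_capacity(count: int,
                            ami: str = "ami-0123456789abcdef0",
                            instance_type: str = "c5.2xlarge") -> list[str]:
        ec2 = boto3.client("ec2")
        resp = ec2.run_instances(
            ImageId=ami,
            InstanceType=instance_type,
            MinCount=count,
            MaxCount=count,
            InstanceMarketOptions={
                "MarketType": "spot",
                "SpotOptions": {
                    "SpotInstanceType": "one-time",
                    # Interrupted workers just die; the job queue retries elsewhere.
                    "InstanceInterruptionBehavior": "terminate",
                },
            },
        )
        return [inst["InstanceId"] for inst in resp["Instances"]]

    # e.g. burst_spot_capacity(10) when the backlog spikes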


> I'm sure we would have been running on at least 10x as many machines

I'm seeing the opposite trend.

Because your own machines are so expensive and annoying to manage, you only get big servers with 500 GB of memory and 40 cores. You don't optimize, and you run stuff more or less randomly on whatever seems available.

Whereas in the cloud, you provision VMs per role, with appropriate sizing. And when someone asks for 5 machines with 16 cores each, you can be like, WTF are you running that needs all that power???


I would believe that, but we were hitting the limits of our huge servers. There's only so much QPS you can throw at a single box. These servers were also "pets" as opposed to "cattle", and a big part of the prep for cloud was treating them as cattle.



