Hacker News

I know this will get downvoted, but clouds suck, and this is just one more manifestation of why they suck. Unless you have a very spiky workload, save yourself long-term pain and don't go this route (applies if your monthly AWS/GCE/Azure bill is over a few K).



If you're in a position where you can spin up servers and get the job done, that means you're just using the cloud as rented servers, in which case you are absolutely right.

If, however, you're using the cloud as intended, and using all of the services it actually provides, I highly doubt you could run 23 data centers around the world with databases and firewalls and streaming logging and all the other stuff they provide at even a fraction of the cost.


In reality they provide remarkably little, with a lot of strings attached :). Take even a basic service like transport: care to compare the cost?


I'm not seeing why not. Your data center could go down for a myriad of reasons (the ISP goes down, HDs fail, someone trips on a power cable, etc.). If that happens, you're pretty much screwed. You could compensate by having multiple data centers with different infrastructure providers. If you do, you're probably spending more than the few K you referenced in your post.

Yes, it's bad that apparently all of the regions failed. Google will hear about it. People will get in trouble. But a screw-up at this level is rare. If you use the cloud, or even a VPS provider like Linode, you get automatic failover and someone who is contractually obligated to deal with failures.
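The automatic failover mentioned above boils down to something simple that providers run for you at scale. A minimal sketch of that logic, where the endpoint names and the health-check function are purely hypothetical placeholders:

```python
# Sketch of provider-side failover: probe endpoints in preference order
# and route traffic to the first one that answers its health check.

def pick_healthy(endpoints, is_healthy):
    """Return the first endpoint passing a health check, else None."""
    for ep in endpoints:
        try:
            if is_healthy(ep):
                return ep
        except Exception:
            continue  # a probe that errors out counts as "down"
    return None

# Hypothetical usage: primary is down, so traffic moves to the replica.
chosen = pick_healthy(["primary", "replica"], lambda ep: ep == "replica")
```

The point of the thread's argument is that when you rent bare servers, writing and operating this loop (plus the monitoring behind it) becomes your job.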


Or the FBI raids the colo and rips out everything that looks like a computer because another tenant was operating a Silk Road clone.


You are paying a penalty in complexity, latency, and poor tenant isolation when running on "cloud infrastructure," and when things blow up you have no recourse.


Do you have any examples of poor tenant isolation in AWS, GCE, or Azure?

Cloud complexity is also lower because you don't have to worry about power, cooling, upstream connectivity, capacity budgeting, etc. If 99.9-99.95% availability is fine for your application then you probably don't have to worry about your provider either.
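For a sense of what those availability figures mean in practice, here is the arithmetic converting "nines" into allowed downtime per 30-day month:

```python
# Convert an availability target into allowed downtime per month.

def downtime_minutes_per_month(availability, days=30):
    """Minutes of downtime a given availability permits in `days` days."""
    total_minutes = days * 24 * 60  # 43,200 minutes in a 30-day month
    return total_minutes * (1 - availability)

print(downtime_minutes_per_month(0.999))   # ~43.2 minutes/month
print(downtime_minutes_per_month(0.9995))  # ~21.6 minutes/month
```

So the 99.9-99.95% band discussed above tolerates roughly 20-45 minutes of outage a month, which a single well-run provider can usually deliver without multi-datacenter redundancy.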


On AWS, Netflix consumes enough resources that if they spike 40-50%, everyone is screwed. The software required to run a cloud like AWS is orders of magnitude more complex than what the average project would need, and it results in major screw-ups. Both major AWS outages were due to control-plane issues; the second was the result of a massive Netflix migration that triggered throttles for everyone in the affected AZs. Those throttles had been put in place in the first place because of the earlier major outage, which lasted many hours.


> Do you have any examples of poor tenant isolation in AWS, GCE, or Azure?

I hate to feed a troll, but ...

Noisy neighbors are a problem all the way from sharing a server using VMs to top-of-rack switches.

And if you try hard enough, you can always escape your VM and "read somebody else's mail."
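One concrete, measurable symptom of a noisy neighbor on a Linux VM is CPU "steal" time: cycles the hypervisor gave to other tenants while your vCPU was ready to run. It appears as the 8th value on the "cpu" line of /proc/stat. A small sketch, assuming that standard Linux field order:

```python
# Estimate the noisy-neighbor effect from a /proc/stat "cpu" line.
# Field order: user nice system idle iowait irq softirq steal ...

def steal_fraction(cpu_line):
    """Fraction of CPU time stolen by the hypervisor for other tenants."""
    fields = [int(x) for x in cpu_line.split()[1:]]
    steal = fields[7] if len(fields) > 7 else 0
    return steal / sum(fields)

# On a live Linux guest you would feed it the real line:
#   with open("/proc/stat") as f:
#       print(steal_fraction(f.readline()))
```

A persistently high steal fraction is exactly the tenant-isolation penalty being argued about here, and it is visible without any provider cooperation.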



