
They calculated the costs based on 3 years running at the hourly rate.

That's kinda weird. How about including multi-year discounts? These are available to everyone.

[1] https://azure.microsoft.com/en-ca/pricing/reserved-vm-instan...

[2] https://aws.amazon.com/ec2/pricing/reserved-instances/

[3] https://cloud.google.com/compute/docs/instances/signing-up-c...
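
Back-of-the-envelope, the gap those discounts create is big. A minimal sketch (all rates below are made-up placeholders, not actual quotes from any of the pages above):

    # 3-year cost at on-demand vs. 3-year-reserved rates (hypothetical numbers)
    HOURS_3Y = 3 * 365 * 24              # ~26,280 hours

    on_demand_hourly = 0.10              # $/hour, placeholder
    reserved_hourly  = 0.06              # $/hour with a 3-year commitment, placeholder

    print(f"on-demand: ${on_demand_hourly * HOURS_3Y:,.0f}")  # $2,628
    print(f"reserved:  ${reserved_hourly * HOURS_3Y:,.0f}")   # $1,577, i.e. 40% less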




Elastic rates are really what you should be comparing when using cloud IaaS services, though. That's where the price works out in favor of using cloud IaaS hosts in the first place, after all.

If you have a stable set of instances and a known lifetime for them, then, before trying to calculate whether AWS or GCP is cheaper, step back and plug those same numbers into a regular non-cloud DC managed-hardware-leasing pricing page.


> a regular non-cloud DC managed-hardware-leasing pricing

You have to factor in the total cost of ownership (TCO) to make a fair comparison, and in almost ALL cases you are more than likely to overspend on bare-metal boxes in your own DC. Some of the TCO components are (a toy sketch follows the list):

- DC staff salaries

- Electricity

- Networking bandwidth

- SLA guarantees (yes, this is a hidden cost: e.g. if your DC power goes out, you owe your customers fees, depending on your SLA).

- etc.
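
A minimal sketch of that kind of TCO math, with every figure a made-up placeholder (real numbers vary wildly by region and scale):

    # Toy amortized monthly TCO for a self-hosted bare-metal box.
    # All figures are hypothetical placeholders.
    server_capex  = 6000.0    # purchase price, amortized over 36 months
    months        = 36
    colo_rent     = 150.0     # $/month rack space
    power_cooling = 80.0      # $/month electricity
    bandwidth     = 100.0     # $/month network
    staff_share   = 300.0     # $/month slice of DC staff salaries
    sla_reserve   = 50.0      # $/month set aside for SLA payouts

    tco = (server_capex / months + colo_rent + power_cooling
           + bandwidth + staff_share + sla_reserve)
    print(f"self-hosted TCO: ${tco:,.2f}/month")   # ~$846.67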


Note, I didn't say "your own DC", I said managed hosting. As in, leasing a physical 2U server from a DC provider (just not a cloud DC provider), that you "temporarily own" (sort of like you "temporarily own" a condo you're leasing), but where the DC staff still has BMC access to the box, and will handle hardware going bad, etc., so you never have to drive out to the DC.

You know, the primary offering of DC providers like Softlayer, Hetzner, etc.

With a managed service, "utilities" (salaries, electricity) are factored into the lease. And bandwidth, as it turns out, is cheap enough that many DCs will give it to you unmetered, since you can't push enough through the limited links they give you to dent their uplink.


Bit of an aside, but a lot of people in AWS or Azure can't run their workloads in Hetzner, OVH or what-have-you for compliance/paperwork-related reasons.

Now SoftLayer I'm not so sure about - interested to hear from anyone offering services to say, gov or health from managed hosting and how that compares cost and experience-wise to AWS, Azure, GCP.


I've never had to deal with this, but there are tens of thousands of managed providers out there, so I figured some of them must have this type of compliance.

The first two that I looked at, Hivelocity and ReliableSite, both seem to have a number of certifications, as does our current provider, LeaseWeb.

Is there a specific certification that really sets AWS/Azure/GCP apart?


> - SLA guarantee (yes this a hidden cost, e.g. if your DC power is out, you owe your customers fees depending on your SLA).

But Amazon and GCP can go down too, and their SLA does not necessarily fully insure you against your own SLA obligations.


Yes, everyone could and will go down at one point or another. Are you saying that your in-house team can manage infrastructure as well as or better than these big public cloud vendors? They have thousands of site reliability engineers, don't they?

The crux of this is how you compare at managing site reliability. Perhaps you have a world-class team that can do it better; if so, my point is moot. But 99% of the time, that's not the case.


Amazon has done a fantastic job of making people think the choice is between them or managing your own infrastructure. Your concerns make no sense in the context that your parent presented: a managed DC.

And, frankly, your parent was being generous. Even if you only look at elastic workloads, the workload has to be a) extremely elastic and b) not fall into some pretty common patterns for cloud to make any kind of sense.


To match the reliability of something like an AWS managed service, you usually need two or three managed DCs.


A number of managed providers have multiple DCs within a region as well as DCs in multiple geographic locations.

Also, many have been in operation since before AWS was a thing, and some are larger. So I can't imagine what AWS knows about running a datacenter that others don't.

Now maybe, in theory, if you can build something to be fully one with the cloud, consider all the edge cases, and limit yourself to only cloud-zen tools (or build your own, or accept vendor lock-in), then with enough money, I guess the cloud lets you achieve higher reliability.

The fundamentals of EC2 (no dual NICs, no dual power supplies, no BBU RAID, plus virtualization and general complexity) mean that a single instance is far less reliable (let alone much worse value) than a single dedicated box. The complexity you need to throw on top of that building block (in the shape of lock-in, compromise, money, latency, application complexity or a combination of these) is pretty significant.


It's adorable that you think batteries and dual PSUs are something that makes a node more reliable, rather than less reliable.


What's the point of dual PSU if not reliability?


I'm not sure what the point is, actually. The variety of things that can go wrong with them is astonishing. For one thing, among many others, BMC cards will power-cap the max clock speed of CPUs when a machine is running on only one PSU, which can cause a degradation that's worse than if the machine had just halted. There are a zillion other edge cases like that.


A managed service (like S3) is different from managed infrastructure. EC2 VMs run in a single zone too, and more redundancy requires extra cost and complexity.

Building out your application across multiple regions in AWS is not much different than using multiple DCs from a managed host. The clouds provide live migration, spot instances, and fast global VPC networks that can make it much easier, but you also pay the premium for it.


Well, kind of. I think it is pretty common to have a base amount of compute you need, plus the flexibility to scale up elastically when required. So real workloads are generally billed at a mix of elastic and reserved rates. But how much of each really depends on the workload, so CockroachDB did the easy thing and just compared the on-demand prices.
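
As a toy illustration of that mix (rates and fleet sizes are hypothetical placeholders):

    # Blended monthly bill: reserved baseline plus elastic burst (hypothetical)
    HOURS_MONTH = 730

    base_instances       = 10      # always-on, covered by reservations
    reserved_hourly      = 0.06    # $/hour, placeholder
    burst_instance_hours = 2000    # extra on-demand hours this month
    on_demand_hourly     = 0.10    # $/hour, placeholder

    bill = (base_instances * HOURS_MONTH * reserved_hourly
            + burst_instance_hours * on_demand_hourly)
    print(f"blended bill: ${bill:,.2f}")   # $638.00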

Also, there are a lot of things that the big cloud providers manage for you. I haven't dealt with DC managed-hardware, but I imagine that you probably have to do a lot more to set up networking, provision VMs on raw hardware, etc.


> If you have a stable set of instances and a known lifetime for them

I do, but I don't even know how to assemble my own computer, much less deal with bare metal for servers. I want to stay as far away from hardware as possible.


You can just rent servers. My problem is that I don't know the CPU time I need to run my app. If you say an app needs 100 or 100,000,000 CPU hours a month, I wouldn't really be able to verify that.

I don't know how cloud providers even measure the CPU time. Probably from VMs. What about services like logging, health checks and load balancers? Is there a line item for those too?

At the end of the month I get an invoice from my providers that says I used X processing power. I have to believe it and just accept the price if it seems worth it.

I am sure there is elaborate performance monitoring software out there, but I doubt many developers really verify the bills they get.

Providers could just randomly add a few dollars on my bills and I heavily doubt that I would notice. Not wanting to give them any ideas here...
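
That said, a coarse cross-check is doable without elaborate tooling: record instance-hours yourself and multiply by the published rate (numbers below are hypothetical):

    # Sanity-check an invoice line against your own uptime records.
    # `my_logged_hours` would come from your own monitoring; all numbers hypothetical.
    my_logged_hours = 1460       # instance-hours you recorded this month
    published_rate  = 0.10       # $/hour from the provider's price list
    invoiced        = 151.00     # what the bill says

    expected = my_logged_hours * published_rate
    print(f"expected ${expected:.2f}, billed ${invoiced:.2f}, "
          f"drift ${invoiced - expected:+.2f}")   # drift $+5.00 -- worth asking about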

So, in the end, a rented server gives you much more control over cost unknowns. That doesn't mean it has to be cheaper or is as easy to maintain.


What's so difficult about clicking buttons on a web form?


What's an elastic rate?


The price without sustained use or commitment discounts. The raw cost of a compute unit for a random day/hour/minute.


Pay-as-you-go rate - either by the minutes or by the hour.


> That's kinda weird.

Not really.

On-demand capacity is (a) what the cloud is known for and (b) a reasonable common denominator.

Reservations get a lot more complicated with policies around usage accounting and transferring and selling them.

Obviously there is a lot more available than hourly VPS (reserved, interruptible), but that's a decent benchmark to start with.


Yeah, and without comparing spot/interruptible prices, these results are meaningless for me.


You wouldn't typically run your database VMs on spot/interruptible instances.


You wouldn’t typically run your databases in a cloud environment on your own VMs - you would use a managed service.


That depends on several factors. The database you might want to run may not be available for instance.


I'm still waiting on that managed PostgreSQL 12 service....


Almost....

PostgreSQL 12.0 Now Available in Amazon RDS Database Preview Environment

https://aws.amazon.com/about-aws/whats-new/2019/11/postgresq...


I would never use a managed database service again.


Managed databases are quite good. RDS is very mature, does a very good job with failover and backups, and is very easy to set up.

I don't want to manage anything that needs to be clustered. Technologies like that often need trained engineers with good knowledge of how the product behaves if we want to manage it ourselves.


Alas, it's the only way to get decent efficiency on certain cloud platforms. However, you often can't get custom plugins and modules. On Amazon, bare-metal ephemeral-disk instances are decent high-performance alternatives, but you can't beat Aurora for most pg use cases.


This sounds like the first line in a novel.


Unfortunately, "Nor would I use cloud-based document storage" is the next, and last, sentence.


Well, plenty of people do...


Why?


But you would run expensive machine learning jobs on them, and these (GPU instances) tend to offer more significant absolute savings when they're preemptible.
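
Rough arithmetic for why the GPU case is compelling (hypothetical rates; deep spot/preemptible discounts on GPU instances are common):

    # Same discount structure, very different absolute savings (hypothetical rates)
    job_hours = 100
    gpu_on_demand, gpu_spot = 3.00, 0.90    # $/hour, placeholders
    cpu_on_demand, cpu_spot = 0.10, 0.03    # $/hour, placeholders

    print(f"GPU savings: ${job_hours * (gpu_on_demand - gpu_spot):.2f}")  # $210.00
    print(f"CPU savings: ${job_hours * (cpu_on_demand - cpu_spot):.2f}")  # $7.00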


why not? Just kidding, but it's getting more common :)


In distributed databases with appropriate grace under churn... sorry, I wasn't clear in my earlier post.


Yeah but my database costs are a tiny sliver of my overall expenses and the instances will be located in whatever cloud provider hosts the rest of my infrastructure.

How many people are running a cloud account with ONLY a cockroachdb in it?

Your point seems pretty obtuse tbh.



