
I bet that compared to an equivalent load hosted on AWS, that lovely box pays for itself in full every month, if not every week...



I did some napkin math (could be very, very wrong) but this server costs ~ 225,000 USD according to Dell's webpage.

AWS does not have a 100% similar VM, but you could have something close for ~ 20,000 USD monthly. Not that bad.

However, storage costs alone would be astronomical. Like > 100,000 USD / month.

I have no idea how much outbound traffic Let's Encrypt serves, but that also could be a quite relevant expense.

Of course, I also don't know how much Let's Encrypt pays for energy, cooling, operations, real estate, etc., but:

> I bet that compared to an equivalent load hosted on AWS, that lovely box pays for itself in full every month

I would not take the other side on that bet
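A quick break-even sketch using the parent comment's own (admittedly rough) numbers:

```python
# Break-even sketch on the napkin numbers above (all rough estimates).
server_price = 225_000           # Dell list price from the comment above
aws_compute_monthly = 20_000     # "something close" on AWS, per month
aws_storage_monthly = 100_000    # "> 100,000 USD / month" storage estimate

# Months for the box to pay for itself vs. AWS compute alone:
print(server_price / aws_compute_monthly)   # 11.25

# ...and once the storage estimate is included:
print(server_price / (aws_compute_monthly + aws_storage_monthly))  # 1.875
```

Even ignoring storage entirely, the box pays for itself in under a year; with the storage estimate included, it's under two months.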


$225K for this sounds bonkers:

  24x 6.4TB Intel P4610 NVMe SSD = 24 x $310 = $7440
  2x AMD EPYC 7542 = 2 x $1300 = $2600
  2 TB DDR4 ECC RAM ~ $13700 (estimate from a couple of Google results)
Those add up to something like $25K. Sure, there's also the price of the motherboard, chassis, maybe some other peripherals like external network cards, assembly + support + warranty etc. but that doesn't explain an 800% markup.
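For what it's worth, the component arithmetic above checks out (a sketch; the per-part prices are the parent comment's estimates, not quotes):

```python
# Summing the parent comment's (possibly wrong) street prices.
ssd_cost = 24 * 310    # 24x 6.4TB Intel P4610 NVMe SSDs
cpu_cost = 2 * 1300    # 2x AMD EPYC 7542
ram_cost = 13_700      # ~2 TB DDR4 ECC RAM, rough estimate

component_total = ssd_cost + cpu_cost + ram_cost
print(component_total)                      # 23740, i.e. roughly $25K

print(round(225_000 / component_total, 1))  # ~9.5x list price over parts
```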


We recently purchased a similar server.

One thing to note is that a VAR (as mentioned elsewhere) will knock 75% off the price listed on Dell’s website.

Another thing is that that's way too cheap for those SSDs. Enterprise SAS (not plain SATA) SSDs are a lot more than $310. Our 7.68TB drives are about $2k each, but worth it if they stay problem-free.

Even on Newegg, SAS SSDs of that size are $900-$2,000, so add warranty and service on top of that.


> One thing to note is that a VAR (as mentioned elsewhere) will knock 75% off the price listed on Dell’s website.

Makes sense.

> Another thing is that that's way too cheap for those SSDs. Enterprise SAS (not plain SATA) SSDs are a lot more than $310. Our 7.68TB drives are about $2k each, but worth it if they stay problem-free.

I was able to find these two enterprise-grade NVMe SSDs on Newegg:

  https://www.newegg.com/p/2U3-0005-000J8 ($474, 7.68 TB, 1 DWPD)
  https://www.newegg.com/p/2U3-000S-000Y8 ($360, 6.4 TB, 3 DWPD)
Is there some kind of catch I'm missing?


I am not much of an expert on enterprise hardware, but those use a PCIe interface. I don't know how feasible it is to rack up 24 of those in a single server (you would run out of lanes).

This is something more similar to what is in those Dell servers (and there are 24 of them):

https://www.newegg.com/samsung-pm1643a-7-68tb/p/2U3-0005-000...

https://www.newegg.com/p/0N7-0133-00003

There is certainly a markup with Dell, but it’s sort of like a cloud vendor - pay for the warranty and service, and be (somewhat) hands off if something breaks.


Threadripper and Epyc have been smashing the PCIe lane limit for a while now. That's why Epyc is kicking Intel's ass in server applications.

My personal workstation at the side of my desk has 6 PCIe NVMe SSDs, and I can add 4 more without breaking the lane bank.


> I don’t know how possible it is to rack up 24 of those in a single server

That's one of the points of the article: Epyc CPUs have had 128 lanes for a while, and that's how they upgraded to 24 NVMe drives.
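The lane budget is easy to sanity-check (a sketch; the x4-per-drive allocation is the typical one, not something stated in the article):

```python
# Dual-socket EPYC (Rome) exposes 128 usable PCIe Gen4 lanes.
lanes_available = 128
lanes_per_drive = 4           # NVMe drives typically attach at x4

lanes_for_storage = 24 * lanes_per_drive
print(lanes_for_storage)      # 96 lanes consumed by the drives

print(lanes_available - lanes_for_storage)  # 32 left for NICs etc.
```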


Oh yeah, the article. I guess this thread got sidetracked on the topic of Dell's pricing :). I wonder how common a 24-drive NVMe server is.

I don't know all the ins and outs of SAS vs NVMe. Maybe someone else can chime in. I am at the end of my knowledge now.

I suppose one benefit is the availability of hardware RAID controllers, as hinted in the article. But it does seem interesting that NVMe is cheaper than SAS, while theoretically having higher bandwidth.


Yeah, I feel SAS is obsolete tech, and will be replaced by NVMe everywhere going forward.


> > One thing to note is that a VAR (as mentioned elsewhere) will knock 75% off the price listed on Dell’s website.

> Makes sense.

Does it? A middleman that needs to take a cut to exist somehow reducing the price makes sense to you?


For a critical system, you should really have two, for HA purposes. Do they?


Yes. Let’s Encrypt has two locations, each of which has fully redundant hardware, so that’s a minimum of 4. We actually have a few more.

(I work at Let’s Encrypt)


The hardware may have cost more in 2021 when this article was written.


This did not cost us $225k. About half that. Nobody pays the website price, you pay a lot less via a VAR.

- ED of ISRG / Let's Encrypt


Nobody is paying base price for a box like that. I’d probably bid them against HPE and pay ~90-100k.

If Amazon looks reasonable for something like this, the math is wrong. They’re renting boxes at 60-70% margin.


> These are expensive servers, crossing into six digits, but not $200k.

https://news.ycombinator.com/item?id=25865967


It’s closer to $80k, if you went via a VAR.

I spec’d a current gen server with:

- 2x more cores

- 50% more RAM

- 50% more storage

For only ~$80k, and that's not even with a discount.

It's $41k if I matched the specs exactly (but that's a bit unfair to do, because the Dell server is 2-3 years old).

https://www.siliconmechanics.com/system/rackform-a335.v9


> AWS does not have a 100% similar VM, but you could have something close for ~ 20,000 USD monthly. Not that bad.

Is that the on-demand cost, or the reserved cost? For comparing to buying a server outright, you should be comparing the reserved cost. I’m not sure exactly which instances you’re looking at to get $20k/mo, but I see some instances with 64-128 cores/1-2 TB memory for <10k/month.

For storage, I’m not sure how you’re getting >100k… I plugged in the highest IOPS I could for io2 volumes for 150 TB of storage and got 30k/mo. Also worth considering here that you don’t have to provision all 150 TB up front - you could start with 5 TB and increase in size as you grow, for example.

Still gonna be hella expensive but all of this changes the calculus quite a bit from your estimates.


They're also using ZFS in Raid1+0, so 38.4TB of usable storage. $14,566/month on AWS io2 with max IOPS.
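That figure is roughly reproducible (a sketch; the per-GB and tiered per-IOPS rates below are assumed us-east-1 io2 list prices around the time of this thread, and may be off):

```python
# Rough io2 monthly cost for 38.4 TB usable at max IOPS.
storage_gb = 38_400
storage_cost = storage_gb * 0.125                    # assumed $/GB-month

iops = 256_000                                       # io2 Block Express max
tier1 = min(iops, 32_000) * 0.065                    # first 32K IOPS
tier2 = min(max(iops - 32_000, 0), 32_000) * 0.046   # next 32K IOPS
tier3 = max(iops - 64_000, 0) * 0.032                # IOPS above 64K
iops_cost = tier1 + tier2 + tier3

print(round(storage_cost + iops_cost))               # 14496, same ballpark
```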


I'd also bet money that this has higher disk I/O performance than the rough equivalent on Amazon.


> AWS does not have a 100% similar VM

Does it have a similar instance in principle: you rent a dedicated server with lots of SSDs attached, with no fear that the instance will be stopped at any moment for whatever reason?


Absolutely.

Never understood why people are so infatuated with "cloud" options. Yes, it is convenient, but you are absolutely paying at least an order of magnitude more for the same amount of compute/storage.


Because there's no such thing as a free lunch: running your own datacenter (or managing datacenter managed services) is work, and the clouds are better at providing exactly the compute you need, when you need it, than buying a ton of excess hardware you might or might not use (or can't scale quickly enough if you _do_ use it all).

A datacenter makes sense if your usage profile is steady state and hardly ever changes, or if growth rate is predictable and capacity can be procured in advance. Any other use case is better suited for the cloud, IMO.


There are several middle points between full cloud and a fully owned datacenter: renting a dedicated server, or renting a rack in a colocation datacenter.


You still need to manage the underlying database and security, which requires a lot of knowledge to do well.


Correct, and a managed solution will have its own costs, quality issues (third-party services have bugs and get hacked too, and that's something you can't control), and added complexity. To me it looks like if you have small traffic/storage requirements, then the cloud may look good. But if you have lots of data and need compute, then running a beefy bare-metal server can be very cost-beneficial.


I still think that you need to manage some security things even if you are using AWS.


People forget that Colo & Leasing servers exist.

And it costs like 1/5th of the cost of "cloud", and the latter still gets you "the cloud".


Correct, but, again, free lunches don’t exist.

Now you’re in the business of maintaining/patching/imaging/securing your operating system and all of the libraries/builtins it comes with...along with your app's stack.

That's not nearly as simple as "Hey, here's my code, Lightsail; ship it and present me its URL. Thanks!"


In my experience, in-house groups racking and configuring and maintaining boxes often become a priesthood of negative value bickering over pet machine names.

While cloud overcharges for McD product, I know what I get.


Not to mention that you get it within minutes of asking for it. I've worked at places where the internal bureaucracy managed to take double digit weeks to deliver a new database instance.


It's very depressing hearing this. Anything cloud at my company takes far longer to set up, as we have layers and layers of bureaucracy for any data that leaves our network.


The grass is always greener on the other side of the fence I guess. :)


> I've worked at places where the internal bureaucracy managed to take double digit weeks to deliver a new database instance.

That wouldn't happen if the internal bureaucracy for firing the laggards wasn't so excruciatingly slow.


It's gotten absolutely bonkers with all the demand for GPUs and the cloud companies are just raking it in hand over fist.

On AWS reserved instances with 8x A100s or H100s (if you can even get them) cost more per year than the total upfront retail price of the equivalent pods from Lambda Labs. The on-demand price is even more absurd.


Because cloud infatuation is largely a myth. Every mid-size or large organization does periodic cost analysis and selects cloud based on that analysis, not on convenience. E.g., see this: https://aws.amazon.com/solutions/case-studies/emirates-case-....


> See this

That looks like marketing material without any specifics.


I used to lead a widely used and influential platform eng team at Adobe. Parent comment is right. We had our own datacenters and multiple cloud providers, and growing the public cloud infrastructure had a demonstrably higher return on investment than the private cloud.


If you don't need to host something big, where you can get away with one good VPS, the cloud has the benefit of offering cheap bandwidth.

Once you have the bandwidth at your location and you don't need to be present in multiple locations, it's cheaper to self-host.

Next step would be colocation, but for a start, using cloud offerings is a cheap way to be a part of the internet.


Cloud bandwidth is definitely not cheap. If anything, it's where they rip you off the most.

You can get "baremetal"/dedicated servers from places like Hetzner and OVH that give unmetered gigabit connections for like $50.


You cannot get actual unmetered 1 Gb/s for anywhere close to that. If you start pushing anywhere close to that much bandwidth, you will be throttled / have your account closed. For example, Hetzner caps your bandwidth at 20 TB per month.

Additionally, if you are actually pushing close to that much traffic, you can negotiate guaranteed-commit prices with AWS that are competitive (especially when you consider the quality of the bandwidth: I can only get ~100 Mb/s to my Hetzner server because of how bad their peering is, while I can easily saturate my 1 Gb/s connection to anywhere in AWS).

---

(Having said that, this only applies to egress via CloudFront. Things like charging for cross-AZ bandwidth within the same region are insane, and for many workloads may be surprisingly expensive.)
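To put a number on the egress gap (a sketch; the $0.09/GB AWS internet egress list rate is an assumption here, and CloudFront or committed-use pricing lowers it):

```python
# 20 TB/month of egress: AWS list price vs. a flat-rate dedicated box.
gb = 20 * 1000                      # 20 TB in GB
aws_egress = round(gb * 0.09)       # assumed $0.09/GB list rate
print(aws_egress)                   # 1800 -> ~$1,800/month

dedicated_flat = 50                 # e.g. a Hetzner/OVH-class box
print(aws_egress / dedicated_flat)  # 36.0x the flat monthly price
```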


> For example, Hetzner caps your bandwidth at 20TB per month.

Looks like your information is five years out of date: https://www.hetzner.com/news/traffic-limit


Your random blog post is the outdated one.

Their actual docs: https://docs.hetzner.com/robot/general/traffic/

And reports of throttling once you actually hit certain limits: https://lowendtalk.com/discussion/180504/hetzner-traffic-use...


Your link says exactly what the blog post said: root servers with a 1 Gb/s uplink have unlimited traffic.

Other offerings (e.g. a 20 Gb/s uplink) and small cloud servers have traffic limitations.


Yes, I was pointing out that many of their products have explicit limits.

But more importantly, the second link I posted shows that, despite being "unlimited", it's not uncommon for Hetzner to throttle / close your account if you go over unstated traffic limits.


> Yes, I was pointing out that many of their products have explicit limits.

No, you clearly stated that 1 Gb/s servers have a 20 TB cap, but no such products have that limitation; neither link says anything like that.

It makes total sense to cap 20 Gb/s, because unlimited bandwidth at that rate is clearly too much.

> But more importantly, the second link I posted shows how that despite being "unlimited" - it's not uncommon for hetzner to throttle / close your account if you go over unstated traffic limits.

You can argue that, though they didn't close or throttle the account; they sent a warning, and the cap was 12 times higher than what you previously stated.


What you pay for is the ability to provision hardware for new R&D and projects in minutes rather than days, weeks or months. Companies are willing to spend millions a month in cloud fees to accelerate hundreds of millions in revenue.


I think there are a ton of great use cases for the cloud, but people should think for themselves and decide if their circumstances and workload are really a good fit.

A ton of people forget that a bunch of servers across a few colocations can pay for itself in months, especially if you go for second-hand gear that is dirt cheap.

Again, going (back) to colocating hardware might not be a good fit. But with modern management tools and datacenter services like 'remote hands', I think people should not reject it out of hand.



