24x 6.4TB Intel P4610 NVMe SSD = 24 x $310 = $7440
2x AMD EPYC 7542 = 2 x $1300 = $2600
2 TB DDR4 ECC RAM ~ $13700 (estimate from a couple of Google results)
Those add up to something like $25K. Sure, there's also the price of the motherboard, chassis, maybe some other peripherals like external network cards, assembly + support + warranty etc. but that doesn't explain an 800% markup.
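The arithmetic can be checked quickly; the per-unit prices are the estimates from this thread, not vendor quotes:

```python
# Rough parts-cost total using the figures quoted above
# (street-price estimates from this thread, not quotes).
parts = {
    "24x 6.4TB Intel P4610 NVMe SSD": 24 * 310,
    "2x AMD EPYC 7542": 2 * 1300,
    "2 TB DDR4 ECC RAM": 13700,
}
total = sum(parts.values())
print(f"parts total: ${total:,}")  # ~$23,740, i.e. "something like $25K"
```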
One thing to note is that a VAR (as mentioned elsewhere) will knock 75% off the price listed on Dell’s website.
Another thing is that that price is way too cheap for those SSDs. Enterprise SAS (not plain SATA) SSDs cost a lot more than $310. Our 7.68TB drives are about $2k each, but worth it if they stay problem-free.
Even on Newegg, SAS SSDs of that size are $900-2000, so add warranty and service on top of that.
> One thing to note is that a VAR (as mentioned elsewhere) will knock 75% off the price listed on Dell’s website.
Makes sense.
> Another thing is that that price is way too cheap for those SSDs. Enterprise SAS (not plain SATA) SSDs cost a lot more than $310. Our 7.68TB drives are about $2k each, but worth it if they stay problem-free.
I was able to find these two enterprise-grade NVMe SSDs on Newegg:
I am not much of an expert on enterprise hardware, but those use a PCIe interface. I don’t know whether it’s possible to rack up 24 of those in a single server (you would run out of lanes).
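For what it's worth, a back-of-the-envelope lane count suggests 24 drives can fit. This assumes x4 U.2 links per drive and roughly 128 usable PCIe 4.0 lanes on a dual-socket EPYC board (actual lane budgets vary by motherboard, so treat both numbers as assumptions):

```python
# PCIe lane budget sketch for a 24-drive NVMe server.
# Assumptions: each U.2 NVMe drive uses a x4 link; a dual-socket
# EPYC 7002 board typically exposes ~128 usable PCIe 4.0 lanes
# (the rest go to the inter-socket link).
drives = 24
lanes_per_drive = 4
usable_lanes = 128

lanes_for_storage = drives * lanes_per_drive   # 96 lanes for the drives
remaining = usable_lanes - lanes_for_storage   # 32 left for NICs etc.
print(lanes_for_storage, remaining)  # 96 32
```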
This is something more similar to what is in those Dell servers (and there are 24 of them):
There is certainly a markup with Dell, but it’s sort of like a cloud vendor - pay for the warranty and service, and be (somewhat) hands off if something breaks.
Oh yeah, the article. I guess this thread got sidetracked on the topic of Dell's pricing :). I wonder how common a 24-drive NVMe server is.
I don't know all the ins and outs of SAS vs NVMe. Maybe someone else can chime in. I am at the end of my knowledge now.
I suppose one benefit is the availability of hardware RAID controllers, as hinted in the article. But it does seem interesting that NVMe is cheaper than SAS, while theoretically having higher bandwidth.
> AWS does not have a 100% similar VM, but you could have something close for ~ 20,000 USD monthly. Not that bad.
Is that the on-demand cost, or the reserved cost? For comparing to buying a server outright, you should be comparing the reserved cost. I’m not sure exactly which instances you’re looking at to get $20k/mo, but I see some instances with 64-128 cores/1-2 TB memory for <10k/month.
For storage, I’m not sure how you’re getting >100k… I plugged in the highest IOPS I could for io2 volumes for 150 TB of storage and got 30k/mo. Also worth considering here that you don’t have to provision all 150 TB up front - you could start with 5 TB and increase in size as you grow, for example.
Still gonna be hella expensive but all of this changes the calculus quite a bit from your estimates.
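A sketch of where a figure like $30k/mo can come from for io2. The per-GB rate and the tiered per-IOPS rates below are assumed us-east-1 list prices and may be out of date; check the current EBS pricing page:

```python
# Hypothetical io2 monthly cost sketch for 150 TB of storage.
# Assumed us-east-1 list rates (may be stale):
#   storage: $0.125 per GB-month
#   IOPS (tiered): $0.065 first 32k, $0.046 next 32k, $0.032 above 64k
def io2_monthly_cost(gb: int, iops: int) -> float:
    storage = gb * 0.125
    tier1 = min(iops, 32_000) * 0.065
    tier2 = min(max(iops - 32_000, 0), 32_000) * 0.046
    tier3 = max(iops - 64_000, 0) * 0.032
    return storage + tier1 + tier2 + tier3

# 150 TB (decimal, i.e. 150,000 GB) at 256k provisioned IOPS
print(round(io2_monthly_cost(150_000, 256_000)))  # in the ballpark of ~$30k/mo
```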
Does it have a similar instance in principle: one where you rent a dedicated server with lots of SSDs attached, and with no fear that the instance will be stopped at any moment for whatever reason?
Never understood why people are so infatuated with "cloud" options. Yes, it is convenient, but you are paying at least an order of magnitude more for the same amount of compute/storage.
Because there's no such thing as a free lunch: running your own datacenter (or managing a datacenter's managed services) is work, and the clouds are better at providing exactly the compute you need, when you need it, than buying a ton of excess hardware you might or might not use (or can't scale quickly enough if you _do_ use it all).
A datacenter makes sense if your usage profile is steady state and hardly ever changes, or if growth rate is predictable and capacity can be procured in advance. Any other use case is better suited for the cloud, IMO.
Correct, and a managed solution has its own costs too: quality issues (third-party services have bugs and get hacked too, and that's something you can't control) and added complexity.
To me it looks like if you have small traffic/storage requirements, then cloud may look good. But if you have lots of data and need compute then running beefy bare-metal server can be very cost beneficial.
Now you’re in the business of maintaining/patching/imaging/securing your operating system and all of the libraries/builtins it comes with, along with your app's stack.
That's not nearly as simple as "Hey, here's my code, Lightsail; ship it and present me its URL. Thanks!"
In my experience, in-house groups racking and configuring and maintaining boxes often become a priesthood of negative value bickering over pet machine names.
While the cloud overcharges for a McDonald's-grade product, at least I know what I get.
Not to mention that you get it within minutes of asking for it. I've worked at places where the internal bureaucracy managed to take double digit weeks to deliver a new database instance.
It’s very depressing hearing this. Anything cloud at my company takes exponentially longer to set up, as we have layers and layers of bureaucracy for any data that leaves our network.
It's gotten absolutely bonkers with all the demand for GPUs and the cloud companies are just raking it in hand over fist.
On AWS reserved instances with 8x A100s or H100s (if you can even get them) cost more per year than the total upfront retail price of the equivalent pods from Lambda Labs. The on-demand price is even more absurd.
I used to lead a widely used and influential platform eng team at Adobe. Parent comment is right. We had our own datacenters and multiple cloud providers, and growing the public cloud infrastructure had a demonstrably higher return on investment than the private cloud.
You cannot get actual unmetered 1 Gb/s for anywhere close to that. If you start pushing anywhere close to that much bandwidth, you will be throttled or have your account closed. For example, Hetzner caps your bandwidth at 20 TB per month.
Additionally, if you are actually pushing close to that much traffic, you can negotiate guaranteed-commit prices with AWS that are competitive (especially when you consider the quality of the bandwidth: I can only get ~100 Mb/s to my Hetzner server because of how bad their peering is, while I can easily saturate my 1 Gb/s connection to anywhere in AWS).
---
(Having said that, this only applies to egress via CloudFront. Things like charging for cross-AZ bandwidth within the same region are insane, and for many workloads may be surprisingly expensive.)
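For scale, fully saturating an unmetered 1 Gb/s link for a 30-day month moves far more data than a 20 TB cap would allow:

```python
# How much data a saturated 1 Gb/s link moves in a 30-day month.
bits_per_second = 1_000_000_000          # 1 Gb/s
seconds_per_month = 30 * 24 * 3600       # 2,592,000 s
bytes_total = bits_per_second / 8 * seconds_per_month
terabytes = bytes_total / 1e12           # decimal TB
print(f"{terabytes:.0f} TB")  # 324 TB, ~16x a 20 TB monthly cap
```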
Yes, I was pointing out that many of their products have explicit limits.
But more importantly, the second link I posted shows that despite being "unlimited", it's not uncommon for Hetzner to throttle or close your account if you go over unstated traffic limits.
> Yes, I was pointing out that many of their products have explicit limits.
No, you clearly stated that 1 Gb/s servers have a 20 TB cap, but no such products have that limitation, and none of the links say anything like that.
It makes total sense to cap a 20 Gb/s connection, because unlimited bandwidth at that rate would clearly be too much.
> But more importantly, the second link I posted shows that despite being "unlimited", it's not uncommon for Hetzner to throttle or close your account if you go over unstated traffic limits.
You can argue that, though they didn't close or throttle the account: they sent a warning, and the cap was 12 times higher than what you previously stated.
What you pay for is the ability to provision hardware for new R&D and projects in minutes rather than days, weeks or months. Companies are willing to spend millions a month in cloud fees to accelerate hundreds of millions in revenue.
I think there are a ton of great use cases for the cloud, but people should think for themselves and decide if their circumstances and workload are really a good fit.
A ton of people forget that a bunch of servers across a few colocations can pay for themselves in months, especially if you go for second-hand gear that is dirt cheap.
Again, going (back) to colocating hardware might not be a good fit for everyone. But with modern management tools and datacenter services like 'remote hands', I think people should not reject it out of hand.
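As a hypothetical payback sketch using the rough numbers from earlier in this thread (the colo cost below is my own assumption, not a figure from the thread):

```python
# Hypothetical colo payback: server capex vs. a comparable cloud bill.
server_capex = 25_000   # "something like $25K" parts estimate from above
colo_monthly = 1_000    # assumed colocation + bandwidth cost (hypothetical)
cloud_monthly = 20_000  # "something close for ~20,000 USD monthly"

payback_months = server_capex / (cloud_monthly - colo_monthly)
print(f"{payback_months:.1f} months")  # ~1.3 months
```

Even if the colo and support costs are several times higher than assumed here, the payback still lands within a few months, which is the point being made above.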