1800 Watts for a full rack? That’s bordering on useless. You can run maybe two real servers on 1800 W.
You need to provision enough power for simultaneous startup after a power outage, unless you have some really smart PDUs and automation. We have a 10-year-old DC with 30A@208V per rack and we have to leave racks half full because modern servers are so power-dense.
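Rough numbers for that feed, assuming the usual 80% continuous-load derate on a branch circuit (the 900 W/server figure is from the comment below):

```python
# Back-of-envelope rack power for a 30 A @ 208 V circuit.
# The 80% derate is the standard continuous-load assumption.
amps, volts = 30, 208
breaker_watts = amps * volts              # 6240 W nameplate
usable_watts = breaker_watts * 0.8        # 4992 W continuous
servers_at_900w = int(usable_watts // 900)  # ~5 dense nodes per rack
print(breaker_watts, usable_watts, servers_at_900w)
```

So even a full 30 A circuit only carries five or six dense 2U nodes, which is why the racks end up half empty.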
Gatekeeping server power!? We run 1U servers on 250-350 W (230 V).
So yes, odd power allocation, but servers are "real" well before 900 W. We're running dual Xeon Gold low-core-count machines (because of Microsoft licensing) and they're pretty decent.
Absolutely, I'm with you. I just meant that 900 W isn't where "real servers" start.
If you're at the point where you're hitting limits with scalable power-on, you could disable that feature and script ipmitool to power the servers on in a queue once each BMC is reachable. Could help you eke out a bit more density.
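Something like this, as a sketch: poll each BMC with `ipmitool chassis power status` until it answers, then power nodes on one at a time. Host list, user, and delay are placeholders for your environment; `DRY_RUN` just records the commands instead of running them.

```python
# Hypothetical staggered power-on via ipmitool (hosts/user/delay are made up).
import subprocess, time

HOSTS = ["bmc01", "bmc02", "bmc03"]  # your BMC addresses
DELAY = 30                           # seconds between power-on commands
DRY_RUN = True                       # flip to False to actually issue commands
issued = []                          # record of commands (useful for dry runs)

def ipmi(host, *args):
    # -E reads the password from the IPMI_PASSWORD environment variable
    cmd = ["ipmitool", "-I", "lanplus", "-H", host, "-U", "admin", "-E", *args]
    if DRY_RUN:
        issued.append(" ".join(cmd))
        return 0
    return subprocess.call(cmd)

for host in HOSTS:
    # wait until the BMC responds before sending the power command
    while ipmi(host, "chassis", "power", "status") != 0:
        time.sleep(5)
    ipmi(host, "chassis", "power", "on")
    if not DRY_RUN:
        time.sleep(DELAY)
```

Run it from a box that comes up with the network gear and you get a poor man's sequenced startup without smart PDUs.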
You could possibly run this fictional daemon on your management switch :)
My 900W per server number comes from experience with “hyper-converged” infrastructure where each 2U node has two filled CPU sockets, gobs of RDIMMs, and is stuffed with flash-based storage.
I think this is the most common “enterprise” datacenter server type in 2021, mostly due to licensing constraints from VMware/Microsoft/Red Hat/Oracle/etc. Such servers give the most “bang” per dollar when licensing costs are included.
We're running VMware with vSAN on 1U nodes with 2x 8-core Xeon Gold CPUs @ 3.6 GHz base, something about the Microsoft SPLA licensing as we're mainly virtualizing Windows machines. 384 GB RAM per node, and we're just pulling about 400 W.