> Components like RAMs, USB, and M.2 SSDs don't use 12v <snip> but 3.3V or 5V. The motherboard will be responsible for managing voltages for these components.
Surely this will simply move part of the complexity from the PSU to the motherboard? Resulting in more efficient and simple PSUs, but more inefficient and complex motherboards? I've only ever had motherboards fail because of leaky capacitors or busted voltage regulators. The only PSU failure I've ever had was on account of a broken fan. Won't this lead to more motherboard failures?
The cynic in me thinks that Intel would be fine with that, because that way they can sell more motherboard components.
I'm not too worried about the complexity. What I *am* worried about is that your number of SATA power ports on your motherboard will become the limiting factor for how many SATA drives you can connect. The number of SATA *data* ports on the motherboard was not as much of a limiting factor because you can add data ports through M.2 / PCIe adapters.
For example, the "MSI Pro H610M 12VO" hyperlinked in the article has a manual that shows it as having two SATA power ports, and an image shows one such port connecting to a single drive. Now I imagine this is a simplified illustration and maybe each port will have enough power to supply 3-to-4 HDDs (depending on current rating of the HDDs of course), but that's still a max of 8 HDDs. Meanwhile the non-12VO PSU I use has four ports available that can each connect to 3-to-4 HDDs each, so a max of 16 HDDs, and it definitely has enough spare power to support them because I sized the PSU for that need.
In the post-PATA days the available data connectors all map to disks 1:1, so you know how many power connectors would be needed for them. Expansion cards could probably supply power as well as data? I suppose other uses for those connectors would need to provide shims, much like the Molex shims they use now.
A 12VO PSU can still choose to provide SATA power ports, and I imagine that at some point there will be PCI-E to SATA power adapters for PSUs that support 2x GPUS and users who want 1x GPU and a lot of SATA.
I suspect that you will have to get add in cards or drive caddies that also do power conversion, but these already exist (although expensive).
More than 2-4 drives is starting to hit a point where you should start thinking about a different platform anyway.
So two PSUs / power bricks? I'll just stick with non-12VO PSUs and boards as long as I can, and if enough people think like me then mobo manufacturers will continue stalling on 12VO.
Does the power brick come exclusively with the motherboard connector? I'm pretty sure you will be able to get one with extra connectors for power-hungry peripherals.
Or an adapter board with an extra pair of switching power supplies. I doubt you'll even need an extra board, though.
(Not that I expect anybody to actually adopt the standard. People will continue to drag their feet, mostly because there aren't enough gains to switch.)
That timestamp is where they address this issue. It seems the argument is that in sleep mode (low power modes), the system will draw overall less power with ATX12VO vs today's standards.
All these more and more complex sleep modes on desktop seem like they're solving problems people don't have. I thought we solved suspend-to-RAM and suspend-to-disc years ago; wasn't the point of the +5VSB rail to keep stuff like RAM refresh going? Maybe on laptops where people are regularly suspending them, it's a bigger deal, but those are using bespoke power solutions and so ATX12VO solves nothing.
If it's so desperately vital that they have some "it's low-power, but still doing something like checking for notifications" mode, couldn't this be done with a mini-coprocessor with the specs of a RasPi 1 running on +5VSB?
Half the mainboard is already just a PSU; this just means the big initial step-down from 240V/120V happens in a separate shielded/grounded box.
I somewhat wish they'd gone to 48VDC instead of 12VDC to further reduce cabling, but I guess that would be a much bigger change; 12VO seems much simpler.
this is a computer not a car, 48v seems overkill and dangerous.
Components use 12V, 5V, or 3.3V in PCs currently. Why do a step-down from 48V to 12V instead of just running 12V directly? 12V is also what's used by the most power-hungry component of a PC (the GPU).
In fact the exact opposite is true, it is safer in practice. In a true fault condition into flesh the currents involved aren’t substantially less safe (4x microamps is still microamps). With protection circuitry and current limiting, the permitted currents are that much lower, which is easier to deal with.
48V seems vastly safer. For equivalent power you carry 1/4 the current, which means roughly 1/16th the resistive heating in the same conductors. The new high-density 12VHPWR connector for GPUs has been melting down and had to get basically redesigned. That would not have been a problem at all at 48V.
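Quick sanity check on that, assuming the same (made-up) 10 mΩ of cable-plus-connector resistance at both voltages:

```python
# Resistive heating in the same harness at 12 V vs 48 V, for the same delivered power.
# The 10 milliohm path resistance is an illustrative assumption.

def loss_watts(power_w: float, volts: float, resistance_ohms: float = 0.010) -> float:
    current = power_w / volts               # I = P / V
    return current ** 2 * resistance_ohms   # P_loss = I^2 * R

gpu_power = 600  # watts, roughly a 12VHPWR-class load
print(loss_watts(gpu_power, 12))   # ~25 W dissipated in the cable/connector
print(loss_watts(gpu_power, 48))   # ~1.6 W -- 1/16th, since current is 1/4
```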
There shouldn't be an intermediary step down to 12V. Google's been doing direct step down from 48V->~1V with GaN fets for half a decade, which is vastly more power efficient than the two step 277V->12->1V that many server systems go through.
It'd also be amazing to have USB-C extended power (48V) be something we could just add essentially for free. (Alas, this is complicated by needing to step down to lower voltages too, which necessitates some semiconductors in the delivery path to USB-C that would have some voltage drop. This strongly implies to me something more like 52V or 54V would be ideal instead of 48V.)
For anything that typically runs under 100W, I agree that 48V is ridiculous. But I would love love love to see some motherboards for ThreadRipper that have GaN fet for direct conversion from 48V, or stuff like that. It'd be pretty niche. But it should be an option!
One of the big potential winners could be people with solar; being able to run your system directly off your solar bank would save double digits percent of power, versus inverting then converting back.
You're right that in the immediate term we'd see a lot of intermediary conversions. I'd love to have a better game plan, for not just motherboards but other components to expect either 48v or wider voltage inputs. Non-trivial to support 10-60V but if that was a market expectation & not a niche need we'd see very affordable solutions spring up practically overnight. There's already a range of usb-c focused power converters that are exceedingly cheap that could step in for this need!
Edit: seems like Google is using two-stage conversion primarily, but not necessarily 4:1 48V:12V. Their fixed-ratio converter seems to have a 6:1 mode (8V) target as well, or be configurable for even bigger ratios (they said up to 12:1 in their talk). As of 2019, a bit of a change of tune from the original 2016 ambitions. Good talk: https://youtu.be/aBkz2JR4UVs
Power regulators that can efficiently produce the 1-1.5v output with the most minimal noise generally have lower maximums on their input voltage as well. In designing some of my own IoT devices I've looked into a fair number of power regulator ICs and 36V is a common maximum input voltage. The efficiency curves on the datasheets also usually show the efficiency gets lower the higher the input DC voltage. In a power regulator, inefficiency means even more heat you have to dissipate, not great in a GPU. I'm sure the GPU would love to take even lower voltage if it could, but 12V is the compromise we've decided on to get decent efficiency without requiring insanely thick conductors.
48V is simply too much for most electronics. Finding parts to handle 12V or even 20V is pretty trivial, but at 48V you're going to have serious trouble sourcing it at a reasonable price. Maybe this will change as 240W USB-C becomes common, though.
An additional issue is that the sweet spot for voltage conversion is around the 1:10 ratio. Going 120V -> 12V -> 1.2V makes a lot of sense, 120V -> 48V -> 1.2V less so.
Nah, you just have to size things correctly. Passives are fine.
You use a 100K resistor if you really need to keep things at 0402 for 100V. (P=V^2/R = 0.1 watt)
A 0.1uF 0402 ceramic cap rated at 100V is half a cent.
The biggest problem is your power transistors and their max Vgs. That requires a bit of care at 24V, quite a bit more care at 48V, and very specialized topologies at 96V.
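For reference, here's that dissipation math run at a few bus voltages (the 100 kΩ value is from above; the voltages are just examples):

```python
# P = V^2 / R for a 100 kΩ resistor across the bus at various voltages.
def dissipation_w(volts: float, ohms: float = 100_000) -> float:
    return volts ** 2 / ohms

for v in (12, 48, 100):
    print(v, round(dissipation_w(v), 4))
# 12 V  -> 0.0014 W
# 48 V  -> 0.023 W
# 100 V -> 0.1 W (the figure quoted above; check it against the part's power rating)
```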
A lot of the complexity already moved to motherboards. CPU and ram voltages are generated on the motherboard by necessity --- they don't run at 5v or 3.3v, they are adjustable in small steps based on load and configuration.
The problem with the current design is how to divide the power between the 12v, 5v and 3.3v rails.
If you put most on 12v but you have a motherboard pulling more from 5v, you have a problem. Currently power supplies divide it based on average motherboard usage, but these slowly change, and your old power supply will stop having the optimum mix.
For at least ~20 years, the typical solution for ATX power supplies has been to make one big 12V rail, and then tack on up to about 120W of converters to step down from 12V to 5V and 3.3V. In practice, this has meant that anyone running less than a dozen disk drives has excess capacity on their 5V rail, but no capacity from the 12V rail was sacrificed to provide that.
There was a time when the 5V and 3.3V rails on ATX power supplies weren't vestigial, but that era was shorter than the current era of all the major high-power components using a 12V supply.
For a while, some PSUs were dividing their single 12V rail into separate outputs with individual current limits, to more closely adhere to older ATX standards. Consumers were not interested in managing the complexity of balancing power between virtual rails while CPU and GPU power were skyrocketing.
USB uses 5V, 9V, 15V, 20V, 28V, 36V or 48V. M.2 uses 3.3V. But those are all interface voltages, they're almost always stepped down on device to the actual voltage used.
RAM doesn't have an interface voltage, it uses the voltage supplied, typically around 1.2V.
As I understand it, 48V makes sense for the long cable runs of PoE and server racks, as well as the varying voltage needs of cars. But a PC doesn't have such runs, and 12V is already higher than any PC component's voltage. It seems like a good tradeoff.
This is correct. I see a future where there is an optional 48VDC line run in our homes to power stuff like LED lights and dedicated USB ports. This can become especially important for homes that use solar and stored electricity, where you might have to go from 48VDC -> 120/240VAC -> 12VDC -> 5VDC (see note), with each conversion losing some power because these steps cannot be 100% efficient.
Note: a lot of flyback-style converters are set up to step the AC voltage down to roughly 1/10th (12V from 120VAC, 24V from 240VAC) because transformers are cheap and abundant for that ratio. It's often cheaper to then put in a buck converter that takes anything from 12-24V and converts it to the fixed required voltage.
240VDC is way too dangerous for residential use. And really it's not super practical to drop that down to 5v for charging your widgets. 48V makes much more sense.
And for that matter, I wouldn't want 240VDC on utility poles either. If a line goes down, it's so much more dangerous.
The more sane solution would be a high capacity DC converter installed next to the line transformer that feeds your street. DC infrastructure inside the home makes total sense at moderate voltages. City scale DC at line voltages is honestly terrifying.
Irrelevant. It's virtually the same risk; the only minor difference between 240V AC and DC is that a hypothetical arc is harder to extinguish in DC (AC benefits slightly from the oscillation), but if we're at the point where wires are arcing, we've already got major problems.
Why not just throw in a DC/DC converter from 48 to 12V? You need protection anyway (so that failure won't take down your whole 48V microgrid) and perhaps even management.
No, 12V is already too low for the most power-hungry PC components (GPUs), as witnessed by the trouble nVidia has had with power connectors on the 4090.
If they're going to go to the trouble of coming up with a new standard, it's asinine to base it on 12 volts, which is already known not to be sufficient. They need to use 48V.
That's not how it works. The GPU receives power at 12V via the PCIe 8/6+2 connector at 20-30 amps or so. It then steps that down to ~1 volt at the point of load, at hundreds of amps, but that's irrelevant.
The problem is that safely handling "20-30 amps or so" cannot really be done with cheap consumer-level Molex connectors, or at least should not be done, ideally. More expensive connectors are needed, so what they are doing now is asking for trouble. However, the voltage doesn't determine how hot it gets; only the current influences that. If they were distributing power at 48 volts, the current would be about 1/4 what it is now, and overheating at the connector would not be an issue.
Say a 600W GPU: at 12V that's 50A. Screw-based connectors would do just fine. Delivering 48V would require stepping it down to some other (sub-20V) voltage first; that part would live on the GPU.
Yes, certainly, using better connectors would head off overheating problems. That's not free. Something else that isn't free is extra copper in the wire harnesses from the PSU.
Meanwhile, it's true that point-of-load regulation from 48V directly to 1V will be less efficient (well, maybe; see nimish's link below). But by the same token, regulation in the PSU would be more efficient if it didn't have to go all the way down from 150-300 VDC on the primary side down to 12 at the output.
A good compromise would probably be the geometric mean between a few hundred volts and the voltage at the load with the highest current. Assuming 250 volts on the primary as a compromise between 120/240VAC operation and 1V all the way downstream, that would be about 16 volts. Doesn't seem like a big improvement, but it would be enough to lower the current at the GPU input connector by about 1.3x compared to 12 volts.
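Worked out (using the assumed 250 V primary and 1 V load from above):

```python
import math

# Geometric-mean "compromise" distribution voltage between the rectified primary
# and the point-of-load voltage, per the assumptions above.
primary_v = 250.0   # assumed rectified mains
load_v = 1.0        # CPU/GPU core voltage

compromise_v = math.sqrt(primary_v * load_v)
print(round(compromise_v, 1))        # ~15.8 V
print(round(compromise_v / 12, 2))   # ~1.32x less connector current than at 12 V
```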
Either way, sticking with 12 volts in any new standard is just stupid. 48 has another advantage, which is that automotive is going to (finally) start moving to it over the next few years. There could be a lot of room for parts-sharing between the two industries.
48V -> 1V (for the cpu) would be super extra funny, though. The power stages would need to be high voltage mosfets.
No idea if a single conversion 48 ->1V at 400A+ would make any sense.
Also the capacitive coupling at 48V is quite a bit more dangerous.
Note that the paper you linked is still doing a two-step conversion - they essentially just removed the capacitors between the two steps. It doesn't do 48V -> 1.5V, it does 48V -> 8V -> 1.5V. They even specifically point out the downsides of single-stage conversion topologies.
"It leverages the high performance
of low voltage DrMOS devices and multiphase buck control", drmos is driver and mosfet.
As mentioned (by a sibling comment), it's not a single conversion, but efficiency is not the prime issue regardless - it's the reaction time and the voltage sag. Load insertion causes the voltage at the CPU (inside) to drop and be significantly lower than the one measured at the VRM, due to losses at such high currents.
The lack of voltage simply causes the systems to crash.
You convert it to a lower voltage by switching it on and off very rapidly and smoothing out the result. That's going to behave a lot more like AC than DC.
That needs a two stage converter down to chip voltages, doesn't it?
It's a reduction in cables, but I wouldn't say drastic. An 8-pin with good components can handle over 500 watts at 12 volts. Just modernizing gives you most of the possible benefit.
Eh, 500 watts at 12 volts is 42 amps. Assuming half of those 8 pins are positive and the other half negative, that's about 10.5 amps per pin.
You’d better not have any corrosion in that connector, or a loose wire if you want to be able to do that without fire risk. And you’ll get non-trivial resistive heating in many cases over time.
Doing the same at 48 volts is 10.5 amps total, 2.6 amps per pin. Hard to mess that up.
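Spelled out (same 500 W figure, assuming 4 of the 8 pins carry current in each direction):

```python
# Per-pin current for a given load at 12 V vs 48 V on an 8-pin style connector
# (4 pins carrying +V, 4 return), using the figures discussed above.

def amps_per_pin(power_w: float, volts: float, live_pins: int = 4) -> float:
    return power_w / volts / live_pins

print(round(amps_per_pin(500, 12), 1))   # ~10.4 A per pin at 12 V
print(round(amps_per_pin(500, 48), 1))   # ~2.6 A per pin at 48 V
```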
I mean, the much smaller pins on the 12-pin got rated for 8.3 amps. And yes that connector does have issues, but they're around bad latching much more than current.
I was disappointed they did not make the jump to 19V, which is standard in laptops. A far less drastic change which is already handled by component manufacturers.
This seems dumb to me, it just moves parts of the power supply to the motherboard. Now if there's a power problem you have to replace the entire motherboard instead of just the power supply. In a couple of years you'll have an old motherboard with no new replacement options, if it fails you basically need a new computer because newer motherboards aren't compatible with your current CPU/RAM...
Most of the motherboard's power consumption is dedicated to supplying the CPU, and that already relies on the motherboard to step down from 12V to ~1V at a very high current. The demands on the 3.3V and 5V rails are pretty small in comparison.
This change is unlikely to make motherboards significantly more failure-prone than they are now.
To be fair the CPU voltage regulators are already there, all this would require is adding a small 5V and 3.3V stage on as well. Since most of the power goes to the CPU.
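To put rough numbers on that (CPU wattage, Vcore, and phase count below are illustrative assumptions, not any particular board):

```python
# The motherboard VRM already steps 12 V down to ~1 V for the CPU at large currents;
# the figures below are assumptions for illustration only.

cpu_power_w = 150          # a midrange desktop CPU under load (assumption)
core_voltage = 1.1         # typical-ish Vcore (assumption)
phases = 8                 # hypothetical VRM phase count

total_amps = cpu_power_w / core_voltage
print(round(total_amps))            # ~136 A into the CPU socket
print(round(total_amps / phases))   # ~17 A per VRM phase

# By comparison, the extra 5 V / 3.3 V rails a 12VO board must generate are
# typically only tens of watts.
```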
If you're designing a new power delivery method from scratch, I wonder if 12V is the right voltage.
It seems you could take a page out of the laptop handbook and go to USB-C 20V standard. There must already be a whole ecosystem of 20V capable PMICs / SMPS already, so this doesn't seem like it would be a drastic change.
Heck, it seems like we should just jump to the USB C power delivery standard for desktop PCs. The high end is currently 48V / 240W (which I'm guessing is a limitation of the connector + max safe voltage).
Either go with a different design for the connector to get above 240W, or just parallel up 2-4x USB C connectors internally. That would be more than sufficient.
I'm sure you'd be taking advantage of some of the production scale of laptops.
And this would allow easy extension to stuff like power external monitors etc from the desktop PC directly. Being able to charge a laptop and have an external display (or two) all connected to the desktop PC instead of their own dedicated power bricks would be awesome.
This is a far cry from the AT days when you could swap the motherboard power connectors and let all the magic smoke out. Those were the days. I never did it myself, but I watched a guy who was doing a working interview for a job (to be fair I was doing the same!) smoke a computer that way during his interview. They hired him anyway, and they also hired me. It ended up being the worst job I'd had to date. I don't miss the crappy computer shops of the late 90's. The good ones though, those were gold.
> Power supplied on the bus is bulk unregulated +8 Volt DC and ±16 Volt DC, designed to be regulated on the cards to +5 V (used by TTL ICs), -5 V and +12 V for Intel 8080 CPU IC, ±12 V RS-232 line driver ICs, +12 V for disk drive motors. The onboard voltage regulation is typically performed by devices of the 78xx family (for example, a 7805 device to produce +5 volts). These were linear regulators which are commonly mounted on heat sinks.
I'd love to see some 12VO mini-itx boards and SFX or flex PSUs. The most challenging thing when it comes to building a small form factor PC is routing the ATX power cable and that connector is so stiff god forbid you ever need to disconnect it to do some maintenance. Small form factor PCs have benefited a lot since the move away from IDE drives but things could still be significantly improved to get more power into a small space.
Ironically this might actually be worse for mini-itx. In the article's images, everything to the right of the RAM slots is essentially voltage converters. Chop off everything below the PCIe slot to get a mini-itx board, and it's still 30% bigger than we're seeing today.
It might allow for a slightly smaller PSU, but you get a substantially larger motherboard back in return, as it needs to include voltage conversion for all the peripherals you could potentially attach. If you're making a truly tiny build you won't have many peripherals anyway, so all those parts are wasted, and something like a Pico PSU might already be a better option today.
With the rapid uptick in power draw by CPUs and GPUs, perhaps it's time for ATX to go and an entirely new standard to come in[0], ditching the antiquated 12V rail and stepping up to 24 or 48V. We've got the twin kludges of the ever-growing main power plug to the motherboard and the growth of supplemental power to the GPU: first a 6-pin, then 8, then two 6's, and so on, and now 12VHPWR with its attendant teething pains. When is enough enough?
While we're dreaming, could we also replace M.2 slots on desktop boards with U.2 or OCuLink connectors? M.2 is optimized for thin, light portable devices like laptops and tablets; it makes zero sense in a desktop, where it basically just wastes board space and makes cooling more awkward.
Businesses seem to prefer incremental changes over large ones. Look at IPv4 vs IPv4. The two are completely incompatible and decades later we still haven’t changed.
12v is a good compromise because PSUs can be manufactured to support both the new and old standard and motherboards could even accept ATX power inputs and just ignore the 5v and 3v rails.
It allows for a transition period that’s much smoother than a complete revamp. It’s the more pragmatic solution.
I'm not sure if this was a typo or deliberate, but either way the irony is beautiful.
>12v is a good compromise because PSUs can be manufactured to support both the new and old standard and motherboards could even accept ATX power inputs and just ignore the 5v and 3v rails.
The biggest sticking point with 12VO is that it's physically incompatible with ATX. 12VO doesn't provide the voltages ATX expects (obviously), nor does 12VO provide compatible physical connections (this incompatibility is a selling point).
ATX's biggest selling point and why we still use it to this day is its backwards and forwards compatibility. Outside of certain outliers you can just expect an ATX PSU to work with ATX equipment, regardless of when you buy or bought the parts; the virtues of a well-followed industry standard.
I agree with your larger point. That said, here in 2023 in the US, my phone only gets an IPv6 address from its LTE network. Of course there's some NAT6to4 somewhere that makes this all transparent to me, and I can still access v4-only resources just fine.
It pains me that in my home server setup, I have solar panels that produce DC, then an inverter converts that to AC for the wall outlet, then my UPS converts that to DC to charge its battery, which converts it back to AC to plug my servers into, then the server PSU converts it to DC to power the components. All that switching has to waste a lot of energy (DC->AC->DC->AC->DC).
Supposing each conversion is 90% efficient, you're looking at 65.6% overall. If your server is around 300W, running 24/7 (which is what mine was before my power-reduction project), you're looking at 17kWh of waste/wk. That's quite a lot. Even if you go up to 95% it's still 13kWh...
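Roughly, the arithmetic (per-stage efficiencies as assumed above; the exact waste figure depends on whether the 300 W is measured at the wall or at the components):

```python
# Cumulative efficiency of the DC->AC->DC->AC->DC chain and resulting weekly waste.
# The 90%/95% per-stage figures and the 300 W load are the assumptions from above.

def chain_efficiency(per_stage: float, stages: int = 4) -> float:
    return per_stage ** stages

for eta in (0.90, 0.95):
    overall = chain_efficiency(eta)
    # Treating 300 W as the power drawn from the solar side:
    wasted_w = 300 * (1 - overall)
    weekly_kwh = wasted_w * 24 * 7 / 1000
    print(f"{eta:.0%}/stage -> {overall:.1%} overall, ~{weekly_kwh:.0f} kWh wasted/week")
# 90%/stage -> 65.6% overall, ~17 kWh wasted/week
# 95%/stage -> 81.5% overall, ~9 kWh wasted/week
```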
90% is rather low today. BYD has a battery system with a round-trip efficiency of 96%. Solar inverters often have >99% efficiency, although it varies across their operating range, dropping at the low end as they are optimized for peak power.