News The 12VO power standard appears to be gaining steam — new standard will reduce PC cabling and costs


bit_user

Polypheme
Ambassador
IIRC, @InvalidError made the case that 24V would've reduced overall cost, system-wide. Since nothing uses 12 V natively, it would've made PSUs cheaper to go with 24 V, and wouldn't have made much difference at the motherboard level.

I wonder if the prominence of 19 V power bricks might be why they didn't go with 24 V. It's probably a stretch as an explanation, but you do sometimes see those on mini-ITX systems.
 

InvalidError

Titan
Moderator
IIRC, @InvalidError made the case that 24V would've reduced overall cost, system-wide. Since nothing uses 12 V natively, it would've made PSUs cheaper to go with 24 V, and wouldn't have made much difference at the motherboard level.
IIRC, what I have said in the past is that 24V is the highest voltage that can be handled with inexpensive MOSFETs. Once you go beyond 24V, parts start getting significantly more expensive. That makes 24V the highest voltage that could be cost-effectively used to boost power supply and distribution efficiency.

Another reason for 24V is that if PCs gain many USB4+ ports in the future, each of those ports will need 20V capability, and it is much easier to implement those as buck-only converters from 24V than as buck-boost converters from 12V.
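
To put rough numbers on that, here is a minimal sketch using only the ideal-buck relation D = Vout/Vin (the helper function is illustrative; the 24V/12V rails and the 20V port requirement are the figures from the paragraph above):

```python
# Ideal buck converter: D = Vout / Vin, only workable when Vout < Vin.
def buck_duty_cycle(v_in: float, v_out: float):
    """Return the ideal duty cycle, or None if a plain buck can't do it."""
    return v_out / v_in if v_out < v_in else None

for v_in in (24, 12):
    d = buck_duty_cycle(v_in, 20)  # USB4/USB-PD ports need 20 V out
    print(f"20 V from a {v_in} V rail:",
          f"simple buck at D = {d:.0%}" if d else "needs boost/buck-boost")
# 20 V from a 24 V rail: simple buck at D = 83%
# 20 V from a 12 V rail: needs boost/buck-boost
```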

Not sure what laptop adapters have to do with this. The 19V there is simply because it is a convenient voltage for charging 4S NMC/NCA battery packs: 19V from the adapter - 0.5V battery-ORing diodes - charging inductor losses - charging PWM FET losses - current measurement shunt losses - wiring losses - etc. = 17V (4.25V/cell) end-of-charge voltage. Most VRMs in a laptop run off of whatever the system power rail is at, which is the battery voltage while the laptop is on battery power, so you already have plenty of examples of VRMs able to take 10-21V input. Making those components 24VO-compatible would only require bumping the absolute maximum voltage tolerance from the 23V or so they already have to about 28V.
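
A minimal sketch of that 19V charge-budget arithmetic (the 19V adapter, 0.5V ORing-diode drop and 4.25V/cell end-of-charge figures are from the text above; lumping the remaining inductor/FET/shunt/wiring losses into ~1.5V is an illustrative assumption so the chain closes):

```python
# 4S pack end-of-charge target
v_pack_eoc = 4 * 4.25                  # 17.0 V

# Adapter budget: 19 V minus the drops listed above
v_adapter = 19.0
v_oring_diode = 0.5                    # battery-ORing diodes
v_misc_losses = 1.5                    # assumed: inductor + PWM FET + shunt + wiring
v_left = v_adapter - v_oring_diode - v_misc_losses

print(f"pack needs {v_pack_eoc:.1f} V, 19 V adapter leaves {v_left:.1f} V")
# pack needs 17.0 V, 19 V adapter leaves 17.0 V
```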
 

edzieba

Distinguished
12VO is a good stopgap to make forward and backward compatibility with ATX PSUs and boards easier though - just need a dumb adapter one way and a low-power 5V/3.3V buck converter the other. I'd have been happier with 48VDC - standard for datacentre DC distribution, so no shortage of components available to handle it. Quarter the current for the same power draw (e.g. 300W cards from PCIe card-edge power).
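
The current scaling is plain I = P/V arithmetic; a quick check with the 300W card-edge example from above:

```python
# I = P / V: the same 300 W draw at different distribution voltages
power_w = 300
for volts in (12, 24, 48):
    print(f"{power_w} W at {volts:>2} V -> {power_w / volts:5.2f} A")
# 300 W at 12 V -> 25.00 A
# 300 W at 24 V -> 12.50 A
# 300 W at 48 V ->  6.25 A   (a quarter of the 12 V current)
```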
 

bit_user

Polypheme
Ambassador
Thanks for the info, as always.
: )

Glad to see you around. Happy holidays!
☃️ 🎅 ✨

Not sure what laptop adapters have to do with this.
Sorry, just making a random guess why they went for 12 V.

Perhaps a better one would've been compatibility with existing PSUs? Would it be conceivable to use an ATX PSU with one of these new boards via a simple intermediate connector, or does the existing ATX PSU spec not require enough current on the 12 V rail?

If not that, do you have any theories about why they went with 12 V instead of 24 V?
 

Amdlova

Distinguished
Can 24 V really be that dangerous, if USB is using 20 V?
99.99% of the time it can't kill.
But you never know what a dumb user can do :)
 

bit_user

Polypheme
Ambassador
99.99% of the time it can't kill.
But you never know what a dumb user can do :)
Even if they press on a PCB with pointy barbs hard enough to puncture their skin, that's still not very dangerous. Current becomes dangerous when it flows across your heart muscle.

My dad taught me, whenever you're handling a potentially live wire (talking about 115 VAC here), to keep one hand in your pocket. That helps you remember not to touch a ground with the other.
 
Thanks for the info, as always.
: )

Glad to see you around. Happy holidays!
☃️ 🎅 ✨


Sorry, just making a random guess why they went for 12 V.

Perhaps a better one would've been compatibility with existing PSUs? Would it be conceivable to use an ATX PSU with one of these new boards via a simple intermediate connector, or does the existing ATX PSU spec not require enough current on the 12 V rail?

If not that, do you have any theories about why they went with 12 V instead of 24 V?
PCIe is on 12V if I'm not mistaken; stepping that down at the mobo level would increase costs.
 

bit_user

Polypheme
Ambassador
I'd have been happier with 48VDC - standard for datacentre DC distribution, so no shortage of components available to handle it.
Again, what I think @InvalidError said about this is that 48 V wouldn't offer much/any cost-savings in the PSU, but would add significant cost to the motherboard. So, the least system-wide cost/complexity would seem to be at 24 V.

Quarter the current for the same power draw (e.g. 300W cards from PCIe card-edge power).
On a related note, just imagine how much grief could've been avoided if PCIe used 24 V instead of 12 V, for the 12VHPWR connector!
 

InvalidError

Titan
Moderator
Glad to see you around. Happy holidays!
☃️ 🎅 ✨
Happy holidays to you too!

If not that, do you have any theories about why they went with 12 V instead of 24 V?
Simple: inertia. By sticking with 12V, they avoid the additional friction of needing to refresh just about everything else for a voltage only the newest of new stuff supports. 12VO split the baby in half.

Again, what I think @InvalidError said about this is that 48 V wouldn't offer much/any cost-savings in the PSU, but would add significant cost to the motherboard. So, the least system-wide cost/complexity would seem to be at 24 V.
You can efficiently step 24V down to 1V using simple buck regulators but in a buck regulator, the high-side FETs need to be able to pass the full load current with a 4-5% duty cycle divided across phases. If a buck regulator outputs 50A per phase, you need high-side FETs able to pass 50A and those will be almost twice as big at 48V, which also means about twice as expensive to manufacture and twice as hard to drive. This isn't practical beyond 24V. If you want to efficiently step 48V directly down to 1V, you have to use voltage transformers instead of chokes, which significantly increases costs and complexity.
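
A quick sketch of the duty-cycle side of that argument, using only the ideal-buck relation D = Vout/Vin (the 50A-per-phase figure is the example above; the FET die-size/cost scaling described there isn't computed here):

```python
# Ideal buck: D = Vout / Vin. The high-side FET conducts the full
# phase current only during that small fraction of each cycle.
v_out, i_phase = 1.0, 50.0            # 1 V core rail, 50 A/phase example
for v_in in (12, 24, 48):
    d = v_out / v_in
    print(f"{v_in:>2} V -> {v_out:.0f} V: D = {d:.1%}, "
          f"FET must still pass {i_phase:.0f} A and block >{v_in} V")
# 12 V -> 1 V: D = 8.3%
# 24 V -> 1 V: D = 4.2%   (the 4-5% figure above)
# 48 V -> 1 V: D = 2.1%   (half the on-time, twice the blocking voltage)
```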

The most efficient 48V point-of-load regulators are built as PCB transformers: the windings are integrated as extra layers into the PCB and then ferrite cores are glued on both sides to complete the transformer. If you think PSUs and motherboards are expensive, wait until your PCBs get a dozen extra layers to do this!

That is why we only see 48V in telecom and datacenter stuff, where cost doesn't matter much.

My dad taught me, whenever you're handling a potentially live wire (talking about 115 VAC here), to keep one hand in your pocket. That helps you remember not to touch a ground with the other.
In tech school, one of my teachers in the AV classes where we worked on open-frame CRTs told us that we are far more likely to injure ourselves by over-reacting to shocks than by the shocks themselves. He also told us to keep our left hand behind our back when poking around anything with a high probability of shock, which is practically the same advice as your dad's.
 

jlake3

Distinguished
PCIe is on 12V if I'm not mistaken; stepping that down at the mobo level would increase costs.
Oh gosh. You'd either need 600W+ of DC-to-DC conversion on every motherboard just in case, have less than 600W and end up with a whole compatibility matrix of which cards are supported by which motherboards, or have to get PCIe card manufacturers (mostly GPU vendors, but there are some other niche devices) to all agree to make 24V versions of all their products with a different connector, so no one tries to put the 12V version into a new 24VO system.

I know Dell pioneered proprietary 12V-only PSUs before the 12VO standard came out, and I really hope everyone gets on board with a standard version of the SATA power output from the motherboard. I got a couple of systems as salvage where someone either damaged that cable or pulled it along with the drives, and that was so annoying.
 

atomicWAR

Glorious
Ambassador
I imagine this will help with airflow as well. 24-pin ribbon cables, even when tucked properly, really throw off the airflow to some parts of my motherboard and an NVMe drive (canned-smoke tested). Obviously this isn't testing I should do regularly, as it's a hassle to do with proper ventilation, but it has shown me in the past where/how it's best to tuck my cables away. Regardless, any improvement to airflow would be welcome, be it via 12VO or rear-mounted mobo cables.
 

BillyBuerger

Reputable
OEMs like Dell and HP have already been using a form of 12VO for many years to increase efficiency. What 12VO will hopefully help with is removing the non-standard parts they've been using, so that you can more easily repair their systems with off-the-shelf replacements that are directly compatible. But I know Dell likes to use its own form factor for mounting PSUs, so I'm sure they'll still make it difficult.
 
Again, what I think @InvalidError said about this is that 48 V wouldn't offer much/any cost-savings in the PSU, but would add significant cost to the motherboard. So, the least system-wide cost/complexity would seem to be at 24 V.


On a related note, just imagine how much grief could've been avoided if PCIe used 24 V instead of 12 V, for the 12VHPWR connector!
True. However, I think they kept 12V to allow "dumb" PCIe x1-to-PCI adaptors to work. We're talking about the beginning of the 2000s; at the time, nobody thought PC components would use such wattage. The CPU was getting thirstier, but GPUs ran off the bus (AGP) and most other components were being integrated into the motherboard, reducing overall load. Back then, a gaming PC would contain a 90W CPU, a SCSI controller board, a CD burner, a DVD burner, a couple of mechanical HDDs and a pair of graphics cards (2D and 3D), the lot running off a 300W PSU.
 

mickrc3

Distinguished
I wonder if I'm the only one who doesn't like the idea of giving up SATA power connections.

I've got a 2.5in SATA SSD dedicated as my Windows swap drive, two Blu-ray drives, four hard drives, and a USB/SATA/eSATA/Molex/card reader hub currently using my SATA power connections.
 

BillyBuerger

Reputable
I wonder if I'm the only one who doesn't like the idea of giving up SATA power connections.

I've got a 2.5in SATA SSD dedicated as my Windows swap drive, two Blu-ray drives, four hard drives, and a USB/SATA/eSATA/Molex/card reader hub currently using my SATA power connections.
I am perfectly fine with giving up direct SATA power cables. My desktop PC has no SATA, just a single M.2 drive. I have done 2x M.2 drives, with one for the OS and the second for additional data. My Blu-ray drive is USB and only connected when needed, which is rare. The only SATA I use is in my server, which has a single 6TB SATA drive; that's plenty of NAS storage for me. If I need more, I could just replace it with a larger single drive. Backups are done over USB to a matching drive. A single SATA power connector off the motherboard would power that just fine, even for this system. But since my server is generally my previous desktop system, it won't be going 12VO for many years.

It's also possible that a 12VO PSU could still have a smaller 12V->5V converter onboard and supply additional SATA power connectors for drives, fans and other accessories. The motherboard would provide its own 5V and 3.3V, but the PSU could still do some of that for other things. This doesn't remove SATA completely from PCs.
 

JamesJones44

Reputable
On a related note, just imagine how much grief could've been avoided if PCIe used 24 V instead of 12 V, for the 12VHPWR connector!
On a tangent, I've been curious as to why they haven't used opportunities like this to increase the amount of power available via PCIe ports to do away with extra cables to cards that draw more than 75 watts. Signal interference? Reduced costs (fewer pins)?
 

Kamen Rider Blade

Distinguished
I imagine this will help with airflow as well. 24-pin ribbon cables, even when tucked properly, really throw off the airflow to some parts of my motherboard and an NVMe drive (canned-smoke tested). Obviously this isn't testing I should do regularly, as it's a hassle to do with proper ventilation, but it has shown me in the past where/how it's best to tuck my cables away. Regardless, any improvement to airflow would be welcome, be it via 12VO or rear-mounted mobo cables.
This is why I wish all the connectors along the right/bottom/top edges of the MoBo were "Right-Angle" connectors with proper labeling & shielding; then you could have "Right-Angle" connections to everything.

Imagine how much cleaner that would be if you got everybody to work together on the MoBo/PC Case/PSU side to all follow a simple design guideline.

Right-Angle heads for most cabling aren't new or special.
It's just a matter of getting everybody to agree on using them as a common standard.


I wonder if I'm the only one who doesn't like the idea of giving up SATA power connections.

I've got a 2.5in SATA SSD dedicated as my Windows swap drive, two Blu-ray drives, four hard drives, and a USB/SATA/ESATA/Molex/Card Reader hub currently using my SATA power connections.
I like having all options available to me.

That's why I'm a big proponent of U.3 being the next "Universal Connector" and letting the MoBo have all "U.3" connectors that we can adapt to whatever we want.
Many of those SATA/SAS/U.2/U.3 drives will need SATA power connectors on the other end.

On a tangent, I've been curious as to why they haven't used opportunities like this to increase the amount of power available via PCIe ports to do away with extra cables to cards that draw more than 75 watts. Signal interference? Reduced costs (fewer pins)?
I assume it's far cheaper to shield only the cables necessary for the extra wattage than to "over-engineer" every MoBo to support more than 75 watts at the board-layer level.

I am perfectly fine with giving up direct SATA power cables. My desktop PC has no SATA, just a single M.2 drive. I have done 2x M.2 drives, with one for the OS and the second for additional data. My Blu-ray drive is USB and only connected when needed, which is rare. The only SATA I use is in my server, which has a single 6TB SATA drive; that's plenty of NAS storage for me. If I need more, I could just replace it with a larger single drive. Backups are done over USB to a matching drive. A single SATA power connector off the motherboard would power that just fine, even for this system. But since my server is generally my previous desktop system, it won't be going 12VO for many years.

It's also possible that a 12VO PSU could still have a smaller 12V->5V converter onboard and supply additional SATA power connectors for drives, fans and other accessories. The motherboard would provide its own 5V and 3.3V, but the PSU could still do some of that for other things. This doesn't remove SATA completely from PCs.
All that does is move the conversion from the PSU to the MoBo.
IMO, that's not necessary and just forces the problem onto the MoBo vendors.

Modern MoBos are ALREADY too expensive with all the extra crap they put on; we don't need to give them another task / excuse to jack up the costs of MoBos.
 

bit_user

Polypheme
Ambassador
I wonder if I'm the only one who doesn't like the idea of giving up SATA power connections.
I'd imagine modular power supplies will continue to include SATA power support for quite a while. However, what you'll probably find is that motherboards will provide exactly the same number of SATA power connections as SATA data connections. So, the only time you should need to connect directly to the PSU is when using a PCIe SATA controller card that doesn't have its own SATA power connectors.
 

bit_user

Polypheme
Ambassador
On a tangent, I've been curious as to why they haven't used opportunities like this to increase the amount of power available via PCIe ports to do away with extra cables to cards that draw more than 75 watts. Signal interference? Reduced costs (fewer pins)?
Most likely it's an issue of cost, since building all boards to pass 675 W (56.3 A at 12 V), when most people won't use nearly that much, would add a lot of unnecessary cost for everyone. Not only that, but imagine having to supply that amount of power to multiple slots!

As mentioned before, it wouldn't be so bad if they simultaneously increased the voltage. Each doubling of voltage would halve the amount of current the board would have to pass.
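
To put numbers on it, a quick I = P/V check (675 W presumably being the 600 W connector ceiling mentioned earlier plus the 75 W slot budget):

```python
# Each doubling of the distribution voltage halves the slot current
power_w, volts = 675, 12
while volts <= 48:
    print(f"{power_w} W at {volts} V -> {power_w / volts:.2f} A")
    volts *= 2
# 675 W at 12 V -> 56.25 A
# 675 W at 24 V -> 28.12 A
# 675 W at 48 V -> 14.06 A
```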
 

bit_user

Polypheme
Ambassador
This is why I wish for all the connectors on the edge of the MoBo to the Right/Bottom/Top to be "Right-Angle" connectors with proper labeling & Shielding, then you can have "Right-Angle" connectors to everything.
Those require a certain amount of room between the edge of your board and any obstruction. I think there are good reasons why it's not universally done like that.

That's why I'm a big proponent of U.3 being the next "Universal Connector" and letting the MoBo have all "U.3" connectors that we can adapt to whatever we want.
U.3 includes support for SAS and SATA, which makes it more expensive and complex for the host to implement. For U.2, you just wire it up to a PCIe link and you're done.

Worse, U.3 boards aren't backward compatible with U.2 drives. This relegates it to specialized server applications, where a backplane needs to support either U.3, SAS, or SATA drives. For most other purposes, it makes even less sense than U.2, which itself didn't really catch on - in case you haven't noticed.
 