News Report: Intel Pushing ATX12VO Power Connector for 12th Gen Alder Lake CPUs

I hope that this standard never takes off.
SATA power connectors still require 5V, but now the motherboard has to generate that 5V with a DC-DC converter. So very efficient...
5V is the main voltage of USB and that is not going to change, so the motherboard will have those DC-DC converters active pretty much always, including in the "power off" state.
 
I'm all for this. The 24-pin ATX connector is a huge mess of cables. The motherboard already has to convert the 12V power to usable voltages for things, so it makes sense for it to handle the 5V and 3.3V as well. If I'm not mistaken, Google did this in their data centers years ago for exactly the same reasons mentioned. It makes for a simpler, more efficient power supply. Sure, there will be growing pains as things have to change, and adapters will be needed in the interim. But it will be worth it in the long run. As for SATA or other peripherals, I have very few of those left now with M.2 storage, and very little need for optical media, which I can do over USB.
 
  • Like
Reactions: purple_dragon
I hope that this standard never takes off.
SATA power connectors still require 5V, but now the motherboard has to generate that 5V with a DC-DC converter. So very efficient...
5V is the main voltage of USB and that is not going to change, so the motherboard will have those DC-DC converters active pretty much always, including in the "power off" state.
My understanding is that most power supplies already convert from AC to 12V DC, then from there to other rails, so it's mainly just moving where the conversions happen. SATA wouldn't be that much of an issue since PCIe is slowly taking over with SSDs, and HDDs are going to be fed with a cable directly from the power supply anyway.

USB is definitely going to be annoying, especially with standby voltages so your keyboard can still wake your PC, but I'm guessing it'll be a moot point with point-of-load converters being such a big deal these days. I don't want to think about how much more complex the next USB protocol would be if they have to deal with anything other than 5V during the initial connection.
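Quick toy sketch of the "moving where conversions happen" point: series conversion stages multiply, so cutting one stage out of the 5V path is where the gain comes from. The per-stage efficiencies below are made-up illustrative figures, not measurements.

```python
# Stacked conversion stages multiply their losses, so removing one stage
# from the path improves overall efficiency. Numbers are assumptions.
from functools import reduce

def chain_efficiency(stages):
    """Overall efficiency of DC-DC stages connected in series."""
    return reduce(lambda a, b: a * b, stages, 1.0)

legacy = chain_efficiency([0.92, 0.90, 0.88])  # AC->12V, PSU 12V->5V, 5V->1V PoL
vo12 = chain_efficiency([0.92, 0.90])          # AC->12V, 12V->1V PoL on the board
print(f"three-stage legacy path: {legacy:.1%}")  # ~72.9%
print(f"two-stage 12VO path:     {vo12:.1%}")    # ~82.8%
```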
 
  • Like
Reactions: purple_dragon
It is the future, with SATA rapidly fading away due to M.2 storage, so adoption may as well start now. Frankly, I'm tired of cramming 24-pin cables in and prying them back out. Change is inevitable and hopefully for the better.
 
Intel is going about this the wrong way IMO: if it really wanted to make 12VO the default option in the future, it should start with getting the PCI-SIG to make a new PCIe version based on 12VO, likewise for the USB-IF, and perhaps a 12VO SATA replacement too. Once everything is 12V by default, then 12VO would make sense. Otherwise, you are just moving the 5V and 3.3V regulators from the PSU, which has a fair amount of space to spare, to the motherboard, which is already quite cramped.

If you are going to screw around with long-standing power supply standards, you may as well go all the way and convert everything to a nominal 20VO for another 5% of efficiency and be able to directly attach high-power external devices like monitors, eliminating the need to boost from 12V to 20V for devices that can use it. Make the spec tolerant of 18-25V, and you can eliminate the intermediate DC-DC conversions in and out of the battery for another 5-10% higher efficiency in laptops using a 6S pack.
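Back-of-the-envelope for the skeptical (the 3.0V/4.2V per-cell figures are typical Li-ion assumptions):

```python
# At fixed power P, current I = P/V, so cable/connector I^2*R conduction
# loss scales with (V_old/V_new)^2. Cell voltages are typical assumptions.

def loss_ratio(v_old: float, v_new: float) -> float:
    return (v_old / v_new) ** 2

print(f"12V -> 20V: conduction loss drops to {loss_ratio(12, 20):.0%} of original")

cells = 6  # a "6S" laptop pack: six Li-ion cells in series
print(f"6S pack range: {cells * 3.0:.1f}V to {cells * 4.2:.1f}V")  # 18.0V to 25.2V
```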

I don't want to think about how much more complex the next USB protocol would be if they have to deal with anything other than 5V during the initial connection.
There is a simple way of avoiding that: a new connector dedicated to 20VO high-speed devices. I'd propose a PCIe Type-O using 20VO power and optical fiber for data, with speeds starting at something like 32Gbps, based on PCIe hot-plug instead of USB.

It won't replace 5V for things like mice and keyboards but it would clean up all of the high-speed high-power devices. Legacy TB3/4, USB2/3/4, DP alt-mode devices, etc. could be accommodated by bridge devices as-needed for a smooth transition.

5V DC-DC converters for low-power (5W/1A or less) devices like 2.5" external HDDs, thumb drives, mice, keyboards, etc. are cheap and don't take much space, so I wouldn't be particularly worried about the motherboard having to provide those until a 12/20VO USB successor comes along. Type-C was a missed opportunity to bump the baseline standard to something like 12V/500mA.
 
Should be pushed harder. That mobo cable is just old and clunky, one of my pet peeves with PC building. I've also realized that used server power supplies just trash any existing PC PSUs.
 
Until there are at least 4 NVMe slots on a mainstream board, it will still be quite a while before that happens.
With NVMe costing four PCIe lanes a pop, I don't foresee boards with heaps of M.2 slots becoming much of a thing outside the server space and maybe HEDT. M.2 slots also consume a ton of board space unless you shove them in a PCIe riser card.

Yeah, I don't foresee SATA going away any time soon for people who want to have heaps of affordable and reliable storage. It would be nice if "SATA" got at least one last upgrade to 8Gbps NVMe 3.0x1 with SATA3 backward-compatibility. Most modern chipsets have SATA ports provided by HSIO lanes that support USB3/SATA3/PCIe3 and could probably already do this with little more than a BIOS update to attempt 3.0x1 before SATA3 fallback, plus better/shorter cables.
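For reference, the raw numbers behind that "one last upgrade" idea, counting only line-encoding overhead (protocol overhead ignored, so these are best-case figures):

```python
# SATA3 uses 8b/10b line encoding; PCIe 3.0 uses 128b/130b.

def usable_mb_s(line_rate_gbps: float, payload: int, total: int) -> float:
    """Best-case usable MB/s after line-encoding overhead."""
    return line_rate_gbps * 1e9 * payload / total / 8 / 1e6

print(f"SATA3, 6 Gbps, 8b/10b:          {usable_mb_s(6, 8, 10):.0f} MB/s")      # 600
print(f"PCIe 3.0 x1, 8 GT/s, 128b/130b: {usable_mb_s(8, 128, 130):.0f} MB/s")   # ~985
```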
 
So Intel, who hasn't demonstrated any interest in efficiency for the last 10 years, suddenly wants to revolutionize other companies' products, and undermine the ATX standard? Maybe they should engineer a modern CPU before they get back on the soapbox. They have no authority left to lobby for such drastic changes - especially those that will negatively affect cost and compatibility for everyone building PCs.
 
  • Like
Reactions: Phaaze88
With NVMe costing four PCIe lanes a pop, I don't foresee boards with heaps of M.2 slots becoming much of a thing outside the server space and maybe HEDT. M.2 slots also consume a ton of board space unless you shove them in a PCIe riser card.

Yeah, I don't foresee SATA going away any time soon for people who want to have heaps of affordable and reliable storage. It would be nice if "SATA" got at least one last upgrade to 8Gbps NVMe 3.0x1 with SATA3 backward-compatibility. Most modern chipsets have SATA ports provided by HSIO lanes that support USB3/SATA3/PCIe3 and could probably already do this with little more than a BIOS update to attempt 3.0x1 before SATA3 fallback, plus better/shorter cables.
Yep, which is why on both AMD's and Intel's mainstream chipsets (X570 and Z490/Z590) there are only 2, maybe 3, M.2 slots max. I don't see SATA going anywhere yet unless AMD and Intel add more lanes to their non-HEDT platforms. I just picked up Asus' Hyper M.2 four-slot card; while I can only use 2 of the 4 slots, I still replaced 2 mechanical HDDs with 2 M.2 SSDs, but I still have 4 mechanical HDDs on SATA :)
 
  • Like
Reactions: Soaptrail
They have no authority left to lobby for such drastic changes - especially those that will negatively affect cost and compatibility for everyone building PCs.
There is a thing called 'progress' which calls for the eventual replacement of stuff that has outlived its usefulness. The ATX standard was crafted at a time when most high-powered chips could still be reasonably accommodated by linearly regulating the voltage from the nearest appropriate supply rail. Now that nearly everything except motors runs at ~1V and requires VRMs powered from 12V, intermediate supply rails are nearly obsolete, just like ATX's -12V and -5V lines that have been dropped from the spec. Don't be surprised if the ATX, PCIe and other specs get updated to flag the 3.3V and 5V pins as obsolete in the coming years to pave the way.

Something like 12VO may take a while to phase in but is inevitable, just like the move from XT/AT PSUs to ATX, which enabled sleep mode and software-controlled on/off. At some point 5-10 years from now, most primary IO ports will be Type-C or a future connector with either a programmable bus voltage or a higher baseline voltage, and that is when ##VO will go mainstream. Since USB-PD goes up to 20V nominal, it would make the most sense to choose a system voltage high enough that all other voltages only need buck regulators.
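To illustrate the buck-only point: an ideal buck converter in continuous conduction has duty cycle D = Vout/Vin, so a bus above every other rail means plain step-down stages everywhere, no boost needed.

```python
# Ideal buck converter, continuous conduction mode: D = Vout / Vin.
# From a 20V bus, every common rail is reachable with a step-down only.

V_IN = 20.0
for v_out in (12.0, 5.0, 3.3, 1.0):
    print(f"{v_out:>4.1f}V rail: D = {v_out / V_IN:.2f}")
```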
 
There are plenty of boards with three M.2 slots, except the 2nd or 3rd slot usually shares two HSIO lanes with SATA ports, which may limit the M.2 slot(s) to 3.0x2 depending on which SATA ports are used.
Yep, but something usually ends up being disabled, or its performance reduced in some way. Not really an ideal solution.
 
This might make sense if you're using all laptop-level parts or designed an entire platform around it.

But most people I know have a lot more parts in their systems, and moving the 5V & 3.3V converters to the MoBo makes zero sense.

And my idle is ~216W, so ATX 12VO wouldn't do squat for me.

+ PSU Power Rails:
+12.0V = MainBoard / CPU / GPU / VRM / Fans / PCIe / Molex Power Plug / SATA Power Plug
+ 5.0V = HDD / ODD / SATA SSDs / VRM / RGB / USB / Molex Power Plug / SATA Power Plug
+ 3.3V = RAM / Various Logic Chips / M.2 / PCIe / SATA Power Plug

Moving the 5.0V & 3.3V to the MoBo doesn't solve that many issues, and for most average users who have a random assortment of hardware in their PC, including older stuff, this won't really help since idle is usually way past 100W.

80 Plus Titanium already calls for 90% efficiency @ 10% load.
80 Plus can always create new tiers above Titanium that require 90% or better efficiency at the 10% load factor.

Also, you're creating more complications for MoBo makers by having them perform the 5.0V & 3.3V DC-to-DC power conversion.
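For context on why 12VO gains look small at that kind of idle (the efficiency figures here are illustrative, not measurements):

```python
# At a 216W DC idle load, even a sizeable PSU efficiency bump only moves
# wall draw by ~10W. Efficiencies below are illustrative assumptions.

dc_load_w = 216
for eff in (0.90, 0.94):
    print(f"{eff:.0%} efficient PSU: {dc_load_w / eff:.0f}W at the wall")  # 240W vs 230W
```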
 
Last edited:
There is a thing called 'progress' which calls for the eventual replacement of stuff that has outlived its usefulness. The ATX standard was crafted at a time when most high-powered chips could still be reasonably accommodated by linearly regulating the voltage from the nearest appropriate supply rail. Now that nearly everything except motors runs at ~1V and requires VRMs powered from 12V, intermediate supply rails are nearly obsolete, just like ATX's -12V and -5V lines that have been dropped from the spec. Don't be surprised if the ATX, PCIe and other specs get updated to flag the 3.3V and 5V pins as obsolete in the coming years to pave the way.

Something like 12VO may take a while to phase in but is inevitable, just like the move from XT/AT PSUs to ATX, which enabled sleep mode and software-controlled on/off. At some point 5-10 years from now, most primary IO ports will be Type-C or a future connector with either a programmable bus voltage or a higher baseline voltage, and that is when ##VO will go mainstream. Since USB-PD goes up to 20V nominal, it would make the most sense to choose a system voltage high enough that all other voltages only need buck regulators.

Then let progress come from those who are actually making it. Intel is asking for this to get idle power efficiency gains for a processor design they haven't even brought to market yet. There might be a benefit down the line, but not with the kind of desktop systems available right now, including from AMD. If Intel wants to change that, great... lead by example. When they (or anyone else for that matter) have a product that can really take advantage of 12VO benefits, I'd be more likely to listen. Also not convinced moving power regulation to the mobo is a good idea. DC-to-DC converters mean more complexity, more heat, and more failure.
 
When they (or anyone else for that matter) have a product that can really take advantage of 12VO benefits, I'd be more likely to listen. Also not convinced moving power regulation to the mobo is a good idea. DC-to-DC converters mean more complexity, more heat, and more failure.
Modern ATX systems are already pretty close to 12VO: all of the non-trivial loads use point-of-load regulators powered from 12V, so 12VO is nothing more than the final stretch to ditch legacy cables.

What component in your PC works directly with 3.3V or 5V? Practically none, since virtually all modern chips are sub-2V, so even if an SSD, PCIe card or USB device draws current from 5V, that 5V is only feeding another point-of-load regulator that takes it down to 0.9/1/1.1/1.2/1.5/etc. volts. If you closely examine your motherboard, you will find well over a dozen DC-DC converters for all sorts of support circuitry. Take a look at an HDD: you will find 4-5 more DC-DC converters. Look at an SSD: 3-4 more on there. GPU? Around a dozen for all of the support rails responsible for making the PCIe, GDDR, display outputs, etc. work, besides the obvious Vcore and Vgddr ones. The supply rail count goes up very quickly with those highly-integrated multi-rail controllers where low-power rails can have little more than a 0402 inductor and capacitor on them. One of those and your DC-DC converter total can go up by 5-10. All in all, your system may already contain around 100 DC-DC converters.

Nearly everything uses point-of-load DC-DC converters. Even DDR5 brings 12V DC-DC converters on-DIMM for improved voltage regulation performance and efficiency; there go 3-4 extra DC-DC converters (Vccio, Vterm, Vpll, Vram, possibly more) per DIMM.

If there ever was a time to question the reliability of reasonably well-made DC-DC converters, I believe we are way past it.
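If you want to sanity-check that "around 100" ballpark, here's a quick tally using the per-component estimates above plus some assumed counts for everything not itemized:

```python
# Ballpark tally of per-component DC-DC converter counts. The first four
# entries are the post's own estimates; the rest are assumptions.

parts = {
    "motherboard support circuitry": 15,   # "well over a dozen"
    "HDD": 5,                              # "4-5 more"
    "SSD": 4,                              # "3-4 more"
    "GPU": 12,                             # "around a dozen"
    "multi-rail controllers": 3 * 8,       # assumed: 3 controllers, ~8 rails each
    "CPU/SoC VRM phases": 14,              # assumed for a midrange board
    "DIMM/PCH/misc rails": 10,             # assumed
}
print(f"ballpark total: ~{sum(parts.values())} DC-DC converters")
```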
 
I think there are many points here that are not understood.

1. How is the motherboard going to step down the voltage? Are they going to choose an inefficient method like voltage dividers? That would be better for space, but you get immediate losses in the resistors. Otherwise, motherboards are all of a sudden going to have to house transformers, which are going to spew electromagnetic interference all over the motherboard and surrounding components. Or are they planning to dephase the voltage to try to split it up? That could spell disaster, as they already split phases within the regulator section to power the complex out-of-phase circuitry.

2. 3.3V is the benchmark threshold for voltage efficiency loss. Voltages around and under 3.3V suffer less loss from the electromagnetic induction caused in all wires as a response to current passing.

3. The motherboard is going to need more thermal protection, as the average voltages moving around the system are going to be higher until they hit the transformer that drops them to 3.3V.

4. As technology shrinks transistor size, the voltages required to switch the FETs can be designed to drop, requiring less voltage and more current while the power envelope remains the same; alternatively, more 3.3V pins could be distributed to the board so that it works like a signal extender, making voltage-shrinking tech more viable.

5. The 12 volts were mostly needed for HDDs with their spinning platters. There are other uses within some of the subsystems, but if everything is designed to use less voltage now, it may increase the wiring, but it will decrease the size.

6. Phones are a typical example of running from a 5V-only supply. They drive similar tech with far fewer requirements than their desktop equivalents.

7. People need to stop asking for overdrive tech, where the power envelope is pushed to achieve performance. It gives diminishing gains, and phone tech is proof that, if we are more patient, we can have 70% less performance but massive increases in efficiency, in a world obsessed with CO2.

This move from Intel makes no sense in light of the CO2-reduction mindset. It just proves that the owners are all mouth and no action about the "green new deal", because people just don't understand the implications from an engineering standpoint.
 
Then let progress come from those who are actually making it. Intel is asking for this to get idle power efficiency gains for a processor design they haven't even brought to market yet. There might be a benefit down the line, but not with the kind of desktop systems available right now, including from AMD. If Intel wants to change that, great... lead by example. When they (or anyone else for that matter) have a product that can really take advantage of 12VO benefits, I'd be more likely to listen. Also not convinced moving power regulation to the mobo is a good idea. DC-to-DC converters mean more complexity, more heat, and more failure.

This point is very true. This argument over DC power distribution is an age-old one between Tesla and Edison, and we all know who won that one.

AC regulation is more efficient and keeps electromagnetic interference down; there are fewer feedback currents and less damage from poorly matched and terminated signal transfers. If people only understood that the gains are like those of memory modules: you may increase the speed, but the latency for stabilisation also increases. When DDR1 was in play, CAS, RAS and the other timings were 2-4; now they are 20-odd. So yes, the modules work faster, but it is marginal. Get the calculator out and you will find that the new RAM is only a few factors faster, not the margin the raw numbers suggest.
The whole point of DIMMs over SIMMs in the first place was to have the memory modules as perfectly matched pairs. Why don't they start making QIMMs and gain speed through stability without the massive power-envelope increase?
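To put that latency point in concrete numbers (example parts assumed for illustration; for DDR, the command clock is half the MT/s data rate):

```python
# Absolute CAS latency in ns barely moved even as CAS cycle counts grew
# several-fold. Example parts below are assumptions, not from the thread.

def cas_ns(data_rate_mt_s: int, cl: int) -> float:
    """CAS latency in ns = CL cycles / command clock (half the data rate)."""
    return cl / (data_rate_mt_s / 2) * 1000

print(f"DDR-400   CL3:  {cas_ns(400, 3):.1f} ns")    # 15.0 ns
print(f"DDR4-3200 CL16: {cas_ns(3200, 16):.1f} ns")  # 10.0 ns
```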
 
Modern ATX systems are already pretty close to 12VO: all of the non-trivial loads use point-of-load regulators powered from 12V, so 12VO is nothing more than the final stretch to ditch legacy cables.

What component in your PC works directly with 3.3V or 5V? Practically none, since virtually all modern chips are sub-2V, so even if an SSD, PCIe card or USB device draws current from 5V, that 5V is only feeding another point-of-load regulator that takes it down to 0.9/1/1.1/1.2/1.5/etc. volts. If you closely examine your motherboard, you will find well over a dozen DC-DC converters for all sorts of support circuitry. Take a look at an HDD: you will find 4-5 more DC-DC converters. Look at an SSD: 3-4 more on there. GPU? Around a dozen for all of the support rails responsible for making the PCIe, GDDR, display outputs, etc. work, besides the obvious Vcore and Vgddr ones. The supply rail count goes up very quickly with those highly-integrated multi-rail controllers where low-power rails can have little more than a 0402 inductor and capacitor on them. One of those and your DC-DC converter total can go up by 5-10. All in all, your system may already contain around 100 DC-DC converters.

Nearly everything uses point-of-load DC-DC converters. Even DDR5 brings 12V DC-DC converters on-DIMM for improved voltage regulation performance and efficiency; there go 3-4 extra DC-DC converters (Vccio, Vterm, Vpll, Vram, possibly more) per DIMM.

If there ever was a time to question the reliability of reasonably well-made DC-DC converters, I believe we are way past it.

The core of the CPU may use the voltages you suggest, because the core is shielded with ground planing and is virtually free of interference. The I/O is another story: the CPU core dropped from 5V to 3.3V around 2000, but most of the I/O circuitry still remains at 3.3V.

This is not a design flaw; it is for the signalling to be reliable. When the signal voltage is too low, the SNR decreases wherever shielding is not enveloping, and 3.3V is about as low as they can go to keep the SNR as high as it needs to be to communicate externally.
 
Last edited:
This sounds like BTX all over again.

Or is it simply that, when the technical talk comes out, you can't follow it?

What I see from the post above is conjecture based on some high-level ideas, not really dealing with current tech in today's climate of environmental and consumer demands (which pull in opposite directions).
 
most of the I/O circuitry still remains at 3.3V

This is not a design flaw; it is for the signalling to be reliable. When the signal voltage is too low, the SNR decreases wherever shielding is not enveloping, and 3.3V is about as low as they can go to keep the SNR as high as it needs to be to communicate externally.
LOL, no. The last time a major IO used 3.3V was PCI non-e. All modern serial IOs operate on 100-400mV differential signaling with a 0.9-1.5V DC bias. The only major parallel interface left is DRAM, and the last time that worked at 3.3V was PC133 SDRAM over 20 years ago; DDR1 dropped memory voltages to 2.5V, while standard DDR4 works at 1.2V and DDR5 pushes that down to 1.1V.

Although USB1/2 may have a data pin bias voltage up to 3.3V, the actual 1.5-480Mbps serial data is encoded as a 100-400mV voltage difference between D+ and D-. 0-3.3V is only used for single-ended low-speed out-of-band signaling like hot-plug notification and waking from sleep. On USB3.x, the 5+Gbps pairs operate on a 1.5V common-mode bias voltage and a 100-400mV differential swing depending on cable length.