News Report: Intel Pushing ATX12VO Power Connector for 12th Gen Alder Lake CPUs


MasterMadBones

Distinguished
Intel is going at this the wrong way [...]
I absolutely agree. It makes the PSUs more efficient, which helps with regulations in some parts of the world, but the conversion still has to happen somewhere. Overall, this won't change system efficiency at all.

I'm worried for ITX motherboards because there is a good chance they would have to sacrifice features to comply with ATX12VO. I think that's only acceptable if there is a smaller PSU standard (smaller than SFX, but with more power than, say, most Flex ATX units) to offset the difference. Or perhaps mini-DTX will end up being more popular, which I suppose is fine.

EDIT: to clarify: I really like the idea of only having one 10-pin cable going into the motherboard, but until we can be absolutely sure that nothing requires 3.3V or 5V at all, it likely increases motherboard complexity and does nothing to improve efficiency.
 

InvalidError

Titan
Moderator
I absolutely agree. It makes the PSUs more efficient, which helps with regulations in some parts of the world, but the conversion still has to happen somewhere. Overall, this won't change system efficiency at all.
It eliminates quiescent power consumption for obsolete power rails as the number of devices connected to them approaches zero. Shaving 4-5W there makes a significant difference when system idle power is something like 30-40W. As I have written earlier, nearly everything is already powered by point-of-load regulators, so I wouldn't worry too much about 12VO adding that many since most are already there. USB ports that support USB-PD need per-port DC-DC converters to accommodate different voltages and current limits, which is where all USB ports will end up eventually.
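As a back-of-the-envelope sketch using the figures above (nothing measured, just the 4-5W set against a 30-40W idle):

    # Share of idle power freed by dropping legacy-rail quiescent losses (figures from above).
    idle_power_w = 35.0    # assumed system idle draw, middle of the 30-40W range
    rail_overhead_w = 4.5  # assumed 3.3V/5V quiescent overhead, middle of the 4-5W range
    print(f"Idle power reduction: {rail_overhead_w / idle_power_w:.0%}")  # ~13%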

As for ITX motherboards, there is plenty of space on the back side to cram more components if needed, so I wouldn't worry about running out of space there. It will likely make them more expensive, though, since the even denser double-sided component load may require another two layers.
 
  • Like
Reactions: TJ Hooker

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
LOL, no. The last time a major IO used 3.3V was PCI (non-e). All modern serial IOs operate on 100-400mV differential signaling with a 0.9-1.5V DC bias. The only major parallel interface left is DRAM, and the last time that worked at 3.3V was PC133 SDR-DRAM over 20 years ago; DDR1 dropped memory voltage to 2.5V, while standard DDR4 works at 1.2V and DDR5 pushes that down to 1.1V.

Although USB1/2 may have a data-pin bias voltage up to 3.3V, the actual 1.5-480Mbps serial data is encoded as a 100-400mV voltage difference between D+ and D-. 0-3.3V is only used for single-ended, low-speed, out-of-band signaling like hot-plug notification and waking from sleep. On USB3.x, the 5+Gbps pairs operate on a 1.5V common-mode bias voltage and a 100-400mV differential swing depending on cable length.

And there he goes again. So the M.2s and the SSDs don't use the 3.3V, lol. And the I/O doesn't draw from the 3.3V, and the CPU doesn't draw... Good grief. The problem with the 12V power rail is the current draw and the power losses. The 3.3V rail only sees losses from the transformer primary and secondary systems, but keeps the heat damage and the large electromagnetic induction away from the phase splitters by the CPU.

This argument has been running for ages (as you can see), and just listen to all the complaints. The guys at TechPowerUp are actually a lot brighter in general than the posters on THW. They simply know their electronics and actually provide the answers needed.

Not only is Intel jumping on the bandwagon that Fujitsu has already been riding for a number of years, they are not learning from its errors. It is better to keep the power losses tied to the quality of the PSU that you buy. This is another way to capitalise on the market, as the Tier 1 companies can now push the other PSU makers out of the market.

Take a look at all the electronics-based problems with running four 12V lines out of the same transformer; it is largely a termination problem, as the transformer has to match the impedance of the load for best efficiency. Using one transformer to run four lines for the sake of simplicity is actually a problem in itself.

Fujitsu seemed to be using this new approach to lock repairs to itself, but now it seems like greedy Intel is looking to make up its losses to AMD by attacking the PSU makers.

And anyway, like it says in the article, it has power savings at idle and low power, since it is not powering up the M.2s, HDDs, and SSDs from the secondary transformers, but once the power draw goes up, the efficiency diminishes. Maybe Intel wants to offset the heat issues its new generation creates, but creating more heat on the PCB is asking for trouble.
 
  • Like
Reactions: Soaptrail

InvalidError

Titan
Moderator
And there he goes again. So the M.2s and the SSDs don't use the 3.3V, lol. And the I/O doesn't draw from the 3.3V, and the CPU doesn't draw... Good grief.
Everything that is on the motherboard that isn't already running its PoL DC-DC converter from 12V can easily be re-designed to take power from 12V and is a non-issue. Legacy IOs will eventually get replaced by something else and whatever replaces them will likely be 12VO-ready just like DDR5.
 
  • Like
Reactions: TJ Hooker

BillyBuerger

Reputable
Jan 12, 2021
168
87
4,660
..
And my idle is ~ 216W, so ATX 12VO wouldn't do squat for me
Holy crap, what the hell is your PC doing to use 216W idle? I remember when our old servers would use something like that, before CPUs clocked down at idle. Hell, even some old P4 Extreme Edition CPUs, which were crazy inefficient, still used "only" 105W idle. It's very easy to have a powerful PC that still idles around 50W, and a normal PC can easily be under 20W idle. So maybe you are doing something that eats up a lot of power doing nothing, but the vast majority of PCs don't and could see real improvements from this.
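For perspective, here is what that idle gap costs over a year of 24/7 running (the electricity price is an assumed $0.13/kWh, adjust for your region):

    # Annual energy difference between a 216W-idle and a 50W-idle system running 24/7.
    high_idle_w, low_idle_w = 216, 50
    hours_per_year = 24 * 365
    delta_kwh = (high_idle_w - low_idle_w) * hours_per_year / 1000
    price_per_kwh = 0.13  # assumed electricity rate in $/kWh
    print(f"{delta_kwh:.0f} kWh/year, about ${delta_kwh * price_per_kwh:.0f}/year")
    # ~1454 kWh/year, about $189/year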
 

InvalidError

Titan
Moderator
It's not like modular PSUs fixed the issue of cables being messy.
Kind of hard to have clean wiring when using 2.5/3.5/5.25" drives requires separate power and data cables, with GPUs also requiring external power of their own. That's another reason why everything could use a 12VO refresh for consolidated power and data. A 12VO PCIe spec refresh could add a dozen 12V pins at the end of the x16 slot to let GPUs draw somewhere in the neighborhood of 250W from the motherboard, eliminating the need for a separate 12V AUX cable for most mainstream GPUs, though this would require 2-3 more 12V lines on the motherboard's 12VO connector.
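A quick sanity check on those numbers (a sketch, assuming the load is shared evenly across the extra pins):

    # Current needed for 250W at 12V, split across a dozen extra slot pins.
    power_w, voltage_v, extra_pins = 250, 12, 12
    total_current_a = power_w / voltage_v
    print(f"{total_current_a:.1f} A total, {total_current_a / extra_pins:.2f} A per pin")
    # ~20.8 A total, ~1.74 A per pin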
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
Everything that is on the motherboard that isn't already running its PoL DC-DC converter from 12V can easily be re-designed to take power from 12V and is a non-issue. Legacy IOs will eventually get replaced by something else and whatever replaces them will likely be 12VO-ready just like DDR5.

PoL DC-DC converters are about as wasteful as a typical PI or T attenuator. These things use components that are typically only good under half load, which is shocking when you look at the enthusiast market: those power supplies run at 3/4 load when paired with newer components. They are trying to sell a component on half its metric, and that is low-power mode, and then there are losses of 10-20 percent under half load. Full load, let's not talk about. Bad, bad, bad, bad.

Poor idea from a team that can't seem to sort out its 10nm issues and needs IBM to save its butt.

I/Os are still driven by the manufacturer of the CPU, as its bus, like its interconnect, is half the hype. So good luck getting rid of legacy.

Fujitsu started something, it didn't go well, and now Intel wants a go.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
Holy crap, what the hell is your PC doing to use 216W idle? I remember when our old servers would use something like that, before CPUs clocked down at idle. Hell, even some old P4 Extreme Edition CPUs, which were crazy inefficient, still used "only" 105W idle. It's very easy to have a powerful PC that still idles around 50W, and a normal PC can easily be under 20W idle. So maybe you are doing something that eats up a lot of power doing nothing, but the vast majority of PCs don't and could see real improvements from this.
I have a ton of HDDs in my system; even idling, they consume some power.

And I have other stuff in there as well.

Plus, I run my system 24/7/365 as my personal server.

I refuse to put excessive wear & tear on the circuit paths of my electronics through the thermal cycles that on/off operation incurs.

Ergo, I always run my systems in 24/7/365 always-on mode.

And the power consumption is fine, I've dealt with it and it's largely trivial in the grand scheme of things.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
What component in your PC works directly with 3.3V or 5V?

+ PSU Power Rails:
+12.0V = MainBoard / CPU / GPU / VRM / Fans / PCIe / Molex Power Plug / SATA Power Plug
+ 5.0V = HDD / ODD / SATA SSDs / VRM / RGB / USB / Molex Power Plug / SATA Power Plug
+ 3.3V = RAM / Various Logic Chips / M.2 / PCIe / SATA Power Plug
 

InvalidError

Titan
Moderator
+ PSU Power Rails:
+12.0V = MainBoard / CPU / GPU / VRM / Fans / PCIe / Molex Power Plug / SATA Power Plug
+ 5.0V = HDD / ODD / SATA SSDs / VRM / RGB / USB / Molex Power Plug / SATA Power Plug
+ 3.3V = RAM / Various Logic Chips / M.2 / PCIe / SATA Power Plug
I meant "component" in the sense of discrete chips, not interfaces. The 3.3V on M.2 slots feed the SSDs' VRM which provides all of the different voltages the different chips on the SSD need to operate. The 3.3V (optional) and 5V on SATA connectors feed the VRMs that feed the rest of electronics on whatever the drive or other device is. The RAM VRM can just as easily run from any available power rail and DDR5 brings the RAM VRM on-DIMM with 12V input, making it the first 12VO-ready interface.

My point was that DC-DC regulators / VRMs with related support components are pretty much the only essential chips left that directly touch raw PSU output voltages and they can all be designed for 12VO just as easily as any other voltage within reason.

As I wrote earlier, I don't expect 12VO to gain much traction until more interfaces have their next major revision and go 12VO along the way. Just like the last time interfaces got phased out, there will be no shortage of backward-compatibility dongles for people who want to continue using their old stuff in newer systems.
 
  • Like
Reactions: TJ Hooker

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
I meant "component" in the sense of discrete chips, not interfaces. The 3.3V on M.2 slots feed the SSDs' VRM which provides all of the different voltages the different chips on the SSD need to operate. The 3.3V (optional) and 5V on SATA connectors feed the VRMs that feed the rest of electronics on whatever the drive or other device is. The RAM VRM can just as easily run from any available power rail and DDR5 brings the RAM VRM on-DIMM with 12V input, making it the first 12VO-ready interface.

My point was that DC-DC regulators / VRMs with related support components are pretty much the only essential chips left that directly touch raw PSU output voltages and they can all be designed for 12VO just as easily as any other voltage within reason.

As I wrote earlier, I don't expect 12VO to gain much traction until more interfaces have their next major revision and go 12VO along the way. Just like the last time interfaces got phased out, there will be no shortage of backward-compatibility dongles for people who want to continue using their old stuff in newer systems.
The only way I see 12VO gaining any sort of traction is if we start seeing LapTop MainBoards & Parts being put into DeskTop systems more often and they need their own PSU.

Then it makes sense to use 12VO at that point for a new MoBo standard that isn't ATX based.

Let's say "Nano-ITX" for LapTop MoBo as the new common standard.

That's 120mm x 120mm with room for a MXM Video Card attached as the primary interface.

MXM Video Cards are 105mm x 82mm with the 82mm side having the interface pins to slot into the MXM Port on the side of the Nano-ITX board.

That would allow Ultra SFF in a very tiny, thin, tall & slender chassis package.

Other than that, I expect 12VO to be about as long lasting as BTX was.
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
Typical modern synchronous DC-DC converters are 90-95% efficient across most of their operating range. PI and T are filter network topologies; they do not provide any voltage conversion or regulation on their own.

PI and T filters handle their own signaling in an efficient way and keep loads balanced to prevent signal bouncing. Come on, you can't just read an article and think you understand it. When you have wasteful ICs like voltage regulators that work in DC, you are typically looking at heavy heat waste and low efficiency.

The 93% claim for a DC-DC converter is about as laughable as Intel's benchmarking stats. The 93% only comes from a low load demand in lab conditions, and as every engineer knows, those numbers are never achieved by the lay engineer. They are a best-case metric based on perfect load and phase test equipment.

Run the DC-DC converter through a simple efficiency test and send me the results. You will likely get 90% under a small load, and that is when the component is cool (20°C); add a bit of heat to 30°C and, oh dear, that metric is down to 85%.
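Do the arithmetic yourself (a sketch with an assumed 100W load):

    # Heat dissipated by a converter at two efficiency points, same output load.
    p_out_w = 100.0  # assumed output power
    for efficiency in (0.90, 0.85):
        p_loss_w = p_out_w * (1 / efficiency - 1)  # input power minus output power
        print(f"{efficiency:.0%} efficient -> {p_loss_w:.1f} W of heat")
    # 90% -> 11.1 W of heat, 85% -> 17.6 W: five points of efficiency is ~60% more heat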

DC-DC converters are now used in preference to the linear line regulators you find in power supplies, which are lazy electronics in themselves. Compared to a well-designed Zener diode bridge, the DC-DC converters are near useless. They are only used in small devices where batteries can drain current as quickly as needed but the voltage must be kept the same. They save space in phones etc., that is it! It would be near impossible to use large solenoids and ferrous cores in a phone, so DC-DC converters use a storage technique with capacitors and switching frequency to create a new voltage, but a timing circuit (or a tiny crystal) is needed to accomplish this.

Simply put, this is a technique that is not a good idea. The response from engineers about Fujitsu is enough to tell me everything I need... unless it's a conspiracy. And yes, you will find a PI or T filter in most modular electronics connecting two circuits with varying loads. And since digital electronics is all modular and mixes various different ICs, CMOS and TTL alike, they are required.
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
The only way I see 12VO gaining any sort of traction is if we start seeing LapTop MainBoards & Parts being put into DeskTop systems more often and they need their own PSU.

Then it makes sense to use 12VO at that point for a new MoBo standard that isn't ATX based.

Let's say "Nano-ITX" for LapTop MoBo as the new common standard.

That's 120mm x 120mm with room for a MXM Video Card attached as the primary interface.

MXM Video Cards are 105mm x 82mm with the 82mm side having the interface pins to slot into the MXM Port on the side of the Nano-ITX board.

That would allow Ultra SFF in a very tiny, thin, tall & slender chassis package.

Other than that, I expect 12VO to be about as long lasting as BTX was.

Fujitsu has already been doing it for years.

Sadly, if Intel says go, then the companies will GO.
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
God, I hope not.

I hope that Intel's reputation is enough of a shambles that companies won't blindly follow through with Intel's requests.

We only need to look at Intel's history for all the numerous failed projects & ideas.

I am not sure of the logic of their loyal fan base. They will continue to buy Intel with shouts of "hail Intel and Hail THW for their loyal support to Intel!"

Intel is being bailed out by IBM's research into 4nm lithography. With all the R&D spending from Intel, you would expect more, but you get less. This shows their ethos and the type of engineers they employ. They stole AMD tech back in 2003-2005 and also took to unfair practices with their compiler over many years. But their loyal fanbase still cries "Hail Intel!"

Intel comes out with gimmicks that eventually turn out to be shortcuts and full of security flaws.

AMD, on the other hand, with a fraction of the R&D, produces tech beyond Intel's; they not only kept the company going while 10% behind for so many years (rather than folding!) but went on to beat Intel's buttocks into the ground.

This tech is another illogical step that will happen. Let's hope that AMD just keeps to the multi-voltage approach. Intel will then continue to heat up components with their mad approach to technology that makes little or no sense.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
I still remember Intel's Itanic and its Itanium approach to 64-bit computing.

The failed BTX standard.

Intel's Pentium 4 "NetBurst" architecture, with its ever-increasing pipeline stages for more MHz & heat.

Now we have Rocket Lake to look at, and then point at and laugh at!
 

InvalidError

Titan
Moderator
God, I hope not.
ATX-based systems have been slowly migrating toward 12VO for the last 20 years, with the ATX12V/EPS12V and PCIe AUX connectors being the first two steps. DDR5 is only the newest domino to fall in the march toward 12VO. USB may be next, with Type-C requiring per-port DC-DC converters for full functionality, and you can fully expect this to become the norm as motherboards integrate more Type-C USB4/TB4 ports.

Point-of-load regulators are more efficient than large centralized converters, and 12V distribution is more power-efficient than 3.3V/5V distribution. Also, having regulators closer to their loads allows tighter voltage regulation and better noise immunity, which is becoming increasingly important as everything gets faster and needs to meet tighter tolerances all-around.
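To put rough numbers on the distribution point (a sketch; the 60W load and the 20mΩ round-trip cable-plus-connector resistance are assumptions):

    # I^2*R wiring loss delivering the same power at different distribution voltages.
    power_w = 60.0               # assumed load power
    cable_resistance_ohm = 0.02  # assumed round-trip wiring/connector resistance
    for voltage_v in (12.0, 5.0, 3.3):
        current_a = power_w / voltage_v
        loss_w = current_a ** 2 * cable_resistance_ohm
        print(f"{voltage_v:4.1f} V: {current_a:5.1f} A, {loss_w:.2f} W lost in the cable")
    # 12.0 V: 5.0 A, 0.50 W / 5.0 V: 12.0 A, 2.88 W / 3.3 V: 18.2 A, 6.61 W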

Most modern interfaces were designed when DC-DC converters were still relatively large and expensive due to low integration, relatively high Rdson and high switching losses. Now we have QFN-size PMICs that can pack high-efficiency multi-channel DC-DC converters operating at 1-3MHz in little more than a square centimeter of board space to power complex local loads like DDR5 DIMMs from 12V input voltage.

While progress toward 12VO may be slow, it is inevitable.
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
I still remember Intel's Itanic and its Itanium approach to 64-bit computing.

The failed BTX standard.

Intel's Pentium 4 "NetBurst" architecture, with its ever-increasing pipeline stages for more MHz & heat.

Now we have Rocket Lake to look at, and then point at and laugh at!

This is exactly right. Intel tried to increase frequency at the cost of bandwidth and in the end failed. Rambus had its advantages, but could only realize them once the lanes were reduced to 16-bit, a quarter of the DIMM's capabilities. Their ethos relied on the 32-bit architecture staying around, but regardless of that, DIMMs could use the same SIMM-style matched pairing in 32-bit mode, and RAMBUS needed two modules of 16-bit addressing to access the same data that a DIMM could fetch in one module at 32-bit, and still one at 64-bit.

InvalidError keeps harping on about DC-DC converters being efficient, which is laughable considering any datasheet you look at for any DC-DC converter plays the 93% card under ideal conditions. When I had a look at the graphs at the back, I saw that it can only perform that way at 20°C ambient. But since Intel chips are furnaces, and desktops run processors at max thermals in overdrive mode (max tolerance), we can expect poor efficiency from every DC-DC converter, which has been reported by engineers testing the Fujitsu rigs. Even at their relatively modest 1-3MHz clocking frequency, they still pass a lot of power through the chip, which invariably creates heat and causes efficiency loss.

To cut the story short: as Tesla proved to Edison, AC-AC regulation will always trump DC solutions (providing they don't use cheap autotransformers) and will yield better results, as the cores the solenoids are wrapped around are excellent heatsinks, and if properly laminated to reduce eddy currents, you can achieve 90%+ even on modest loads. But more power means more heat and always drops efficiency.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
https://en.wikipedia.org/wiki/DDR5_SDRAM
DDR5 DIMMs are supplied with bulk power at 12 V and management interface power at 3.3 V

DDR5 didn't get rid of 3.3 V. It's reusing the 3.3 V powerline for the management interface power.

So the role of 3.3 V changes with DDR5.

Don't count out the ATX standard just yet. A lot of the PC industry is built on the current ATX standard.

Despite what Intel may want.
 

InvalidError

Titan
Moderator
DDR5 didn't get rid of 3.3 V. It's reusing the 3.3 V powerline for the management interface power.

So the role of 3.3 V changes with DDR5.
The RCD, data buffers and DRAM chips all operate at 1.1-1.25V.

By "management" it only means reading RAM profile from the DIMM's SPD chip. That's a one-time-per-DIMM 5-10mA load for a fraction of a second once during POST each time you turn your computer on. Nothing worth having a bundle of 3.3V cables capable of carrying 40A from the PSU to the motherboard for.
 

ginthegit

Distinguished
BANNED
Nov 15, 2012
201
32
18,610
https://en.wikipedia.org/wiki/DDR5_SDRAM
DDR5 DIMMs are supplied with bulk power at 12 V and management interface power at 3.3 V

DDR5 didn't get rid of 3.3 V. It's reusing the 3.3 V powerline for the management interface power.

So the role of 3.3 V changes with DDR5.

Don't count out the ATX standard just yet. A lot of the PC industry is built on the current ATX standard.

Despite what Intel may want.

I am hoping you're right, and it will now largely rely on AMD to stick by it.