News Report: Intel Pushing ATX12VO Power Connector for 12th Gen Alder Lake CPUs

The RCD, data buffers and DRAM chip all operate at 1.1-1.25V.

By "management" it only means reading the RAM profile from the DIMM's SPD chip. That's a one-time-per-DIMM 5-10mA load for a fraction of a second during POST each time you turn your computer on. Nothing worth running a bundle of 3.3V cables capable of carrying 40A from the PSU to the motherboard for.
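For scale, here is a quick back-of-the-envelope on that SPD read load, using the rough worst-case figures from the post above (illustrative only; real SPD/I2C reads vary by platform):

```python
# Ballpark energy used by a one-time SPD read at POST, using the rough
# figures above: up to 10 mA at 3.3 V for a fraction of a second.
# Illustrative only; real SPD/I2C reads vary by platform.

voltage = 3.3        # V, 3.3 V rail feeding the SPD chip
current = 0.010      # A, worst-case 10 mA read load
duration = 0.5       # s, generous estimate for "a fraction of a second"

energy_joules = voltage * current * duration
print(f"Energy per SPD read: {energy_joules * 1000:.1f} mJ")  # ~16.5 mJ
```

A few millijoules once per boot, i.e. nothing that needs a dedicated 40A-capable 3.3V bundle.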

Nonsense, the whole purpose of having the lower voltage is the higher current. This is to enable the desktop to run under the current overcharge that it does. Why have 12V modules when you can use the 3.3V rail and use capacitors to increase the voltage?

And you are still not getting the point. All DC-DC converters still have to pass all the power needed through the circuits regardless of the frequency. This creates heat and waste! The lower the voltage, the lower the electromagnetic response, and the less waste (unless it is over long distance). And these systems require a lot of power, all in current, more so than voltage. Voltage is just for regulation and to make sure that power does not exceed power ratings, solely because current is drawn as demand allows; voltage is controlled by resistive or impedance factors.

And still, using oscillators to decrease voltage is an old technology; it was something we were being taught in uni in the '90s, but this was more for creating impedance by clock-pulsing capacitors, not to drop average voltages. What happens if the timing circuit gets damaged? Do you want to risk borking every component that is not 12V?
 
Modern HVDC power transmission lines disagree.

AC only won back then because AC can work with plain iron core transformers while the technology for efficient solid-state DC-DC conversion didn't exist at the time.

Oh dear, you've been reading Wikipedia again. Lol
Where are you getting your info from?


The above link is very pro DC-to-DC power conversion; it explains (or rather doesn't really explain) the process. Now let's have a look at what these crooks are saying.

AC is produced at power stations because AC is most efficient in the form of a steam turbine, even in nuclear. Then for some reason, they are saying that it is more efficient to have a huge converter substation that goes from some unknown distance to another before having to be converted back to AC to go over long distance because of the DC bulk losses over long distance. So what they are saying is AC to DC, back to AC to go down long-distance lines, then back to DC, then to DC, then back to AC to feed houses. Sorry mate, I have always known the power companies to be full of **** when it comes to these technologies, but are you actually seeing the problem here? These large capacitive banks would be like nuclear bombs if something were to go wrong. Not sure this will ever be implemented.

These so-called DC-to-DC converters are actually the huge capacitors that they have been using for 100 years for balancing loads and synchronisation. Without them they were arcing too much power and power stations were melting. These are needed because demand fluctuates; there needs to be a place where the excess can be stored and isolated, as the power station cannot deal with demand quickly enough. Conspiracy nuts have been going on about this for years.

Sorry mate, you are barking up the wrong tree. This is just another way that these corrupt companies are trying to sell their crap while the Green New Deal will fund it.

I have wasted 2 hours of my life trying to find the numbers and field studies for these supposed systems and, surprise surprise, there are just claims and no real numbers.
 

https://www.gamersnexus.net/guides/3568-intel-atx-12vo-spec-explained-what-manufacturers-think

Nice
 
And here is the reason why they are pushing these standards! And it is the same reason why Apple is closing its list of repairers who can fix devices without voiding a warranty. It is criminal. Worse is that it is Intel's patent from back in 1995, when they were competing with Cyrix and AMD; it did not work then, and hopefully not now either.


The above gives the reason quite well.
 
Nonsense, the whole purpose of having the lower voltage is the higher current. This is to enable the desktop to run under the current overcharge that it does. Why have 12V modules when you can use the 3.3V rail and use capacitors to increase the voltage?

And you are still not getting the point. All DC-DC converters still have to pass all the power needed through the circuits regardless of the frequency. This creates heat and waste! The lower the voltage, the lower the electromagnetic response, and the less waste (unless it is over long distance). And these systems require a lot of power, all in current, more so than voltage. Voltage is just for regulation and to make sure that power does not exceed power ratings, solely because current is drawn as demand allows; voltage is controlled by resistive or impedance factors.

And still, using oscillators to decrease voltage is an old technology; it was something we were being taught in uni in the '90s, but this was more for creating impedance by clock-pulsing capacitors, not to drop average voltages. What happens if the timing circuit gets damaged? Do you want to risk borking every component that is not 12V?
Try running 100A regardless of voltage through a 24 gauge wire some time.

I'm sure you'll love the results.
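For anyone curious why that goes badly, here's the rough math. The 24 AWG resistance figure is a standard copper wire-table value, and this ignores thermal runaway, so reality is worse:

```python
# Resistive dissipation in 24 AWG copper at 100 A.
# ~84.2 mOhm/m for 24 AWG is a standard wire-table value (room temperature);
# copper resistance rises as it heats, so this is a lower bound.

resistance_per_m = 0.0842   # ohm/m, 24 AWG copper
current = 100.0             # A

power_per_m = current ** 2 * resistance_per_m   # P = I^2 * R
print(f"Dissipation: {power_per_m:.0f} W per metre of wire")  # ~842 W/m
```

Over 800 W of heat per metre of wire, independent of voltage, which is exactly why current determines wire gauge.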
 
Try running 100A regardless of voltage through a 24 gauge wire some time.

I'm sure you'll love the results.

These amp ratings are based on typical calculations and given by the manufacturer based on rounding 240V down to 3.3V. But I agree: even though the system can support such demands, the wires cannot take it, but the potential is there.

Current is however controlled by demand from the components themselves, whereas the voltage is fixed. And this is why the 3.3V line has the least losses. However, because of semiconductor losses, the 3.3V isn't enough to run all the subsystems inside a modern processor, now that so many of them have graphics on board.

Despite what power the induction-based systems can provide (in terms of current), the whole system can generally ask for 3A to 10A depending on the system setup.

Note that Intel only intends to make OEM low-power systems work off the 12VO systems, as the DC-DC converters must be placed on the motherboard, and this simply would not do for quad-SLI systems etc.

This is a niche system that Intel only intends for OEMs. That's it... It can hope for more, and good luck with that. But considering the pushback from consumers of FS and other manufacturers' OEM products about replacing these power supplies, I can see this just staying in that sphere, and if the heat becomes too much (which it is in many cases), this will meet the same dust as BTX.
 
My understanding is that most power supplies already convert from AC to 12V DC, then from there to other rails...
I know that some do (for all rails but 5Vsb, which is handled separately so that the 12V converter can be switched off in stand by mode), but most of them?

USB is definitely going to be annoying, especially with standby voltages so your keyboard can still wake your PC, ...
Yes, the 5Vsb (which can also be used to charge USB devices when the computer is off) will require the PSU to stay on even when the computer is off.

Intel is going at this the wrong way IMO: if it really wanted to make 12VO the default option in the future, it should start with...
I agree with this!

If you are going to screw around with long-standing power supply standards, may as well go all the way and convert everything to 20VO nominal for another 5% extra efficiency...
Why settle with 20V when you can have 24V or more? And why require a 10 pin connector when just two pins are sufficient?

As I understand it, Intel sticks to 12V because the PSU will also need to have separate PCIe cables to the video card and (probably) also P4/EPS cable(s) to the CPU.
SATA/molex cables for peripherals must come from the motherboard, requiring more space for those outputs...

Modern ATX systems are already pretty close to 12VO, all of the non-trivial loads use point-of-load regulators...
What component in your PC works directly with 3.3V or 5V? ...
Practically none since virtually all modern chips are sub-2V so ... feeding another point-of-load regulator...
Sure, but those components in production today can't (reliably) handle a 12V feed on the 3.3V input. As you wrote above: it will take some transition time to phase out the current generation of products in use and have them replaced with new products that can handle a higher input voltage.

1. How is the Motherboard going to step down the Voltage?
The same way mobos step down 12V to the ~1V used to feed the CPU: PWM!

6. Phones are a typical example of power from a 5V-only range. They drive similar tech with far less requirements...
??? You charge modern phones at 21V and they drive much weaker tech in terms of power requirements. What phone GPU runs at 250W?
 
Why settle with 20V when you can have 24V or more? And why require a 10 pin connector when just two pins are sufficient?

As I understand it, Intel sticks to 12V because the PSU will also need to have separate PCIe cables to the video card and (probably) also P4/EPS cable(s) to the CPU.
SATA/molex cables for peripherals must come from the motherboard, requiring more space for those outputs...
Why 20V? As I have written previously in #8: with 20V nominal and 18-25V input tolerance, you can power things directly from a laptop's 6S battery pack (25.2V peak charging voltage, 22.2V nominal, 18V over-discharge cut-off) without voltage regulation to eliminate one layer of DC-DC conversions. 20V also lines up with the nominal maximum voltage for Type-C/USB-PD.

Most changes happen gradually.
1- California mandates high-efficiency PSUs, which forces PSU manufacturers to go xxVO for pre-built "low expandability" PCs
2- the industry as a whole focuses future standards around xxVO to reduce future need for and cost of intermediate DC-DC converters
3- new standards designed around xxVO enter the market, we get PSUs that provide both ATX and xxVO connectors, in-between adapters to make ATX work with xxVO, xxVO work with ATX, motherboards with different mixes of xxVO PCIe and legacy PCIe, etc.
4- xxVO standards reach critical mass, legacy standards get abandoned by new designs, the number of legacy ports and slots on everything drops
5- legacy stuff gets dropped altogether and backward compatibility with legacy interfaces gets delegated to dongles and special adapters

It took about 10 years from PCIe's introduction for PCI slots to almost completely vanish from mainstream motherboards, so don't expect ATX to go away any time soon. For the DIY and "high expandability" markets, where PSU manufacturers don't have to meet California's requirements yet, we'll likely see ATX PSUs with both ATX/EPS12 and 12VO connectors for a while, then the ATX/EPS12 connector getting phased out but the PSU still providing 3.3V and 5V for SATA and AMP connectors until they get a proper 12VO replacement.

A migration from ATX or 12VO to 20VO would be almost exactly the same.
 
I know that some do (for all rails but 5Vsb, which is handled separately so that the 12V converter can be switched off in stand by mode), but most of them?

Yes, the 5Vsb (which can also be used to charge USB devices when the computer is off) will require the PSU to stay on even when the computer is off.
The 12VO spec does include a 12VSB rail, so we don't have to keep a 5VSB rail.

And why require a 10 pin connector when just two pins are sufficient?
For one, power distribution. The more you can spread out how much current any one cable has to carry across other cables, the better. Otherwise you have to make the cables thicker, which not only presents mechanical issues, but also cost concerns. Obviously there's a limit to how many you should spread it out to. Of course, we could just bump up the voltage, but since everything already uses 12V, we may as well stick with it for the time being.

For another, there are signals that the motherboard has to send to the PSU to switch on the other rails so the computer can "power on." In the 12VO spec, there are three other pins to handle this.
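As a sketch of the distribution point: the ~9 A per-contact rating below is an assumption typical of Mini-Fit-style terminals, not a figure from the 12VO spec, and the 300 W board is hypothetical.

```python
# Per-pin current budgeting for a hypothetical 300 W board fed at 12 V.
import math

board_power = 300.0   # W, hypothetical total board draw
voltage = 12.0        # V, single 12VO rail
pin_rating = 9.0      # A per contact (assumed, Mini-Fit-style)

total_current = board_power / voltage                 # 25 A
pins_needed = math.ceil(total_current / pin_rating)   # spread across pins
print(f"{total_current:.0f} A total -> at least {pins_needed} 12V pin pairs")
```

One pin clearly can't carry the whole load, hence multiple 12V pins plus their returns.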

??? You charge modern phones at 21V and they drive much weaker tech in terms of power requirements. What phone GPU runs at 250W?
Using higher voltages is mostly to address the previous thing I mentioned about power distribution. Considering most USB cables probably only have a safe capacity for maybe up to 3-4A, the only way to squeeze in more power is to increase the voltage. It'll step down to whatever battery charging voltage is required once it reaches the phone.
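A quick illustration of the point, using the ~3 A cable limit assumed above at a few common USB-PD voltage levels:

```python
# Power deliverable through a cable capped at ~3 A, at common USB-PD voltages.
cable_limit = 3.0    # A, assumed safe continuous current for a typical cable
for voltage in (5.0, 9.0, 20.0):
    print(f"{voltage:>4.0f} V x {cable_limit:.0f} A = {voltage * cable_limit:.0f} W")
```

Same cable, same current; only raising the voltage gets you from 15 W to 60 W.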
 
The same way mobos step down 12V to the ~1V used to feed the CPU: PWM!

PWM is not an answer to the question and certainly does not deal with the complexity of the electronics. PWM only deals with pulse-width modulation, not lessening the voltage of the signal. All PWM does is lessen the average voltage over a period of time, not reduce its actual value. So PWM is not an answer.

Do you mean PWM with a capacitive bank? This would work, as it uses the time constant of charging to control where the capacitor stops charging, giving an average voltage based on the period of charging, T=CR. However you have to factor in the natural log for the growth of T=CR. And even with that, you need to add a power regulator to keep it at a constant value, and this is what drives the heat up.

I now understand why it is limited to the OEM crap. Because the motherboard will get bigger, need to dissipate more heat, and have less headroom for power.

You charge modern phones at 21V and they drive much weaker tech in terms of power requirements. What phone GPU runs at 250W

Do you charge your phone at 21V? You must have one hell of a phone...

Mine gets charged by the typical 5V, 4A fast charger, thank you...

And though phones draw lesser power, the whole 12VO power supplies were also made for lesser power too, which is why they are set to run on OEM only at the moment.
 
PWM is not an answer to the question and certainly does not deal with the complexity of the electronics. PWM only deals with pulse-width modulation, not lessening the voltage of the signal. All PWM does is lessen the average voltage over a period of time, not reduce its actual value.
PWM in one form or another is the underlying principle of all modern efficient electrical power conversions. A switching DC-DC regulator is just a PWM controller with an LC output filter and voltage feedback.
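For the 12 V to ~1 V case above, the idealized buck relationship is just V_out = D x V_in, with the LC network averaging out the switching ripple. A minimal sketch under lossless, steady-state assumptions:

```python
# Idealized buck converter: the output is the input times the PWM duty
# cycle; the LC filter passes the DC average and attenuates the switching
# frequency. No losses modelled.

v_in = 12.0    # V, input rail
v_out = 1.0    # V, target CPU core voltage

duty_cycle = v_out / v_in   # fraction of each switching period the FET is on
print(f"Duty cycle: {duty_cycle:.1%}")  # ~8.3%
```

The feedback loop adjusts that duty cycle continuously to hold the output at its target as load changes.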
 
PWM in one form or another is the underlying principle of all modern efficient electrical power conversions. A switching DC-DC regulator is just a PWM controller with an LC output filter and voltage feedback.

And this is precisely the problem. Adding small-value inductors (mostly toroids) adds heat and noise, and they are external to the DC-DC regulator, unless you are talking about the very low-power versions that the OEMs, but not enthusiasts, can use. This is well documented as a problem by the manufacturers who have been using this method since 2011. It's not new; it has documented problems which relate to heat and size.
The efficiency of 93% is utter crap. I have looked through several datasheets of various different modules, and they all state that the max efficiency is only achieved when the load is at a set value and the heat is also at a low room-temperature value. This simply is not possible in enthusiast PCs.
To prove it, there are so many forums complaining about the PSUs on the 12VO continually breaking, because the protections for the power, now built into the motherboard, go totally overboard on the PSU to stop the motherboard getting fried.
It also appears that the size of the units for certain power envelopes will make it unwieldy for a motherboard, especially enthusiast models, which already run hot, to not risk shutting down or thermal throttling due to poor airflow.
This Green New Deal nonsense is indeed that. PWM-based systems have limits, and as the power increases, so too does the DC-DC converter, making the board size bigger and requiring more passive and active cooling systems to mitigate the new problems.

And using LC (which are not filters in the case you are referring to; they are quite the opposite, they are resonators, and we all know what happens with resonators; hell, it's part of the reason why they prefer crystal switching), it is no surprise as to why the 12VO power supplies don't last very long.
 
And this is precisely the problem. Adding small-value inductors (mostly toroids) adds heat and noise, and they are external to the DC-DC regulator, unless you are talking about the very low-power versions that the OEMs, but not enthusiasts, can use. This is well documented as a problem by the manufacturers who have been using this method since 2011. It's not new; it has documented problems which relate to heat and size.
2011? Really? Switching power supplies using LC output filters have been around for over 40 years. I may even still have a working 150W PC-XT switching PSU somewhere in my spare parts inventory.
 
2011? Really? Switching power supplies using LC output filters have been around for over 40 years. I may even still have a working 150W PC-XT switching PSU somewhere in my spare parts inventory.

They are resonators, and even if they have the action of a filter, whether RL or RC, when it is LC it is a resonator. It forces the mechanism to lock into one frequency and forces a high Q factor. One could say a perfect band-pass filter, but resonators do the opposite of filters even if they have the same type of effect. It's like saying that because the phase behaviour of an inductor is the opposite of that of a capacitor, they have the same operating dynamic; they do not, one being electrostatic and the other inductive via magnetic effect. RL and RC are filters with low Q factors to allow filtering of a range of frequencies. And band-pass filters made properly for hi-fi systems have deliberately low Q factors and don't need to have both L and C components, whether passive or active.

150W, wow. So now you are just proving my point that this whole article is misleading. These DC-to-DC converters are for OEM systems due to the low power envelope, knowing that when the wattage gets too high, as I have explained 5 times already, the efficiency drops drastically.

When you look through datasheets, you get all these claims of efficiency; then you go to the back of the datasheet, where all the graphs are that show temperature effects on the device, and all of a sudden you get the idea that there are limiting factors to the claim. And a bit like politics, it's not hidden; it's just presented in a way that the standard technician and low-level engineer cannot understand.

And yes, 1995 was the date that Intel filed the patent for the 12VO (which is why it's being pushed). And 2011 was the first time the 12VO power supply was being made from it with Intel in mind, purely to lock out their competition or, at the very least, lock in HP, FS, etc. to Intel and deprive AMD of these big market segments.
 
When you look through datasheets, you get all these claims of efficiency; then you go to the back of the datasheet, where all the graphs are that show temperature effects on the device, and all of a sudden you get the idea that there are limiting factors to the claim.
Those efficiencies are very much attainable:
https://www.tomshardware.com/reviews/automotive-usb-adapters-tear-down,4705-5.html
From 1A to 3.7A continuous, the LDNIO adapter manages to deliver 92+% efficiency using an all-integrated switching regulator with synchronous rectification. Power stages with integrated drivers and all-integrated switching regulators have improved even more since then thanks to all the demand for USB-PD, QuickCharge, EVs, etc. 92+% efficiency is nothing exceptional with modern components.
 
150W, wow. So now you are just proving my point that this whole article is misleading. These DC-to-DC converters are for OEM systems due to the low power envelope, knowing that when the wattage gets too high, as I have explained 5 times already, the efficiency drops drastically.
All computer power supplies employ DC-DC conversion (given that the first thing that happens to the input voltage is rectification to DC), and high end ones can get ~90+% efficiency all the way up to 1600W. And of course you could design an SMPS that can deliver far more than that efficiently if you wanted to, but there's not really any point for use in consumer PCs.

I mean, obviously nobody is going to put a 1600W converter on a regular PC motherboard. But it seems like you're arguing that DC-DC converters in general are only capable of providing low power (efficiently), which is demonstrably not true.
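To put that efficiency figure in waste-heat terms, illustrative arithmetic only:

```python
# Waste heat of a ~90%-efficient conversion delivering 1600 W.
output_power = 1600.0   # W delivered to the load
efficiency = 0.90       # assumed overall conversion efficiency

input_power = output_power / efficiency
waste_heat = input_power - output_power
print(f"Input {input_power:.0f} W, waste heat {waste_heat:.0f} W")
```

Roughly 180 W of heat from a 1600 W supply, which is entirely manageable with a fan, and exactly why high-power SMPS designs are routine.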
 
They are resonators, and even if they have the action of a filter, whether RL or RC, when it is LC it is a resonator. It forces the mechanism to lock into one frequency and forces a high Q factor. One could say a perfect band-pass filter, but resonators do the opposite of filters even if they have the same type of effect. It's like saying that because the phase behaviour of an inductor is the opposite of that of a capacitor, they have the same operating dynamic; they do not, one being electrostatic and the other inductive via magnetic effect. RL and RC are filters with low Q factors to allow filtering of a range of frequencies. And band-pass filters made properly for hi-fi systems have deliberately low Q factors and don't need to have both L and C components, whether passive or active.
No, the simple fact that it's an LC circuit does not mean it's being employed as a resonator. In the case of an SMPS you want to pass DC, ergo the inductor and cap are configured as a low pass filter.
 
No, the simple fact that it's an LC circuit does not mean it's being employed as a resonator.
It would indeed be quite difficult to make an LC circuit resonate when it has diodes or synchronous rectifiers on the input side allowing energy flow through the inductor to go only one way. Even if it could go both ways, the LC resonant frequency of a 100+uH choke and 2000+uF of bulk output capacitance (5000+uF if you add capacitors on the motherboard's and other components' 12V rail) would be well under 400Hz vs 100+kHz for the input square wave of a (double-)forward converter or rectified sine wave of a quasi-resonant converter, so the resonant frequency is about three decades lower than stimulation frequency, not going to resonate much if at all. And then all of this gets heavily damped by output load.
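The resonance numbers above, worked out explicitly (component values are the ones quoted in the post):

```python
# LC resonant frequency f = 1 / (2*pi*sqrt(L*C)) for the values above.
import math

L = 100e-6                      # H, output choke (100+ uH)
for C in (2000e-6, 5000e-6):    # F, bulk output capacitance
    f_res = 1 / (2 * math.pi * math.sqrt(L * C))
    print(f"C = {C * 1e6:.0f} uF -> resonance at {f_res:.0f} Hz")
# Both results sit well under 400 Hz, roughly three decades below a
# 100+ kHz switching frequency, so the filter is never driven at resonance.
```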
 
No, the simple fact that it's an LC circuit does not mean it's being employed as a resonator. In the case of an SMPS you want to pass DC, ergo the inductor and cap are configured as a low pass filter.

I think you need to look at how LC circuits work with their internal current flow. The LC components simply pass current continually between them until the balance is hit. This is a resonating effect that creates a balance between the components, forcing the frequency to be a fixed value. Like I said, LC circuits generally drive themselves and force a frequency, rather than an LR or CR filter that collects other frequencies and attenuates them. Attenuation denotes a filter. Resonance denotes a resonator. Just because they have a filtering effect does not make them "not a resonator".
 
It would indeed be quite difficult to make an LC circuit resonate when it has diodes or synchronous rectifiers on the input side allowing energy flow through the inductor to go only one way. Even if it could go both ways, the LC resonant frequency of a 100+uH choke and 2000+uF of bulk output capacitance (5000+uF if you add capacitors on the motherboard's and other components' 12V rail) would be well under 400Hz vs 100+kHz for the input square wave of a (double-)forward converter or rectified sine wave of a quasi-resonant converter, so the resonant frequency is about three decades lower than stimulation frequency, not going to resonate much if at all. And then all of this gets heavily damped by output load.

And that was just complete waffle. The L and C components still impact each other; they just work very quickly with the current. I am not talking about bidirectional current here, just the interaction of electromagnetic balancing between the inductor and capacitor. It is the same reasoning as for a flywheel diode in inductors with large power passing through them. We don't involve them in the maths because their function is not considered for operation, only to stop the collapse of the magnetic field creating a reverse current causing a short circuit.

Think about it: how do the C and the L interact? The whole purpose is to balance out the phase and home in on a resonant frequency. Even a resonator passed through an op-amp works with one-way current, and the feedback (the op-amp) does the rest. But just as there is overshoot in overdriven components, so too is there in the balancing effect between the L and the C. If that is not so, please explain to me how the L and the C work to set a frequency with a high Q factor when in parallel.
 
Those efficiencies are very much attainable:
https://www.tomshardware.com/reviews/automotive-usb-adapters-tear-down,4705-5.html
From 1A to 3.7A continuous, the LDNIO adapter manages to deliver 92+% efficiency using an all-integrated switching regulator with synchronous rectification. Power stages with integrated drivers and all-integrated switching regulators have improved even more since then thanks to all the demand for USB-PD, QuickCharge, EVs, etc. 92+% efficiency is nothing exceptional with modern components.

Go on then. Try to create your own power supply using a DC-to-DC converter with these components and prove to me what you are saying! Because as soon as you involve regulators, you have heat and waste. Look at normal AC circuit regulators and you will find that one of the big culprits of the power loss, bar the ferrous core, is the regulator. But please, do the experiment yourself, send me the video, and I will believe this is possible to achieve.

And even the datasheets prove my point; they also state that the power efficiency is entirely related to the load, and in PCs, where even the processor can deactivate unused circuits inside itself, the load changes dynamically. This in itself means that the efficiency can fluctuate through the use of the power management components. Please show me a datasheet that claims otherwise, or please stop posting these claims. And I don't want to see Intel's claims; I want to see actual datasheets with all associated graphs.
 
There is no point in me building my own to prove something that has already been proven by billions of DC-DC converters in the wild like the LDNIO adapter review I linked previously.

As always, THD bias, always looking to the one or two reports that seem to support Intel's claims but ignoring the datasheets that engineers go by that prove otherwise.
If the 12VO power supplies were so good, they would have been adopted, but they are only adopted by the manufacturers that are locked into Intel chips. And from what more unbiased sites say, the 12VOs often fail because they are over-protected to stop any type of damage to the motherboard. The whole point of the normal inductive power supplies is that they are isolated and don't require complex circuitry and motherboard bulking.
There is a reason why they are OEM only, and there is a reason why they will stay that way.
 
If the 12VO power supplies were so good, they would have been adopted, but they are only adopted by the manufacturers that are locked into Intel chips.
Modern ATX PSUs provide 3.3V and 5V from DC-DC converters powered from the 12V rail, and have been doing so for nearly 10 years because group-regulated PSUs couldn't reliably cope with the large load transients of modern CPUs and GPUs. The only difference between a 12VO and a regular ATX PSU is that 12VO gets rid of those extra converters and bumps the 5VSB rail to 12VSB, which should make them more reliable: there are fewer potential points of failure and more room inside the PSU for spacing out components and improving airflow, keeping everything cooler, more efficient and reliable.