Intel's ATX v3.0 PSU Standard Has More Power for GPUs

Were there any estimates on when we might see these new PSUs hit the market, and/or info on the GPUs that require them?
My guess is that the PSU manufacturers have had the final spec for a good six months now, enough time to build their new products. Same with AMD/Nvidia for their GPUs. I suspect we will see these PSUs come out with the next GPU and CPU releases with PCIe 5.0 support.

I do like that the new spec includes continuous CPU power ratings for HEDT CPUs. That makes the PSU decision much easier for people who need those chips. It also helps Intel, since their CPUs are quite power-hungry during boost.
 

The over-power safety IC logic gets a lot more complicated and has to be a lot more accurate. The 200% power excursion for 100 µs at 10% duty cycle is a nebulous one, as I don't think there's anything that fast out there. But also, what happens when you start getting mixed spikes, say 150% mixed with 200% and 110%?
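For what it's worth, here is a rough Python sketch of how that classification might work. The (ratio, duration) pairs are the commonly reported ATX 3.0 excursion limits (200% for 100 µs down to 120% for 100 ms, at roughly 10% duty), not text quoted from the spec, and the one-second window and the `spikes_ok` function are purely illustrative assumptions:

```python
# Hypothetical sketch of excursion checking -- NOT the ATX 3.0 algorithm itself.
# The (ratio, duration) pairs are the commonly reported ATX 3.0 excursion
# limits; the 1-second window and the function below are purely illustrative.
EXCURSION_LIMITS = [   # (max power as a fraction of rated, max duration in s)
    (1.20, 100e-3),
    (1.60, 10e-3),
    (1.80, 1e-3),
    (2.00, 100e-6),
]
MAX_DUTY = 0.10        # excursions above 100% limited to 10% of the window
WINDOW_S = 1.0         # assumed averaging window for the duty-cycle check

def spikes_ok(spikes):
    """spikes: list of (power_ratio, duration_s) observed within one window."""
    time_over_rated = 0.0
    for ratio, duration in spikes:
        if ratio <= 1.0:
            continue                        # steady-state load, not an excursion
        time_over_rated += duration
        # tightest row that still covers this spike's magnitude
        allowed = next((d for r, d in EXCURSION_LIMITS if ratio <= r), None)
        if allowed is None or duration > allowed:
            return False                    # too big or too long: trip protection
    return time_over_rated <= MAX_DUTY * WINDOW_S

# The mixed case asked about: 150%, 200% and 110% spikes inside one window.
print(spikes_ok([(1.5, 5e-3), (2.0, 80e-6), (1.1, 50e-3)]))  # True
print(spikes_ok([(2.0, 500e-6)]))                            # False: 200% held too long
```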

Power = V²/R, so at a fixed 12 V, cutting the effective resistance in half doubles the current. The cables in the spec will heat up quickly: the power lost to heat in the wiring is I²R, so it grows with the square of the current, and doubling the current quadruples the waste heat. With mixed amperages and duty cycles you are dealing with a compounding heat problem, and the protection logic effectively needs a running waste-heat budget that is tracked over time.
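To make the scaling concrete, here is a minimal sketch with assumed wire resistance and currents (a 16 AWG-ish conductor over an assumed 0.6 m run; nothing here is from the spec) showing that conductor heating is essentially a running I²R·t sum:

```python
# Minimal I^2*R heating sketch with assumed values (nothing here is from the spec).
# A 16 AWG copper conductor is roughly 0.013 ohm per metre; assume a 0.6 m run.
R_WIRE = 0.013 * 0.6   # ohms for one conductor, assumed

def waste_heat_joules(segments):
    """segments: list of (current_A, duration_s); returns total I^2*R*t energy."""
    return sum(i * i * R_WIRE * t for i, t in segments)

# Per-conductor currents below are illustrative only. Doubling the current
# quadruples the heat (quadratic, not exponential), which still adds up fast
# over long, mixed-duty workloads.
base  = waste_heat_joules([(8.0, 1.0)])    # 8 A for 1 s
spike = waste_heat_joules([(16.0, 1.0)])   # 16 A for 1 s
print(spike / base)                        # -> 4.0
```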
 

Yeah, maybe. Since I don't see any announced in conjunction with the spec's finalization, it's probably a bigger task for the PSU OEMs, and there's little need for them right now. That, combined with parts shortages, probably means we won't see these new PSUs until late 2022 or even 2023.
 
Hello, a few thoughts on this. Mostly personal rambles.

I wish we could get a proper write-up on this like igorslab did (igorslab insight into atx 3.0). I am really, doubly so, shocked that this is being championed, as if more electricity for seemingly no good reason is a good thing (along with 600-watt cards), not to mention increased costs, increased complexity, complete redesigns, more faulty DOA components; the list goes on. Those tables Intel put out are far out.

At the very least, I would have liked to see some questioning of why this is a necessity and what some of the ramifications are going to be at the consumer level. Some of these things I have no idea how manufacturers will implement, and how they do it is the only thing I look forward to seeing. I mean, 200%. Why would I need 200% power for any amount of time, regardless of boost? How do the components handle that? That's effectively halving the load's resistance and doubling the current. I feel like this is pretty much starting over, not building upon.

Moore's Law has now become a doubling of power every two years. We'll call it More's Law. This is kind of what happens after you break physics: it collapses in on itself and goes inverse... Intel will soon get into the nuclear energy business to complement their CPUs' needs. There are some things I like, but what I would have liked more is some transparency and communication as to why the world needs this. Nvidia, Intel, AMD, et al. always could have created 600-watt CPUs/GPUs. Actually, Pentium comes to mind... The real story here, I'm guessing, is that microarchitecture design and innovation has plateaued, and they all know more power will soon be needed to achieve any meaningful performance gain. Meanwhile, look at what DLSS and co. and a little (a lot) of hardware-software complementary innovation did for performance. Christopher Walken would approve, at least. Moar. Moar pow-uh!

Never thought I'd need more than 1 kW, what with the whole "efficiency" thing. Why not just make SLI, CrossFire, and SMP work better at this point?! This is wardsback.

[edited for the time allotted]

Peace! Thanks for readin'.
 

Power gating is a technique that powers down parts of the GPU to save power, which also lets it run cooler. But re-enabling those gated circuit stages often creates a massive power inrush. A GPU hardware scheduler can see that certain circuits will be needed and flip the switch to power up those sections well before the instruction is dispatched, but a high inrush current is still needed to make sure the circuit is initialized in time.
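As a toy illustration (a simple RC model with assumed values, not any real GPU's figures), the inrush looks like a nearly discharged local capacitance being connected to the rail through the gate switches:

```python
# Toy RC model of inrush when a power-gated block is re-enabled.
# Every number here is an assumption for illustration, not a real GPU figure.
import math

V_RAIL   = 1.0     # volts on the core rail (assumed)
C_BLOCK  = 4e-6    # farads of local decoupling/parasitic capacitance (assumed)
R_SWITCH = 0.005   # ohms through the power-gate switches (assumed)

def inrush_current(t):
    """Current into the block t seconds after the gate closes (simple RC charge)."""
    return (V_RAIL / R_SWITCH) * math.exp(-t / (R_SWITCH * C_BLOCK))

for t in (0.0, 10e-9, 50e-9, 200e-9):
    print(f"t = {t * 1e9:6.1f} ns  ->  {inrush_current(t):7.1f} A")
# Peak is V/R = 200 A in this toy model, which is why real designs typically
# stagger the gate switches (weak ones first) to spread the inrush out in time.
```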

Second, some MIMD matrix operations / tensor ops are massive power hogs.

These two things together produce sudden surges of current. When a block suddenly demands a big inrush of current, the effective load resistance drops, and by V = IR the rail voltage at the load sags as that current flows through the supply's output impedance and the wiring. That is why you have all these capacitors: they store energy locally and source the transient current to reduce the voltage swings. Voltage swings are what create instability and, in rare cases, damage.
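A back-of-the-envelope droop estimate, with made-up but plausible numbers, shows why that local capacitance matters so much:

```python
# Back-of-the-envelope droop estimate with assumed numbers: how far the rail
# sags if the local capacitor bank alone must cover a current step until the
# voltage regulator responds.  dV = I * dt / C.
I_STEP = 100.0     # amps of sudden extra demand (assumed)
T_RESP = 2e-6      # seconds before the regulator loop catches up (assumed)
C_BANK = 2000e-6   # farads of bulk + ceramic capacitance near the GPU (assumed)

droop_v = I_STEP * T_RESP / C_BANK
print(f"droop ~ {droop_v * 1000:.0f} mV")  # ~100 mV, a big swing on a ~1 V rail
```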

Now, these huge currents are nothing new; they have been around for a decade. What is new is how sudden the bursts are and how sensitive the parts are to them.

As circuits get smaller, the transistor threshold and operating voltages get smaller, while the number of transistors switching keeps growing exponentially. So you end up with lower, more sensitive voltages but much more current.
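A quick illustrative calculation (assumed wattage and rail voltages) makes the point: the same power at a lower rail voltage means proportionally more amps fighting over a smaller noise margin.

```python
# Same power at a lower rail voltage means proportionally more current (I = P/V).
# The wattage and rail voltages below are illustrative assumptions only.
POWER_W = 450.0
for v_rail in (12.0, 1.8, 1.1, 0.8):
    print(f"{v_rail:5.1f} V rail -> {POWER_W / v_rail:7.1f} A")
# ~37 A at the 12 V connector becomes hundreds of amps on the sub-1 V core
# rails, where the noise margin is smallest, so the same droop hurts more.
```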
 