The ATX v3.0 spec prepares the ground for the new GPUs, which will require new PSUs!
Intel's ATX v3.0 PSU Standard Has More Power for GPUs : Read more
My guess is that the PSU manufacturers have had the final specs for a good 6 months now to be able to make their new products. Same with AMD/nVidia for their GPUs. I suspect that we will see these PSUs come out with the next GPU and CPU releases with PCIe 5.0 support.

Were there any estimates on when we might see these new PSUs hit the market, and/or info on the GPUs that require them?
I do like that the new spec includes continuous CPU power ratings for HEDT CPUs. That makes the PSU decision much easier for people who need those CPUs. It also helps Intel, as their CPUs are quite power hungry during boosting.
The over-power-protection IC logic gets a lot more complicated and has to be a lot more accurate. 200% power for 100 µs at a 10% duty cycle is a nebulous one, as I don't think there's any protection circuitry that fast out there. But also, what happens when you start getting mixed spikes, like 150% mixed with 200% and 110%?
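For concreteness, here's a minimal sketch of the bookkeeping that supervision logic would have to do, assuming the excursion tiers widely reported for ATX 3.0 (200% of rated power for 100 µs, 180% for 1 ms, 160% for 10 ms, 120% for 100 ms, each at a 10% duty cycle). The rated power, sample period, and tier values here are illustrative assumptions, not quotes from the spec text:

```python
# Sliding-window over-power check (sketch). Tier table is an assumption
# based on reported ATX 3.0 excursion limits, not the spec text itself.

RATED_W = 1000.0          # hypothetical 1 kW unit
DT = 10e-6                # assumed 10 us sample period

# (max excursion duration in seconds, allowed multiple of rated power)
TIERS = [(100e-6, 2.00), (1e-3, 1.80), (10e-3, 1.60), (100e-3, 1.20)]
DUTY_LIMIT = 0.10         # excursions may occupy at most 10% of the window

def check_trace(samples_w):
    """Return a list of limit violations found in a sampled power trace."""
    violations = []

    # 1) Duty check: fraction of samples above rated power.
    over = [p > RATED_W for p in samples_w]
    duty = sum(over) / len(samples_w)
    if duty > DUTY_LIMIT:
        violations.append(f"duty {duty:.0%} exceeds {DUTY_LIMIT:.0%}")

    # 2) Duration check: each contiguous excursion must fit some tier.
    i = 0
    while i < len(samples_w):
        if over[i]:
            j = i
            while j < len(samples_w) and over[j]:
                j += 1
            length_s = (j - i) * DT
            peak = max(samples_w[i:j]) / RATED_W
            # Legal only if some tier covers both duration and peak.
            if not any(length_s <= d and peak <= m for d, m in TIERS):
                violations.append(
                    f"{peak:.2f}x for {length_s*1e3:.2f} ms fits no tier")
            i = j
        else:
            i += 1
    return violations

# Example: a 200% spike lasting 200 us, too long for the 200% tier.
trace = [RATED_W] * 900 + [2.0 * RATED_W] * 20 + [RATED_W] * 80
print(check_trace(trace))   # -> ['2.00x for 0.20 ms fits no tier']
```

Mixed spikes fall out naturally in this scheme: each contiguous excursion just has to fit some tier covering both its peak and its duration, while the duty check runs over the whole window. That per-excursion bookkeeping is exactly what a simple analog OPP trip point can't do.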
Power = V²/R and current = V/R, so cut the effective load resistance in half at a fixed 12 V and your amps double. The cables in the spec will heat up QUICKLY: conductor loss is I²R, so it grows with the square of the current; double the amps and you quadruple the heat. So you are dealing with a quadratic heating problem with mixed amperages and duty cycles. It will require a total waste-heat power table that tracks over time.
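To put rough numbers on that, here's a back-of-envelope sketch. The 12 V rail is real; the cable resistance, base load, and spike mix are made-up assumptions for illustration, and it treats the whole load as drawn through one 12 V feed:

```python
# Back-of-envelope cable heating for a mixed spike pattern.
# All specific numbers below are illustrative assumptions.

V = 12.0                  # 12 V rail
R_CABLE = 0.010           # assumed 10 milliohm round-trip cable resistance

# (power draw as a multiple of a 600 W base load, fraction of time there)
PATTERN = [(1.00, 0.90), (1.10, 0.05), (1.50, 0.03), (2.00, 0.02)]
BASE_W = 600.0

avg_loss = 0.0
for multiple, frac in PATTERN:
    current = multiple * BASE_W / V          # I = P / V
    loss = current**2 * R_CABLE              # cable dissipation, I^2 * R
    avg_loss += frac * loss
    print(f"{multiple:.2f}x -> {current:5.1f} A, {loss:6.1f} W in the cable")

print(f"time-averaged cable loss: {avg_loss:.1f} W")
```

Because loss scales with I², the rare 200% spikes contribute less to the average heat than their peaks suggest, but any waste-heat table still has to weight each current level by its duty cycle over time.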
Peace! Thanks for readin'.
Hello, a few thoughts on this. Mostly personal rambles.

I wish we could get a proper write-up on this like igorslab did (igorslab insight into ATX 3.0). I am really, and doubly so, shocked that this is being championed, as if more electricity for seemingly no good reason were a good thing (along with 600-watt cards), not to mention the increased costs, increased complexity, complete redesigns, and more faulty DOA components; the list goes on. Those tables Intel put out are far out.
At the very least, I would have liked to see some questioning of why this is necessary and what some of the ramifications will be at the consumer level. I have no idea how manufacturers will implement some of these things, and how they do it is the one part I look forward to seeing. I mean, 200%. Why would I need 200% power for any amount of time, regardless of boost?? How do the components handle that? At the same voltage, that's doubling the amps. I feel like this is pretty much starting over, not building upon.
Moore's Law has now become a doubling of power every 2 years. We'll call it More's Law. This is kind of what happens after you break physics: it collapses in on itself and goes inverse... Intel will soon get into the nuclear energy business to complement their CPUs' needs. There are some things I like, but what I would have liked more is some transparency and communication as to why the world needs this. Nvidia, Intel, AMD, et al. always could have created 600-watt CPUs/GPUs; actually, Pentium comes to mind... The real story here, I'm guessing, is that microarchitecture design and innovation have plateaued, and they all know more power will soon be needed to achieve any meaningful performance gain. Meanwhile, look at what DLSS and co. and a little (a lot) of complementary hardware-software innovation did for performance. Christopher Walken would approve, at least. Moar. Moar Pow-uh!
Never thought I'd need more than 1 kW, what with the whole "efficiency" thing. Why not just make SLI, CrossFire, and SMP work better at this point?! This is backwards.
[edited for the time allotted]