News: GPU compatibility dilemma as more high-end power supplies ditch 8-pin connectors in favor of new 16-pin

The spread of this will likely force AMD into using that 12VHPWR connector.

As long as AMD treats that connector the way Nvidia did on the 30x0 series of video cards, separating the power, and not the way it did on the 40x0 and 50x0 series, AMD will be fine.

The 30x0 cards didn't melt. The connector isn't the problem, and these power supplies aren't the problem. Nvidia is the problem. There isn't any reason they should have gone so cheap on this.
 
You really don't need a 1 kW PSU unless you're powering one of Nvidia's stupid high-power cards, which will use these connectors anyway. So it makes sense that these PSUs focus on that. But also, if you wanted to power a more reasonable GPU that uses a couple of 8-pin connectors, you can just use a 12V-2x6 to multiple 8-pin cable adapter. Yes, now you need an adapter to go back instead of one to go forward, but at least doing it this way likely won't cause the same melting issues.
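For a sense of the headroom involved, here's a quick sanity check using the published spec numbers (150W per 8-pin PCIe plug, 75W from the slot, 600W for a 12V-2x6); the function is just illustrative arithmetic, not anything official:

```python
# Rough power-budget check for running an 8-pin GPU off a PSU-side
# 12V-2x6 connector through an adapter. Spec figures: 150 W per
# 8-pin PCIe plug, 600 W max for a 12V-2x6, 75 W from the slot.
PCIE_8PIN_W = 150
SLOT_W = 75
CONNECTOR_12V2X6_W = 600

def adapter_budget(num_8pin: int) -> None:
    gpu_max = num_8pin * PCIE_8PIN_W + SLOT_W
    cable_load = num_8pin * PCIE_8PIN_W  # what the adapter actually carries
    headroom = CONNECTOR_12V2X6_W - cable_load
    print(f"{num_8pin}x 8-pin GPU: up to {gpu_max} W total, "
          f"{cable_load} W over the adapter, {headroom} W of headroom left")

adapter_budget(2)  # 2x 8-pin: up to 375 W total, 300 W on the adapter, 300 W headroom
adapter_budget(3)  # 3x 8-pin: up to 525 W total, 450 W on the adapter, 150 W headroom
```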
 
The Asus ROG Astral RTX 5090 has per-pin monitoring (called Power Detector+), which monitors amps and volts on each individual wire/pin of the 12VHPWR connector. A rough estimate is that it might have cost Asus $100 per card to do that, because it is a custom thing. If Nvidia did it on all cards, the price would probably be less than $20 per card.
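Asus hasn't published how Power Detector+ works internally, but the kind of logic per-pin telemetry enables is simple; here's a hypothetical sketch (the limits, thresholds, and example readings are all made up for illustration):

```python
# Hypothetical sketch of what per-pin monitoring could look like in
# firmware. Asus hasn't documented Power Detector+ internals; the
# thresholds and readings here are illustrative only.
PIN_LIMIT_A = 9.5        # per-pin rating of a 12V-2x6 terminal
IMBALANCE_RATIO = 2.0    # warn if one pin carries 2x the average

def check_pins(currents_a: list[float]) -> list[str]:
    warnings = []
    avg = sum(currents_a) / len(currents_a)
    for i, amps in enumerate(currents_a):
        if amps > PIN_LIMIT_A:
            warnings.append(f"pin {i}: {amps:.1f} A exceeds {PIN_LIMIT_A} A rating")
        elif avg > 0 and amps > IMBALANCE_RATIO * avg:
            warnings.append(f"pin {i}: {amps:.1f} A vs {avg:.1f} A average (imbalanced)")
    return warnings

# A healthy ~575 W load spreads about 8 A across six pins; a bad crimp
# shifts current onto the remaining pins:
print(check_pins([8.0, 8.0, 8.0, 8.0, 8.0, 8.0]))   # []
print(check_pins([1.0, 15.5, 8.0, 8.0, 8.0, 8.0]))  # pin 1 over its limit
```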

From a cost perspective, it doesn't seem possible for PC PSUs to do that for each wire in each of their plugs. It would be great if PC PSUs could do it, but it isn't likely to happen any time in the immediate future.

But if this MSI PSU had that feature and added the cost only to the 12VHPWR plugs, I guess that might happen.

To me, the reason this is an issue is that 50 amps is a mind-boggling amount of current for a run-of-the-mill consumer to deal with. My electric oven and my electric clothes dryer pull 30 amps. Have you ever noticed how thick a clothes dryer's electric cable is?
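For anyone following along, the arithmetic behind that 50 amps (using the 600W rating of the 12V-2x6) is a one-liner:

```python
# Where 50 A comes from: a 600 W-class card over a single 12 V connector.
watts, volts, pins = 600, 12.0, 6
total_amps = watts / volts            # 50.0 A total through one small plug
per_pin = total_amps / pins           # ~8.3 A per pin if shared perfectly
print(f"{total_amps:.0f} A total, {per_pin:.1f} A per pin")
# Compare: a US dryer circuit is 30 A but at 240 V, i.e. 7200 W,
# through far thicker conductors.
```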
 
The connector isn't the problem...
I disagree.

Versus the 8-pin, the safety factor was lowered too much, and the wire gauge is inadequate for the potential power draw across just one or two wires (if no extra shunt resistors are used).

The reason the 3000-series FE 12VHPWR cards were OK is the extra shunt resistors NVIDIA had on all of those cards. These resistors are NOT part of the connector spec and actually masked the inadequacies of the 12VHPWR connector.

If shunt resistors are required on the GPU to make a connector design safe, then the connector design itself is still at fault.
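A minimal sketch of why that shunt-based separation matters, assuming a three-rail grouping like the 30-series FE boards used (the grouping and limits below are illustrative, not Nvidia's actual firmware):

```python
# Why per-rail shunt monitoring helps: if the six 12 V wires are split
# into three monitored pairs, no single failure can silently push the
# whole load down one wire. Rail grouping and limit are illustrative.
RAIL_LIMIT_A = 19.0   # two pins at 9.5 A each

def balanced_draw(rail_currents_a: list[float]) -> bool:
    """Return True if every monitored rail is within its limit."""
    return all(amps <= RAIL_LIMIT_A for amps in rail_currents_a)

# Healthy: 48 A spread as 16 A per rail.
print(balanced_draw([16.0, 16.0, 16.0]))   # True -> card runs normally
# One rail's contacts degrade and the others pick up the slack:
print(balanced_draw([4.0, 22.0, 22.0]))    # False -> card can throttle or shut down
```

With one merged rail (the 40x0/50x0 approach), the board only sees total current, so the second case looks identical to the first.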
 
These resistors are NOT part of the connector spec and actually masked the inadequacies of the 12VHPWR connector.

If shunt resistors are required on the GPU to make a connector design safe, then the connector design itself is still at fault.
I can see the logic. That explains very well why the RTX 30x0 series never melted (or at least, melting was so one-off and so random that it's practically unheard of).

Still doesn't explain why Nvidia has not abandoned its terrible decision to use that connector in the first place. Besides, there's nothing that prevents Nvidia from going off-spec to protect its own customers. They did it initially for the 30x0, so why not keep doing it?

For reference:
https://www.tomshardware.com/pc-components/gpus/nvidias-rtx-5090-power-cables-may-be-doomed-to-burn



Melting video cards is not a good look. It doesn't matter who the company is.

I sincerely hope AMD requires the use of the three-stage shunt design among its board partners in order to spread out the cable/power delivery so it doesn't melt, following the model of the 30x0s. It would put AMD in a no-excuse situation to have melting video cards after all Nvidia has put its customers through.
 
Regarding the PSUs not being compatible: if you're buying a PSU with two 12VHPWR/2x6 connectors, you're not worried about 8-pin PCIe.

Regarding the spec:
According to Buildzoid, the PCI-SIG specification for the connector calls for all pins to be connected together. That means the design Nvidia used with the 30 series wouldn't actually adhere to the specification, despite being a much better way to implement the connector. It's wild to me that manufacturing variance was apparently never taken into account when designing this specification.
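Buildzoid's point about manufacturing variance is easy to demonstrate: with every pin paralleled into one rail, current divides by contact resistance, so one slightly better (or worn) contact hogs the load. A toy model, with made-up but magnitude-plausible milliohm values:

```python
# Toy model of why tying all six 12 V pins together is fragile: when
# everything is paralleled, current splits in proportion to each pin's
# conductance. Resistance values here are invented for illustration.
def current_split(total_amps: float, resistances_mohm: list[float]) -> list[float]:
    conductances = [1.0 / r for r in resistances_mohm]
    g_total = sum(conductances)
    return [total_amps * g / g_total for g in conductances]

nominal = current_split(50.0, [5.0] * 6)
worn    = current_split(50.0, [2.0, 8.0, 8.0, 8.0, 8.0, 8.0])  # one low-resistance pin
print([round(a, 1) for a in nominal])  # ~8.3 A on every pin
print([round(a, 1) for a in worn])     # ~22.2 A on the odd pin, far above a 9.5 A rating
```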
 
Based on a review you guys wrote on PSUs, you recommended the MSI MAG power supplies. I decided to order this one for $70 from Amazon for my new build. It's getting an RTX 5070 Ti.

MSI MAG A750BE Gaming Power Supply - 80 Plus Bronze Certified 750W - Semi-Modular - Low Noise ATX PSU

 
The spread of this will likely force AMD into using that 12VHPWR connector.
Will never happen. Intel and AMD both examined the proposal way back when it was announced and decided against using it, and they have been fully vindicated. It's a tragically bad design that operates with a pathetically poor 10% margin of safety at best, as opposed to the 8-pin Molex's 90% safety margin.

The last thing we need to embrace is yet another Nvidia proprietary standard. I'll bet AMD and Intel will develop a new alternative to the 8-pin if they find the need to do so.
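For anyone curious where those margin figures come from, here's the back-of-the-envelope version; the exact numbers depend on which terminal ratings you assume (commonly cited ones are ~8 A per pin for the 8-pin's Mini-Fit Jr terminals and 9.5 A per pin for 12V-2x6):

```python
# Back-of-the-envelope connector safety margins, using commonly cited
# per-pin terminal ratings. Exact figures vary with the terminals used.
def margin(pins: int, amps_per_pin: float, rated_watts: float, volts: float = 12.0) -> float:
    capacity_w = pins * amps_per_pin * volts   # physical capacity of the 12 V pins
    return (capacity_w - rated_watts) / rated_watts * 100

print(f"8-pin PCIe: {margin(3, 8.0, 150):.0f}% margin")   # ~92%
print(f"12V-2x6:    {margin(6, 9.5, 600):.0f}% margin")   # ~14%
```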
 
The spread of this will likely force AMD into using that 12VHPWR connector.

As long as AMD treats that connector the way Nvidia did on the 30x0 series of video cards, separating the power, and not the way it did on the 40x0 and 50x0 series, AMD will be fine.

The 30x0 cards didn't melt. The connector isn't the problem, and these power supplies aren't the problem. Nvidia is the problem. There isn't any reason they should have gone so cheap on this.
I feel it is not a problem. AMD GPUs generally don't draw more than 350W currently. The risk with 12VHPWR only increases at very high power draw. I've used this connector on an RTX 4080 and subsequently an RTX 4070 Ti Super, and there were no problems.
 
I feel it is not a problem. AMD GPUs generally don't draw more than 350W currently. The risk with 12VHPWR only increases at very high power draw. I've used this connector on an RTX 4080 and subsequently an RTX 4070 Ti Super, and there were no problems.
While it is true that most AMD cards don't draw more than 350W, that doesn't change the fact that a poorly designed connector is melting and killing both PSUs and GPUs.

I think AMD and Intel will eventually move off of the 8-pin PCIe. Whether it is to version 5 of the 12V-2x6 or something altogether new remains to be seen.
 
Will never happen. Intel and AMD both examined the proposal way back when it was announced and decided against using it, and they have been fully vindicated. It's a tragically bad design that operates with a pathetically poor 10% margin of safety at best, as opposed to the 8-pin Molex's 90% safety margin.

The last thing we need to embrace is yet another Nvidia proprietary standard. I'll bet AMD and Intel will develop a new alternative to the 8-pin if they find the need to do so.
Intel was heavily involved in the development of the 12VHPWR connector. It is officially part of their ATX 3.0 standard, and they made revisions and recommendations for the connector before adding it.

Intel does use 12VHPWR on their datacenter GPUs. That's the expensive equipment that needs reliability, which demonstrates Intel's belief in the spec. The highest-TDP desktop GPU Intel has released is 225W. None of their cards exceed one 8-pin connector plus the PCIe slot's power limit. There's no reason for Intel to use the connector on the desktop until their TDPs increase significantly.
 
Intel was heavily involved in the development of the 12VHPWR connector. It is officially part of their ATX 3.0 standard, and they made revisions and recommendations for the connector before adding it.

Intel does use 12VHPWR on their datacenter GPUs. That's the expensive equipment that needs reliability, which demonstrates Intel's belief in the spec. The highest-TDP desktop GPU Intel has released is 225W. None of their cards exceed one 8-pin connector plus the PCIe slot's power limit. There's no reason for Intel to use the connector on the desktop until their TDPs increase significantly.
Two 8-pin Molex connectors can reliably deliver 400W with a still-significant safety margin. So with the 75W from PCIe, that's 475W. Intel won't be delivering any GPU that approaches that sort of draw in the next 5 years at least, even assuming dGPUs continue to be a thing for them. AMD could hit that with, say, a 9080 XT if it had 33% more cores than the 9070 XT, I guess, but it would probably spec three 8-pin connectors.

Maybe Intel has engineered its data-center connector differently from Nvidia, which has chosen worst-in-class practices or has zero QC over the connector.
 
I can see the logic. That explains very well why the RTX 30x0 series never melted (or at least, melting was so one-off and so random that it's practically unheard of).

Still doesn't explain why Nvidia has not abandoned its terrible decision to use that connector in the first place. Besides, there's nothing that prevents Nvidia from going off-spec to protect its own customers. They did it initially for the 30x0, so why not keep doing it?

For reference:
https://www.tomshardware.com/pc-components/gpus/nvidias-rtx-5090-power-cables-may-be-doomed-to-burn



Melting video cards is not a good look. It doesn't matter who the company is.

I sincerely hope AMD requires the use of the three-stage shunt design among its board partners in order to spread out the cable/power delivery so it doesn't melt, following the model of the 30x0s. It would put AMD in a no-excuse situation to have melting video cards after all Nvidia has put its customers through.

Honestly, the issue isn't the connector; it's the way the 4090 and the 5090 are designed. They don't have enough shunt resistors.

The 3000 series didn't melt because it had plenty of shunt resistors on the board.

The 4000 series had fewer, but the reference design wasn't as power hungry; the high-spec models had more issues.

The 5000 series design is absolute garbage. They essentially took the brakes off and let the power just slam the GPU if the cable has any issues. E.g., cut five of the wires and you now have one wire carrying everything, a cable that will catch fire. Four out of six pins not hitting their targets? That's fine, fire starter, go. (See the worked numbers after the video below.)


Video: https://youtu.be/kb5YzMoVQyw
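To put rough numbers on that worst case (575W is approximately 5090-class board power; the single-surviving-wire scenario is the one described above):

```python
# Numbers for the cut-wire scenario: if only one of the six 12 V
# wires still makes good contact, it carries the entire load.
watts, volts = 575, 12.0           # roughly RTX 5090-class board power
amps_one_wire = watts / volts      # ~48 A through a pin rated ~9.5 A
print(f"{amps_one_wire:.0f} A through a single ~9.5 A pin")
# With no per-wire or per-rail sensing, the card only sees total
# power, which still looks normal, so it keeps pulling.
```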
 
Two 8-pin Molex connectors can reliably deliver 400W with a still-significant safety margin. So with the 75W from PCIe, that's 475W. Intel won't be delivering any GPU that approaches that sort of draw in the next 5 years at least, even assuming dGPUs continue to be a thing for them. AMD could hit that with, say, a 9080 XT if it had 33% more cores than the 9070 XT, I guess, but it would probably spec three 8-pin connectors.

Maybe Intel has engineered its data-center connector differently from Nvidia, which has chosen worst-in-class practices or has zero QC over the connector.
As someone already mentioned, two 8-pins plus PCIe will get you 375W, not 475W. Regardless, Intel not needing the connector is a long way from refusing to use it because it was determined faulty, as you falsely claimed. Pretty much everything you said is just made up.
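For the record, the spec arithmetic:

```python
# Each PCIe 8-pin plug is rated 150 W, and the x16 slot contributes
# up to 75 W.
print(2 * 150 + 75)  # 375 W by spec, not 475 W
```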
[Images: Intel Data Center GPU Max 1100 Series PCIe cards at SC22]