News AMD Radeon RX 7000 Cards Won't Use 'Hot' 16-Pin Power, Says Industry Veteran

This new PCIe plug standard is quickly becoming a scandal. I won't be surprised if we see PCI-SIG reverse course on this connector. Honestly, they should have stuck with the same size pins from previous gens. As soon as I saw these new tiny pins after release, I cringed at the thought of how easy they'll be to damage. Unfortunately, the posts I am seeing about needing a proper 32mm of straight cable before bending only reinforce my concerns. I am not sure even 90-degree connections can save this standard. I expect class action lawsuits before it's all said and done if more burnt plugs/cards keep popping up.
 
This new PCIe plug standard is quickly becoming a scandal. I won't be surprised if we see PCI-SIG reverse course on this connector. Honestly, they should have stuck with the same size pins from previous gens. As soon as I saw these new tiny pins after release, I cringed at the thought of how easy they'll be to damage.
I doubt the size of pins is an issue. I suspect you would have the same issue or possibly worse with a bigger connector since that means an even larger path length difference between wires inside and outside the bend radius, which in turn translates to even more strain on the wires and crimps.
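A quick back-of-the-envelope way to see that path-length argument (the numbers here are illustrative assumptions, not from any spec):

```python
import math

# Illustrative only: estimate how much extra length the "outer" wires of a
# multi-wire cable need when the bundle makes a 90-degree bend.
bend_angle = math.pi / 2     # 90-degree bend, in radians
cable_span = 0.012           # assumed ~12 mm span between innermost and outermost wires (m)

# Wires on the outside of the bend follow a larger radius than wires on the
# inside; the extra arc length is (radius difference) * (bend angle).
extra_length_mm = cable_span * bend_angle * 1000
print(f"Outer wires need ~{extra_length_mm:.1f} mm more length than inner wires")
# That slack has to come from somewhere: either the wires slide within the
# sleeving, or the tension ends up at the crimps inside the connector.
```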
 
I doubt the size of pins is an issue. I suspect you would have the same issue or possibly worse with a bigger connector since that means an even larger path length difference between wires inside and outside the bend radius, which in turn translates to even more strain on the wires and crimps.

Either way, it's a plug design issue: the lack of strain relief keeps the internal female pins from seating properly and making good contact with the male pins on the receptacle end.

I'm glad AMD is not using this 12VHPWR standard.

Being cautious & conservative about such big changes is good for AMD.

This is mostly a self-inflicted wound from Nvidia.
 
Either way, it's a plug design issue: the lack of strain relief keeps the internal female pins from seating properly and making good contact with the male pins on the receptacle end.
The connector is fine, the problem is with the application: Nvidia, AIBs and others involved failed to consider the practical aspects of how those cables would actually get used inside PC cases when they chose their connector placement and cable designs.

No connector likes having a hard bend at the plug. Copper work-hardens so sharp bends should be avoided as much as possible to avoid brittle failure over time. If you need a sharp bend, use cables with an angled plug or appropriate adapter.
 
I doubt the size of pins is an issue. I suspect you would have the same issue or possibly worse with a bigger connector since that means an even larger path length difference between wires inside and outside the bend radius, which in turn translates to even more strain on the wires and crimps.

Greater surface area leads to better connections. But the real problem, I suspect, is the cable gauge. While technically cables that thin can support the requested current, as the article states, there's a minimum radius before individual strands start breaking internally. These are stranded wires and not solid core, as stranded is more flexible. Stranded is also better with high-frequency pulses due to the skin effect, but it's subject to breakage in tight corners. A lower gauge number (thicker cable) holds up better against sharp bends. Also, increasing the number of strands means higher current capacity and better natural strain relief, as the strands support one another.
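As a rough sanity check on "cables that thin can support the requested current", here's a minimal sketch; the 16AWG resistance is a standard table value, while the 600W figure and cable length are assumptions:

```python
# Rough I^2*R sanity check for a 600 W load shared across six 12 V wires.
power_w = 600.0      # assumed worst-case GPU draw
voltage = 12.0
wires = 6            # six 12 V supply conductors in the connector
r_per_m = 0.0132     # ~13.2 milliohm per metre, standard value for 16AWG copper
length_m = 0.6       # assumed cable run

current_total = power_w / voltage            # ~50 A
current_per_wire = current_total / wires     # ~8.3 A per wire
loss_per_wire_w = current_per_wire ** 2 * r_per_m * length_m

print(f"~{current_per_wire:.1f} A per wire, "
      f"~{loss_per_wire_w:.2f} W of resistive heating per wire over {length_m} m")
# About half a watt per wire is manageable -- provided every crimp and pin
# actually makes good contact so the current really is shared evenly.
```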
 
The connector is fine, the problem is with the application: Nvidia, AIBs and others involved failed to consider the practical aspects of how those cables would actually get used inside PC cases when they chose their connector placement and cable designs.

No connector likes having a hard bend at the plug. Copper work-hardens so sharp bends should be avoided as much as possible to avoid brittle failure over time. If you need a sharp bend, use cables with an angled plug or appropriate adapter.
Makes me glad the old PCIe 6/8-pin connectors have more forgiving tolerances for bending of the wires at the base.
 
In the USA, the NEC (National Electrical Code) has certain provisions regarding sizing/gauge and the radii of bends and connections, primarily to prevent 'scrunching' of the PVC insulation and nylon jacket and to avoid any detriment to the overall conductor ampacity.

Off the top of my head, the general rule is 6-8 times the diameter of the conductor inclusive of jacket and insulation. ( . . dang those egg-head electrical engineers! . . )
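Applied to a GPU power wire, that 6-8x rule of thumb works out roughly like this (the outer diameter used here is an assumption, not a measured value):

```python
# Minimum bend radius from the "6-8x overall diameter" rule of thumb.
wire_od_mm = 2.4    # assumed outer diameter of a 16AWG wire including insulation
for multiplier in (6, 8):
    print(f"{multiplier}x rule: keep the bend radius above ~{multiplier * wire_od_mm:.0f} mm")
# Roughly 14-19 mm of bend radius -- hard to respect when the connector sits on
# top of a 3-4 slot card, pointing straight at the side panel.
```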
 
But the real problem I suspect is the cable gauge. While technically cables that thin can support the requested current, as the article states, there's a minimum radius before there is internal breakage of individual strands.
There is no miracle to be had from having bigger wires: the bigger the wires are, the more the outer strands have to stretch and the more extreme the strain at the terminal crimps becomes when cables get over-bent, which end-users are practically guaranteed to do no matter how much you attempt to thicken and stiffen them. If the choice of connectors and cables for your product forces people to use extreme cable bends to fit it in their computer, the cables will get kinked to hell and back, practically guaranteeing an elevated failure rate.
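The "outer strands have to stretch" point can be made concrete with a simple bending-strain estimate (the conductor diameters are standard AWG values; the bend radius is an assumption):

```python
# Bending strain at the outside of a wire is roughly (wire radius) / (bend radius).
# A bigger conductor puts its outer strands farther from the neutral axis, so
# the same bend stretches them more.
bend_radius_mm = 15.0    # assumed bend close behind the connector
for awg, dia_mm in ((16, 1.29), (12, 2.05)):    # standard bare-conductor diameters
    strain = (dia_mm / 2) / bend_radius_mm
    print(f"{awg} AWG: ~{strain * 100:.1f}% elongation of the outermost strands")
# Going from 16 AWG up to 12 AWG raises the outer-strand strain by ~60% for the
# same bend -- the "no miracle from bigger wires" point above.
```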
 
A well-written article that logically presents the evidence and what knowledgeable sources are saying. Thanks.

This looks like a bad mistake by NVidia. Although of course, in raw numbers, not many 4090s will be sold anyway. So beyond being a black eye in the tech community, this may not be a big issue for NVidia in the scheme of things.
 
There is no miracle to be had from having bigger wires: the bigger the wires are, the more the outer strands have to stretch and the more extreme the strain at the terminal crimps becomes when cables get over-bent, which end-users are practically guaranteed to do no matter how much you attempt to thicken and stiffen them. If the choice of connectors and cables for your product forces people to use extreme cable bends to fit it in their computer, the cables will get kinked to hell and back, practically guaranteeing an elevated failure rate.

I'm not going to argue with you. I know I'm right. As they say, invalid. Have a nice day. It's not worth it.
 
Well, if AMD can use the older PCIe connectors, use less power, and be faster in raster than the RTX 4090, they'll have a winner on their hands. They just need to hurry up and release the cards already.
The 4090 is ridiculous. If AMD can release a competitive card at the right price, many in the tech world will appreciate it. It sounds like they might have something that can take on the 4090, but that's just rumor at this time. If they can even match the 4070 or 4080 at much less power, it would be a huge win.

That being said, chiplets on a GPU have a lot of promise. The multi-core performance on Ryzen has been fantastic, and GPUs are very multi-core-like.

A well-written article that logically presents the evidence and what knowledgeable sources are saying. Thanks.

This looks like a bad mistake by NVidia. Although of course, in raw numbers, not many 4090s will be sold anyway. So beyond being a black eye in the tech community, this may not be a big issue for NVidia in the scheme of things.
I kind of give Nvidia a little slack for adopting the new standard itself, but I didn't like the adapter they came up with, or that ridiculous port on the top of the card. It's a 3- or 4-slot GPU; they couldn't run the power cable into the side of the card somehow instead?
 
Isn't this 12VHPWR/16-pin connector the same one used on the high-end Founders Edition 3000-series GPUs?

There were no issues with those. If the new AMD cards don't draw the power that the new NVIDIA cards do, then why wouldn't they use the more compact connector?
 
I doubt the size of pins is an issue. I suspect you would have the same issue or possibly worse with a bigger connector since that means an even larger path length difference between wires inside and outside the bend radius, which in turn translates to even more strain on the wires and crimps.
Can some technician explain to these people why you need a robust connector and thick cables when running high current through them?
There could be up to 50A running through that thing. The fact that it is so fragile is ridiculous.
 
Can some technician explain to these people why you need a robust connector and thick cables when running high current through them?
There could be up to 50A running through that thing. The fact that it is so fragile is ridiculous.
You shouldn't need "robust cables" inside a protective static enclosure in a generally controlled environment. The aggregate copper across the six wires is enough to pass ~100A with perfect current balancing, so there is a ~100% safety factor there; the amount of copper is nowhere near being an issue. Nvidia, the PCI-SIG and Intel just failed to account for the practical realities of using that connector in real-world PC cases with actual 600W GPUs, which make significant bends near the connector inevitable under most circumstances.

If you don't want to worry about current balance between a bunch of small wires, then yes, use a connector with bigger pins and wires. It won't magically solve issues with bends though, excessive or repeated bending will still cause wire strands to break along the outside radius.
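As a rough illustration of the headroom argument above (the per-wire capacity is an assumed chassis-wiring figure for 16AWG copper, not a connector pin rating):

```python
# Sketch of the headroom argument: total demand vs. aggregate wire capacity.
power_w = 600.0
voltage = 12.0
wires = 6
amps_per_wire = 16.0    # assumed chassis-wiring ampacity for 16AWG copper

demand_a = power_w / voltage          # ~50 A total
capacity_a = wires * amps_per_wire    # ~96 A aggregate, if perfectly balanced
print(f"Demand ~{demand_a:.0f} A vs. ~{capacity_a:.0f} A aggregate capacity "
      f"(~{capacity_a / demand_a:.1f}x margin)")
# The margin only holds while the current is shared evenly; if a few pins lose
# contact, the remaining wires carry far more than their share -- which is
# exactly the reported failure mode.
```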
 
digitalgriffin, you're full of it.
Stranded wires do have better performance at high frequencies due to the skin effect, as he said. Though it doesn't matter much in DC power distribution, where local bypass capacitors take care of the high-frequency stuff, and you may actually want to use the skin effect to dampen high-frequency ringing that could be present on the wires due to the PSU bypass capacitors, local bypass, and wiring inductance forming a resonant circuit.
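For a sense of scale, here's a quick skin-depth estimate for copper (standard material constants; the frequencies are assumed ripple/switching values, not anything from the spec):

```python
import math

# Skin depth in copper: delta = sqrt(2 * rho / (omega * mu_0)).
rho_copper = 1.68e-8         # resistivity of copper, ohm*m
mu_0 = 4 * math.pi * 1e-7    # permeability of free space, H/m
for freq_hz in (100e3, 1e6):     # assumed ripple/switching-noise frequencies
    omega = 2 * math.pi * freq_hz
    delta_mm = math.sqrt(2 * rho_copper / (omega * mu_0)) * 1000
    print(f"{freq_hz / 1e3:.0f} kHz: skin depth ~{delta_mm:.2f} mm")
# ~0.21 mm at 100 kHz against a ~0.65 mm conductor radius for 16AWG: only
# high-frequency ripple is affected; the DC current the GPU actually draws
# uses the full cross-section.
```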
 
You shouldn't need "robust cables" inside a protective static enclosure in a generally controlled environment. The aggregate copper across the six wires is enough to pass ~100A with perfect current balancing, so there is a ~100% safety factor there; the amount of copper is nowhere near being an issue. Nvidia, the PCI-SIG and Intel just failed to account for the practical realities of using that connector in real-world PC cases with actual 600W GPUs, which make significant bends near the connector inevitable under most circumstances.

If you don't want to worry about current balance between a bunch of small wires, then yes, use a connector with bigger pins and wires. It won't magically solve issues with bends though, excessive or repeated bending will still cause wire strands to break along the outside radius.
Thank you for the explanation. I didn't know such wire could sustain even 100A. I thought it was like with starter cables: a bigger diameter means no power loss and you won't melt the cables.