News PCI-SIG Tells GPU Makers to Improve Testing in Response to Nvidia 12VHPWR Lawsuit

RichardtST

Notable
May 17, 2022
235
264
960
So, the PCI-SIG basically said "we just make the spec, it's your problem to make sure it is safe". That's just such an inane cya statement. The spec is clearly defective in this case. Nvidia is also liable because they ignored all the warning signs too. But PCI-SIG is not blameless here. Anyone with half a brain (or more!) can see that that connector is simply not capable of safely carrying that much current. The spec needs to be withdrawn and retracted. The responsible parties at PCI-SIG (and Nvidia) need to get the boot.
 

YouFilthyHippo

Prominent
Oct 15, 2022
166
83
660
Look at those tiny little pins on that stupid thing. You aren't SAFELY jamming 600W through all those tiny little pins on one connector, not even close. There is a reason we had 8 pin connectors. The 8 pins leave much more headroom and are far safer. There is still plenty of PCB real estate to include 4 8-pins. This 12-pin garbage is just not happening. How did that connector even pass quality control?

If someone offered me a 4090 for free, I wouldn't use it. I would off it for 80% of MSRP and tell the buyer: you wanna burn your house down, you do you. It's just not worth the risk. Stop delaying the inevitable and quit being so stubborn. Just get the recall over with already so we can move on. Let it be a lesson learned. Stick with the 8-pins. Don't fix what ain't broken. I hope Nvidia gets pummeled in this lawsuit. They deserve it for senselessly allowing the safety of their customers to be compromised.
 

InvalidError

Titan
Moderator
Look at those tiny little pins on that stupid thing. You aren't SAFELY jamming 600W through all those tiny little pins on one connector, not even close.
In one of GN's videos, they chopped off four of the six 12V pins (leaving only two) and still couldn't get the connector to fail over 4+ hours of testing, so the connector itself can actually handle 1200+W at 12V.
 
The materials of the cable are solidly within margin for the wattage it is designed to deliver. What was up in the air was what was causing connectors to melt. GN's testing made it clear that the cable could handle the loads required of it, but only when it is fully and properly seated in the port. Plenty of questions about what were essentially manufacturing defects were left on the table as a possible cause, but it was quickly found that the cable itself was not inadequate for providing the power the cards required. This is also essentially vindication for JonnyGuru and his assertion that the cause was user error, namely not plugging the connector in all the way. So many people called him an Nvidia shill and claimed he had seppuku'd his reputation for Nvidia; shame on those who jumped to conclusions with no basis to do so.
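A quick back-of-the-envelope check of that scaling claim; the 500 W sustained load used here is an assumed round number, not a figure from GN's video:

```python
# Back-of-the-envelope check of the "1200+ W" claim (assumed numbers, not GN's).
card_load_w = 500          # assumed sustained draw during the torture test, in watts
v = 12.0                   # supply voltage
pins_used = 2              # 12V pins left after chopping four of six
pins_total = 6             # 12V pins in a full 12VHPWR connector

amps_per_pin = card_load_w / v / pins_used          # ~20.8 A per surviving pin
implied_capacity_w = amps_per_pin * pins_total * v  # ~1500 W if all six pins matched that

print(f"{amps_per_pin:.1f} A per pin -> ~{implied_capacity_w:.0f} W across six pins")
```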
 

aberkae

Distinguished
Oct 30, 2009
102
29
18,610
Like Steve at Gamers Nexus said, there should be feedback from the system that does not allow a boot if the cable is not fully plugged in.
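For reference, the 12VHPWR sideband does include two sense pins a card can read, so a refuse-to-boot check along these lines is at least plausible. A minimal sketch, with a simplified wattage mapping and hypothetical function names (not Nvidia's actual firmware):

```python
# Sketch of a seating check built on the 12VHPWR sideband SENSE0/SENSE1 pins.
# Pin reads: 0 = pulled to ground by the PSU-side plug, 1 = open/floating
# (which is also what a missing or badly seated plug looks like).

def permitted_watts(sense0: int, sense1: int) -> int:
    """Return how much power the card may draw based on the sideband pins."""
    if (sense0, sense1) == (0, 0):
        return 600                # fully seated, PSU advertising the top rating
    if (sense0, sense1) == (1, 1):
        return 150                # default/safe state - also what an unseated plug reads as
    return 300                    # mixed states map to 300/450 W in the spec table
                                  # (simplified here; exact mapping omitted)

def allow_boot(sense0: int, sense1: int, target_power_w: int) -> bool:
    """The 'don't boot if the cable isn't fully plugged in' behaviour being asked for."""
    return permitted_watts(sense0, sense1) >= target_power_w

print(allow_boot(1, 1, target_power_w=450))   # False: refuse to run a 450 W card
print(allow_boot(0, 0, target_power_w=450))   # True
```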
 

PlaneInTheSky

Commendable
BANNED
Oct 3, 2022
556
759
1,760
"the cables we sold are melting"
meh, just ignore those customers

"people are saying it's a fire hazard"
meh

"a cable actually caught fire"
meh

"someone started a lawsuit"
ALL HANDS ON DECK. GET OUR LAWYERS. DENY EVERYTHING!
SEND OUT A PRESS RELEASE "SAFETY IS OUR PRIORITY!"
 

BillyBuerger

Reputable
Jan 12, 2021
167
86
4,660
... There is a reason we had 8 pin connectors. The 8 pins leave much more headroom and are far safer. There is still plenty of PCB real estate to include 4 8-pins...

Except that the 8-pin connectors only have 6 pins related to power. The other 2 are sense pins. Just like the 6-pin connector only has 4 power pins with 1 sense and 1 not connected (although usually connected to 12V). The 12VHPWR connector has 12 power pins + 4 sense pins. That's 2x the power pins of an 8-pin connector. Although that still means 2x the power per pin compared to 4x 8-pin connectors.

But then the 8-pin EPS connector for CPUs is rated for almost 400W. So the 12VHPWR connector at 600W works out to similar current per pin as EPS, and EPS hasn't been a problem. They definitely are pushing the limits here, and using more pins would definitely lessen the impact of these issues. But I don't think the power per wire is really the problem; it's apparently just a bad connector design that makes it too easy for the plug not to be fully seated.
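A rough per-pin comparison of those figures; the ~384 W EPS rating is an assumption based on commonly quoted high-current terminal ratings, not a spec citation:

```python
# Per-pin current comparison (assumed ratings; the EPS figure in particular
# varies with terminal type, ~336-384 W is commonly quoted).
v = 12.0

eps_power_w, eps_12v_pins = 384, 4        # 8-pin EPS, high-current terminals assumed
hpwr_power_w, hpwr_12v_pins = 600, 6      # 12VHPWR at its full rating
pcie8_power_w, pcie8_12v_pins = 150, 3    # classic PCIe 8-pin, for reference

for name, p, pins in [("EPS 8-pin", eps_power_w, eps_12v_pins),
                      ("12VHPWR", hpwr_power_w, hpwr_12v_pins),
                      ("PCIe 8-pin", pcie8_power_w, pcie8_12v_pins)]:
    print(f"{name:>10}: {p / v / pins:.2f} A per 12V pin")
# EPS 8-pin: 8.00 A, 12VHPWR: 8.33 A, PCIe 8-pin: 4.17 A
```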
 
I still find it funny how they blame the users, when before this 16-pin connector I've never heard of anyone having melting issues when using 3x 8-pins to power a big GPU (like many RTX 3090s).

So people were smart enough to plug in 3 connectors, but now they became dumb and can't even plug in 1 correctly?

The connector is hard to plug in, and it's hard to see if it's done correctly. The board design and cooling system make it really hard to see if it was done right. And this is usually done inside a case, which makes things a little bit harder.
 

Wrss

Prominent
Dec 19, 2021
11
10
515
So, the PCI-SIG basically said "we just make the spec, it's your problem to make sure it is safe". That's just such an inane cya statement. The spec is clearly defective in this case. Nvidia is also liable because they ignored all the warning signs too. But PCI-SIG is not blameless here. Anyone with half a brain (or more!) can see that that connector is simply not capable of safely carrying that much current. The spec needs to be withdrawn and retracted. The responsible parties at PCI-SIG (and Nvidia) need to get the boot.
Let's not assume half a brain is sufficient to evaluate matters of electricity, yeah?

The connector itself is electrically and thermally fine: 50A spread across 6 pins, plus the same on 6 return pins, with conduction into the thermoplastic/receptacles and open airflow. It is the wiring in between that is not necessarily as well cooled. But still, 8.33 amps through 1 mm diameter (18-gauge) copper dissipates less than 1 watt per linear foot. That's just to give you an idea of how little heat we're talking about.

What it comes down to is who is responsible for defending against a partially plugged connector, where the heat levels can easily go up 10-fold. Is it the user for not knowing how to plug it in, or the connector designer for not designing a fail-safe, or the manufacturer for not making an intuitive latch or instruction manual?
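A quick check of both figures, with assumed resistances (the wire value is the standard 18 AWG table figure; the contact resistances are illustrative guesses):

```python
# Checking the two claims above with assumed resistances.
i = 8.33                      # amps per 12V wire at 600 W
r_wire_per_ft = 0.006385      # ohms per foot of 18 AWG copper (standard table value)
print(f"wire: {i**2 * r_wire_per_ft:.2f} W per linear foot")   # ~0.44 W, well under 1 W

# A partially seated pin concentrates heat at the contact: dissipation scales
# linearly with contact resistance (P = I^2 * R). Assumed milliohm values:
r_contact_good = 0.0015       # ~1.5 mOhm for a fully seated terminal (assumption)
r_contact_bad = 0.015         # ~15 mOhm for a partial/angled contact (assumption)
print(f"contact heat: {i**2 * r_contact_good * 1000:.0f} mW seated vs "
      f"{i**2 * r_contact_bad * 1000:.0f} mW partial (~10x)")
```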
 

waltc3

Reputable
Aug 4, 2019
420
223
5,060
Most connectors today (of the various kinds) are idiot-proof. Meaning two things to me: the user can only plug the connector in correctly as an incorrect attempt will simply not insert, and the second idiot-proof feature is a latch that snaps audibly with a satisfying, tactile "click" when properly seated.

This is not something new or strange and bizarre. Naturally, you wonder how any GPU company overlooks this entirely in a $1600+ product line, no less. If it is true that incomplete insertion is the cause of the electrical problem, then apparently no attention was given this fact during design and marketing. Since nVidia confirms that it was aware of the problem prior to shipping, it simply adds mystery to the situation. But, otoh, if the designer knew that incomplete insertions explained only a fraction of the "melting plug" problems, then that could explain why an idiot-proof plug was apparently not considered prior to shipping.

Just very surprising to see this, imo.
 

Xenx

Distinguished
Dec 31, 2007
7
8
18,515
Most connectors today (of the various kinds) are idiot-proof. Meaning two things to me: the user can only plug the connector in correctly as an incorrect attempt will simply not insert, and the second idiot-proof feature is a latch that snaps audibly with a satisfying, tactile "click" when properly seated.

This is not something new or strange and bizarre. Naturally, you wonder how any GPU company overlooks this entirely in a $1600+ product line, no less. If it is true that incomplete insertion is the cause of the electrical problem, then apparently no attention was given this fact during design and marketing. Since nVidia confirms that it was aware of the problem prior to shipping, it simply adds mystery to the situation. But, otoh, if the designer knew that incomplete insertions explained only a fraction of the "melting plug" problems, then that could explain why an idiot-proof plug was apparently not considered prior to shipping.

Just very surprising to see this, imo.
There is feedback, but it is subpar. There also isn't anything inherently incorrect with the way the plug is designed. It works the same way the 6/8-pin PCIe power cables do, just at a higher density. I think the oversight stems from the fact that they have a bunch of people who know what they're doing working on it. If you know what the feedback should be, and how it should connect, it's easy to test it correctly and harder to test it incorrectly. Just look at how much effort GamersNexus put into making it fail before they figured it out.
 

Wrss

Prominent
Dec 19, 2021
11
10
515
Most connectors today (of the various kinds) are idiot-proof. Meaning two things to me: the user can only plug the connector in correctly as an incorrect attempt will simply not insert, and the second idiot-proof feature is a latch that snaps audibly with a satisfying, tactile "click" when properly seated.

Part of this is that old connectors have undergone iterative revision. Early SATA data cables didn't even have a latch and would disconnect fairly easily. That said, no connector is idiot-proof, and most PSU cables today are still so stiff to insert that you don't get a clean audible click - talking about the ATX 24-pin and PCIe 8-pin, even the 4-pin "Molex". The reason for the tough insertion is to ensure low contact resistance.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,280
810
20,060
This is largely a UX / Design issue with the plug.

Through all the testing, they didn't account for the most incompetent of end users.

They also could've made things easier / more idiot proof.
[attached image]
That would've made it that much harder to screw up insertion.

Make a latch design that has a "Nice Loud Latching Click Sound".

Or make the plug "High Contrast" so it's easier to see if the latch is even moving and going to clip onto the locking tab.
[attached images]
 
Apr 1, 2020
1,394
1,050
7,060
I'm unsure why they went with a straight push-in plug instead of a connector where you plug and twist until it locks, akin to a BNC connector. Failing that, even something like a DisplayPort-style locking connector would have been better than what's being used, if indeed the problem is actually people not fully plugging in the connector (which I still doubt). It may be a bit more bulky, but when you're dealing with such high-powered connectors, a little bit of extra security is never a bad thing.

[attached image: BNC-style connector]
 

PiranhaTech

Reputable
Mar 20, 2021
134
86
4,660
I agree... to a small extent. The GPU makers should invest in testing.

However, I would say to at least include a card in the box that says something like: to prevent your GPU from going up in flames, make sure the connector is fully plugged in.
 

Alpha_Lyrae

Commendable
Nov 13, 2021
18
15
1,515
Except that the 8-pin connectors only have 6 pins related to power. The other 2 are sense pins. Just like the 6-pin connector only has 4 power pins with 1 sense and 1 not connected (although usually connected to 12V). The 12VHPWR connector has 12 power pins + 4 sense pins. That's 2x the power pins of an 8-pin connector. Although that still means 2x the power per pin compared to 4x 8-pin connectors

12VHPWR is essentially the same as 2x 8-pins. An 8-pin has 3x12V/3xGND, i.e. three 12V power rails. So 2x 8-pin has six 12V rails, which is exactly what the 12VHPWR connector has. The only difference is the rated amperage per 12V rail: the 8-pin allows 4.16A and 12VHPWR allows up to 8.33A. The 8-pin could also support 8.33A if updated with smart sense.

8-pin: 4.16A * 3 rails * 12V = 150W
2x8-pin: 4.16A * 6 rails * 12V = 300W
16-pin: 8.33A * 6 rails * 12V = 600W
12-pin: 6.25A * 6 rails * 12V = 450W
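
The same arithmetic as a runnable snippet, using the per-pin amperages quoted in the post above:

```python
# Rail arithmetic from the thread in one place (rated amps per 12V pin are
# the figures quoted above, not independently verified here).
v = 12.0
configs = {
    "8-pin":   (4.16, 3),   # PCIe 8-pin: ~150 W
    "2x8-pin": (4.16, 6),   # two PCIe 8-pins: ~300 W
    "12-pin":  (6.25, 6),   # Ampere FE 12-pin: 450 W
    "16-pin":  (8.33, 6),   # 12VHPWR: 600 W
}
for name, (amps, rails) in configs.items():
    print(f"{name:>8}: {amps} A x {rails} x {v:.0f} V = {amps * rails * v:.0f} W")
```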
 

TJ Hooker

Titan
Ambassador
I still find it funny how they blame the users, when before this 16-pin connector I've never heard of anyone having melting issues when using 3x 8-pins to power a big GPU (like many RTX 3090s).
The RTX 3090 FE (and other high-end RTX 3000 FEs) did use this 16-pin connector (well, minus the 4 sense pins, which don't deliver power anyway). The 3090 Ti even had the same 450W TDP as the 4090.
 

watzupken

Reputable
Mar 16, 2020
1,007
507
6,070
How about improving the connector so that there is no way for people to fail to plug it in properly to begin with? Nvidia basically rode on GN's testing to say that it is a user problem, yet failed to address Steve's feedback about improving the connector to make sure it locks down properly, or adding a way to detect poor connectivity. As I recall, Steve reported at one stage of the testing that he pushed the connector down firmly, and while it seemed like it was seated/connected properly, it was not.