News: Igor's Lab Says Nvidia's 16-Pin Adapter Is to Blame for the RTX 4090 Fire Hazard

OK, no. As much as I'd like to tag Nvidia with this, the problem is still the connector standard itself. Simply allowing that amount of power into such itty bitty little contact points is in and of itself a serious flaw. There is a reason the wires of the four 8-pin plugs are that thick: they carry a lot of current and allow for a lot of tolerance. You are taking four massive power cables and mashing them all into one little connector. Doesn't work. The contact surface area of a single 8-pin connector is roughly equivalent to the contact area of that entire 16-pin connector. It is, quite simply, a bad design.
 
The fact that Nvidia has not even sent out a warning, let alone recalled the products, is very disturbing.

Nvidia is weighing the chance of someone's house catching fire against the profits it would lose by admitting the product is a fire hazard.

I am very uncomfortable with how this is playing out. I have not seen this in any other industry: a product is clearly unsafe and could cause a large fire... and the company flat out does not respond, acknowledge the issue, or do a recall.

The moment these connectors started overheating and melting, there should have been an immediate announcement from Nvidia telling people to stop using the product until further notice.

If this ever does cause a house fire, Nvidia is going to get sued into oblivion.
 
From what I understand, the 16-pin form factor is part of the issue, though. After all, an 8-pin power connector is supposed to carry 150 W at most according to the PCIe spec, isn't it? And of those 8 pins, only 3 carry the 12 V power, don't they? So that's 12 power pins needed for 600 W, which get pushed onto just 6 power pins on the 16-pin socket.

And then using a power adapter that is less robust than the 8-pin connectors (and putting the GPU socket in a spot where the air is warmer than elsewhere) isn't helpful, of course. But using only 6 pins to receive 600 W goes well beyond the per-pin load the old standard allowed, even if 8-pin connectors have been shown to be able to carry more than 150 W.
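To put that in per-pin terms, here is a minimal back-of-envelope sketch, assuming a 12 V rail, 3 power pins per 8-pin connector at the 150 W spec, and 6 power pins on the 16-pin connector at 600 W:

```python
# Back-of-envelope per-pin load comparison.
# Assumptions: 12 V rail, 150 W per 8-pin (3 power pins),
# 600 W on a 16-pin (6 power pins), load shared evenly.

RAIL_V = 12.0

def amps_per_pin(total_watts: float, power_pins: int) -> float:
    """Current each power pin carries if the load is split evenly."""
    return total_watts / RAIL_V / power_pins

print(f"8-pin at 150 W : {amps_per_pin(150, 3):.2f} A per pin")   # ~4.2 A
print(f"16-pin at 600 W: {amps_per_pin(600, 6):.2f} A per pin")   # ~8.3 A
# Roughly double the current per pin, on physically smaller contacts.
```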
 
From what I understand, the 16-pin form factor is part of the issue, though. After all, an 8-pin power connector is supposed to carry 150 W at most according to the PCIe spec, isn't it? And of those 8 pins, only 3 carry the 12 V power, don't they? So that's 12 power pins needed for 600 W, which get pushed onto just 6 power pins on the 16-pin socket.

And then using a power adapter that is less robust than the 8-pin connectors (and putting the GPU socket in a spot where the air is warmer than elsewhere) isn't helpful, of course. But using only 6 pins to receive 600 W goes well beyond the per-pin load the old standard allowed, even if 8-pin connectors have been shown to be able to carry more than 150 W.
Instead of speculating wildly in public, David, why don't you read the source article, which was written by someone who knows what they're talking about and has analysed the situation very thoroughly?
 
$1600-2000 for a card and they cheap out on the adapter. There's no excuse for this. They could easily raise the price by $20-40 and not have a house burn down.

I didn't mind them using a new connector; however, I didn't expect these issues to crop up. I was a little nervous about the 4-to-1 connector, but it probably would have been fine... right? No. It really should have been a 90-degree connector, or the power connector should have been placed more toward the side of the card.

Hopefully Nvidia will make it simple and say, "Got a card? Want an adapter? Contact us."
 
In that case, why don't we hear about such problems with Corsair's native PCIe 5 PSU-to-16-pin cables? Or Seasonic's?

Your n is going to be relatively small for the ones you mentioned. You would need equal numbers to see if a significant difference exists.
 
From what I understand, the 16-pin form factor is part of the issue, though. After all, an 8-pin power connector is supposed to carry 150 W at most according to the PCIe spec
The 6/8-pin AUX connectors use Molex's Mini-Fit Jr pins, and those pins are rated for 9 A each, so per Molex specs each pin can carry about 100 W at 12 V and a 6/8-pin cable can carry roughly 300 W. The PCI-SIG simply chose to run the connectors at 1/4 and 1/2 of the Molex rating.
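A quick arithmetic check of those figures (a sketch only, assuming the 9 A Mini-Fit Jr rating, a 12 V rail, and 3 power pins per connector):

```python
# Rough check of the Mini-Fit Jr headroom figures quoted above.
# Assumptions: 9 A per pin, 12 V rail, 3 power pins per 6/8-pin connector.

PIN_RATING_A = 9.0
RAIL_V = 12.0
POWER_PINS = 3

watts_per_pin = PIN_RATING_A * RAIL_V          # 108 W, i.e. roughly the "100 W" above
watts_per_cable = watts_per_pin * POWER_PINS   # ~324 W, i.e. roughly the "300 W" above

print(f"Per pin   : {watts_per_pin:.0f} W")
print(f"Per cable : {watts_per_cable:.0f} W")
print(f"6-pin spec  (75 W): {75 / watts_per_cable:.0%} of the Molex figure")   # ~23%, about 1/4
print(f"8-pin spec (150 W): {150 / watts_per_cable:.0%} of the Molex figure")  # ~46%, about 1/2
```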

The biggest issue I'm seeing pointed out in Igor's write-up is that the cables are soldered to 0.2mm-thick foil, which isn't going to stand up to much abuse from pushing or pulling on wires.
 
Do any electrical experts know if this would've flown under the radar had Nvidia stuck to the 450 W board power limit?
Some of the AIB models running higher than 500 W probably would've been in trouble, though.
 
Do any electrical experts know if this would've flown under the radar had Nvidia stuck to the 450 W board power limit?
Some of the AIB models running higher than 500 W probably would've been in trouble, though.
Igor and Jay have dissected their Nvidia HPWR adapters and appear to have just about confirmed the problem: the four incoming wires from the plugs get soldered to a 0.2 mm-thin strip, and the corner wires appear to be extremely susceptible to ripping off that strip, after which you end up with all of the power coming only from the inner two cables.
 
Igor and Jay have dissected their Nvidia HPWR adapters and appear to have just about confirmed the problem: the four incoming wires from the plugs get soldered to a 0.2 mm-thin strip, and the corner wires appear to be extremely susceptible to ripping off that strip, after which you end up with all of the power coming only from the inner two cables.
I got that much earlier. But when 1 cable is ripped off, 450 W is still safely doable, yes/no? If 2 cables were ripped off, yes, I could see 450 W being a disaster.
If the 600 W limit weren't a thing, the faulty plugs would've been less noticeable until at least 2 cables were ripped off. That's what I was trying to ask.
 
I got that much earlier. But when 1 cable is ripped off, 450 W is still safely doable, yes/no? If 2 cables were ripped off, yes, I could see 450 W being a disaster.
If the 600 W limit weren't a thing, the faulty plugs would've been less noticeable until at least 2 cables were ripped off. That's what I was trying to ask.
IIRC, they said the "bus bar" strip is 0.2 x 2 mm. 0.4 mm² of current-balancing copper to bridge all the pins is equivalent to AWG #21, which is cutting it kind of close for 8 A per pin. Just under 7 A per pin for 450 W should still be fine with three working cables. In Jay's dissection, though, he couldn't get the first solder joint to fail, but when he went to cut off the second wire, the strip ripped off on its own. If you've messed around with the adapter enough to break one spot, chances are the others aren't far behind.
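A quick sanity check of those numbers (just a sketch, assuming a 12 V rail, six 12 V pins sharing the load evenly, and the standard AWG diameter formula):

```python
# Sanity check of the bridge-strip and per-pin figures above.
# Assumptions: 12 V rail, six 12 V pins sharing the load evenly.

import math

RAIL_V = 12.0
POWER_PINS = 6

strip_area_mm2 = 0.2 * 2.0                     # reported 0.2 mm x 2 mm strip -> 0.4 mm^2

def awg_area_mm2(awg: int) -> float:
    """Cross-section of an AWG gauge: d = 0.127 mm * 92^((36 - awg) / 39)."""
    d = 0.127 * 92 ** ((36 - awg) / 39)
    return math.pi / 4 * d * d

print(f"Strip cross-section  : {strip_area_mm2:.2f} mm^2")
print(f"AWG #21 cross-section: {awg_area_mm2(21):.2f} mm^2")   # ~0.41 mm^2, a close match

for watts in (600, 450):
    print(f"{watts} W -> {watts / RAIL_V / POWER_PINS:.2f} A per pin")
# 600 W -> ~8.3 A per pin; 450 W -> ~6.3 A per pin, in the same ballpark
# as the "just under 7 A" figure above.
```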
 
OK, no. As much as I'd like to tag Nvidia with this, the problem is still the connector standard itself. Simply allowing that amount of power into such itty bitty little contact points is in and of itself a serious flaw. There is a reason the wires of the four 8-pin plugs are that thick: they carry a lot of current and allow for a lot of tolerance. You are taking four massive power cables and mashing them all into one little connector. Doesn't work. The contact surface area of a single 8-pin connector is roughly equivalent to the contact area of that entire 16-pin connector. It is, quite simply, a bad design.

100%. These adapters are not adequate to carry that much draw; the contacts are too small. End of story: they need to come up with a proper fix.
 
I can't wait to see a vendor eschew the proprietary NV connector for a 4x 8-pin setup with a custom BIOS that won't limit GPU frequencies based on the connector/PSU used.
 
Igor and Jay have dissected their Nvidia HPWR adapters and appear to have just about confirmed the problem: the four incoming wires from the plugs get soldered to a 0.2 mm-thin strip, and the corner wires appear to be extremely susceptible to ripping off that strip, after which you end up with all of the power coming only from the inner two cables.
That's just one issue. The other issue is that, according to Buildzoid of AHOC, the female terminals inside the plug use a dual-split design.

CableMod's and other 12VHPWR plugs use single-split terminals, which are a more expensive solution than dual splits.

With extra user wiggling of the adapter, he figures that the dual-split female terminals will split right open over time and increase electrical resistance, ergo melting plastic / fire.
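For anyone wondering why a slightly worse contact matters so much: the heat generated at the contact scales with I²R. A minimal illustration (the contact resistance values below are assumptions for illustration, not measurements):

```python
# Heat dissipated in a single terminal is I^2 * R.
# The contact resistances below are illustrative assumptions, not measurements.

RAIL_V = 12.0
POWER_PINS = 6
LOAD_W = 600

amps_per_pin = LOAD_W / RAIL_V / POWER_PINS          # ~8.3 A per pin at full load

for contact_mohm in (1, 5, 20):                      # tight vs. increasingly worn contact (assumed)
    heat_w = amps_per_pin ** 2 * (contact_mohm / 1000)
    print(f"{contact_mohm:>2} mOhm contact -> {heat_w:.2f} W dissipated in one tiny pin")
# Going from 1 mOhm to 20 mOhm turns ~0.07 W into ~1.4 W of heat trapped
# inside a small plastic housing, which is how plugs end up softening and melting.
```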
 
100%. These adapters are not adequate to carry that much draw; the contacts are too small. End of story: they need to come up with a proper fix.
This isn't correct. A company called GALAX did a power test with the adapter and hit a sustained 1500+ W load with reasonable temps (60-70 C).

So it's not the power draw that's the problem.
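Putting that stress test into per-pin terms (a sketch only, assuming a 12 V rail and six power pins sharing the load evenly):

```python
# Per-pin current at different total loads.
# Assumptions: 12 V rail, six power pins sharing the load evenly.

RAIL_V = 12.0
POWER_PINS = 6

for watts in (450, 600, 1500):
    print(f"{watts:>4} W -> {watts / RAIL_V / POWER_PINS:.1f} A per pin")
# 450 W -> ~6.2 A, 600 W -> ~8.3 A, 1500 W -> ~20.8 A.
# If a well-built adapter survives roughly 2.5x the spec load, the melting units
# point at build quality rather than the power draw itself.
```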
 
On the positive side, it is now known and people are aware of it, so they can anticipate and take action. That includes nVidia.

I still dislike the new power connector as it looks flimsy, but if the adapter is built in a better way and the connector itself is attached more securely to the PCB (I dislike wiggling connectors a lot), this should become a non-issue going forward even with the lower tolerances, as even Jay has demonstrated that temperature-wise it's a non-issue, even if it does get on the warmer side. I'm sure he (Jay) is happy his worries, directly or indirectly, were spot on.

Regards.
 
That's just one issue. The other issue is that, according to Buildzoid of AHOC, the female terminals inside the plug use a dual-split design.

CableMod's and other 12VHPWR plugs use single-split terminals, which are a more expensive solution than dual splits.
I'm not seeing how one terminal style would be meaningfully more expensive to make than the other; both seem like they should require three press steps to punch out of a metal strip.