News: Corsair’s PSU Expert Jonny Guru Weighs in on Nvidia Connector-Gate

That's why I not only listen for the click on 4-, 6- and 8-pin connectors, but I will also visually check around the entire plug using an extending mirror and flashlight if I can't see without it. I've had 8-pins click before without being fully connected, and the result was a dead, not-cheap GPU.
 
^^THIS^^

Although I know for a fact that dielectric grease is useful for some specific situations where it can help improve connectivity, reduce corrosion, and provide some protection from dirt & moisture, THIS AIN'T ONE OF THEM 😡

And blaming this issue on the users "not fully inserting the connector" sounds moar like nGreedia paid JG to publish a statement that shifts the blame for a poorly designed/faulty connector away from them and onto the end users, in an effort to head off all of the upcoming lawsuits that they know are being filed as we speak....

Therefore, we must now consider the possibility that Corsair, JG, and nG are all in cahoots to try to make this issue go away, OR perhaps that nG has contracted with Corsair to design & manufacture a new, improved connector & cable that will....

As the saying goes: Removed by Moderator
Jonny Guru has been around a LONG time and has never shied away from calling out manufacturers over PSU construction. I doubt this is any different. Chances are the GPU simply draws too much power (obviously), and power spikes cause problems when a couple of cables are overloaded (due to improper connections), more than this was ever an issue with any GPU in the past.

I've booted GPUs without their auxiliary connectors attached. They either don't display on boot or crash under load--so this one is still on NVidia: the card doesn't recognize when one of its MANY cables isn't even connected. Once again, the real problem is that it just draws a problematic amount of power.
 
...
  1. In my professional opinion, this is under-rated for the application (assuming a constant 600W load).
  2. Robust electrical design dictates a DOUBLE derating, meaning that if they are going to rate it for 600W, the connector should really be capable of 1200W.
  3. They are certainly riding the upper limits of the ratings with this design.
...
Which is what I keep getting from all of this. NVidia designed a card that can draw more power than the connectors can reliably provide (at least not at the necessary safety level). The problem for NVidia is that the only fix is new firmware that slows the cards down...and then they'd lose to the much cheaper Radeon 7000 series flagship, which they can't allow.

So NVidia is just staying quiet until they can work out some magical power profile in a driver that maintains performance while reducing power draw...which is probably impossible.
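For a rough sense of the margins described in the derating points quoted above, here's a quick back-of-the-envelope sketch (assuming a steady 600 W at 12 V spread across six 12 V pins, and the commonly cited 9.5 A per-pin rating; none of these numbers come from Jonny's post itself):

```python
# Back-of-envelope margin check for the 12VHPWR connector.
# Assumptions: 600 W sustained load, 12 V rail, 6 current-carrying
# 12 V pins, and the commonly cited 9.5 A per-pin rating.

POWER_W = 600.0
RAIL_V = 12.0
PINS = 6
PIN_RATING_A = 9.5

total_current = POWER_W / RAIL_V    # 50.0 A total
per_pin = total_current / PINS      # ~8.33 A per pin
headroom = PIN_RATING_A / per_pin   # ~1.14x -- nowhere near 2x

print(f"Total current:   {total_current:.1f} A")
print(f"Per-pin current: {per_pin:.2f} A vs {PIN_RATING_A} A rating")
print(f"Derating factor: {headroom:.2f}x (double derating would want 2.00x)")
```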
 
Isn't "one after the other" called "daisy chain"?

Regards.
No. Daisy chaining is when you connect multiple devices in series (e.g. if you were to have 3 PCIe cards in a system, Card 1 would receive power from the PSU, Card 2 would receive power from card 1, and card 3 would receive power from Card 2).

The 12VHPWR connector connects the conductors in parallel, which is the standard for high-power, low-voltage connectors. It's better than the previous ATX12V standards, which (due to legacy concerns) had a spaghetti of connectors that in theory would be connected to independent 12V rails. In practice, PSU manufacturers switched to single high-current 12V rails because that's what modern systems require, which is why later ATX12V standards were updated to match.

No connector can be made foolproof, as the universe will always supply a new standard of fool to compensate. It's preferable to design an optimised connector (and the 12VHPWR is already overbuilt, capable of losing half its conductors without failure) and accept that some percentage of users will find "plug it all the way in" too difficult, rather than to produce an exquisitely overengineered connector with spring-loaded busbars, rotary preload applicators and multiply redundant latches... that some percentage of users will still fail to plug all the way in.
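To make the "parallel" point concrete, here's a minimal sketch (with made-up contact resistances, purely illustrative) of how current divides among pins that are shorted together at both ends, and what happens when one contact goes high-resistance:

```python
# Minimal current-sharing sketch for parallel 12V pins.
# Contact resistances are made-up illustrative values, not measurements.

def share_currents(total_a, resistances_mohm):
    """Current through each pin is inversely proportional to its resistance."""
    g = [1.0 / r for r in resistances_mohm]
    g_total = sum(g)
    return [total_a * gi / g_total for gi in g]

TOTAL_A = 50.0               # 600 W / 12 V
good = [5.0] * 6             # six well-seated pins, ~5 mOhm each
bad = [5.0] * 5 + [50.0]     # one poorly seated pin at 10x the resistance

print([f"{i:.2f} A" for i in share_currents(TOTAL_A, good)])
# -> six pins at ~8.33 A each
print([f"{i:.2f} A" for i in share_currents(TOTAL_A, bad)])
# -> the bad pin drops to ~0.98 A while the other five rise to ~9.80 A,
#    i.e. the load redistributes and the good pins run ~18% above even share.
```

That silent redistribution is exactly why a badly seated connector can keep "working" right up until it doesn't.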
 
. . .
Nvidia Remains Quiet
It has been more than two weeks since the first 16-pin 12VHPWR melting adapter problem reared its smoldering head. While various third parties and tech experts have come forth with reasonable theories about the underlying issue(s), Nvidia and its partners have been conspicuously quiet.

The last we heard was that Team Green was still carefully examining melted samples in their labs. Given we expect to see continued use of the 16-pin adapter on Nvidia's upcoming RTX 4080 as well as the future RTX 4070 Ti, it would be great to get some official news about these issues from Nvidia and its board partners.
. . .
Even forty years ago this practice was unacceptable. Since then, Big Global Corp has managed to gut everything from usury laws to banking and consumer protections. Time for some serious anti-trust actions against the worst offenders ...
 
The problem for NVidia is that the only fix is new firmware that slows the cards down...and then they'd lose to the much cheaper Radeon 7000 series flagship, which they can't allow.
I strongly doubt this. We don't know how RX 7900 XTX performs yet, but all indications from AMD last week are that it will not compete with the RTX 4090. At all. (Except at 1080p and even 1440p where things become CPU limited.)

Just a few points to prove this out:
  1. 96 CUs and 12288 shaders @ 2.3 GHz vs. 128 SMs and 16384 shaders at 2.52 GHz
  2. AMD's RDNA 3 architecture did what Ampere did with FP32, so now there's double the FP32 per CU but the same INT32
  3. Meaning, when we look at 61.4 TFLOPS for AMD vs 82.6 TFLOPS for Nvidia, they're going to be more comparable than RDNA2's 23.7 TFLOPS vs. Ampere's 40 TFLOPS
  4. And if that's not enough, 76.3 billion transistors vs. 58 billion, and AMD had to use some of that for the Infinity Fabric links between GCD and MCDs
I think 7900 XTX has a good shot at the 4080, particularly in non-RT games. But AMD effectively admitted it didn't think Nvidia would have a chip as big as AD102. In other words, Nvidia could have dropped power use by 20-30% and performance by 5-10% and it still would have been fine.
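For anyone wanting to check those TFLOPS figures, they fall straight out of shader count × 2 FLOPs per clock (one fused multiply-add) × clock speed. Note that the 61.4 figure implies AMD's ~2.5 GHz boost clock rather than the 2.3 GHz game clock listed above:

```python
# FP32 throughput: shaders * 2 FLOPs/clock (one fused multiply-add) * GHz.
def fp32_tflops(shaders, clock_ghz):
    return shaders * 2 * clock_ghz / 1000.0

print(f"RTX 4090 (AD102): {fp32_tflops(16384, 2.52):.1f} TFLOPS")  # ~82.6
print(f"RX 7900 XTX:      {fp32_tflops(12288, 2.5):.1f} TFLOPS")   # ~61.4
```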
 
yeah...no.
If you spend $1,500 on a GPU, it shouldn't require buying grease just to PLUG something in.

If your plug is the issue, then the plug needs revising.
The seller's responsible for making sure it's idiot-proof.

It isn't the user's fault the seller wanted to make a new adapter to save space on the PCB... these cards are massive already; it wouldn't have changed anything to make them fit 3 or 4 8-pins.
Exactly this. It's easy to blame the victim.

If the adapter (or straight PSU cable, in some cases) is so difficult to plug in fully that it ends up being improperly seated, leading to uneven contact, uneven load, and eventually melting, then the connector design is at fault.
Buying extra grease just covers up the issue instead of fixing it.
 
No. Daisy chaining is when you connect multiple devices in series (e.g. if you were to have 3 PCIe cards in a system, Card 1 would receive power from the PSU, Card 2 would receive power from Card 1, and Card 3 would receive power from Card 2).

The 12VHPWR connector connects the conductors in parallel, which is the standard for high-power, low-voltage connectors. It's better than the previous ATX12V standards, which (due to legacy concerns) had a spaghetti of connectors that in theory would be connected to independent 12V rails. In practice, PSU manufacturers switched to single high-current 12V rails because that's what modern systems require, which is why later ATX12V standards were updated to match.

No connector can be made foolproof, as the universe will always supply a new standard of fool to compensate. It's preferable to design an optimised connector (and the 12VHPWR is already overbuilt, capable of losing half its conductors without failure) and accept that some percentage of users will find "plug it all the way in" too difficult, rather than to produce an exquisitely overengineered connector with spring-loaded busbars, rotary preload applicators and multiply redundant latches... that some percentage of users will still fail to plug all the way in.
It's not "parallel", because that would require for a single source to be split evenly/equally to their destinations; the idea of a "parallel" connection is if one is disconnected, the rest will work and absorb the load based on the "source balance". In the diagram you can see they are going to the same choke point where if the cable is attached to either end of the "chain" (or solder bundle/group), it would use that pin first and the rest would flow as accepted/required by the circuitry or not at all. That is not balanced to classify as "parallel" to me.

It's a subtle, but important difference in this case.

Regards.
 
Exactly this. It's easy to blame the victim.

If the adapter (or straight PSU cable, in some cases) is so difficult to plug in fully that it ends up being improperly seated, leading to uneven contact, uneven load, and eventually melting, then the connector design is at fault.
Buying extra grease just covers up the issue instead of fixing it.
Yup, the pins and holes should be ever so slightly tapered, just a little smaller at the bottom than at the top. This would eliminate the too-tight fit from parts that sit at opposite ends of the tolerance range and don't actually fit correctly.
 
It's not "parallel", because that would require for a single source to be split evenly/equally to their destinations; the idea of a "parallel" connection is if one is disconnected, the rest will work and absorb the load based on the "source balance". In the diagram you can see they are going to the same choke point where if the cable is attached to either end of the "chain" (or solder bundle/group), it would use that pin first and the rest would flow as accepted/required by the circuitry or not at all. That is not balanced to classify as "parallel" to me.

It's a subtle, but important difference in this case.

Regards.
All conductors are shorted at the source end (internal to the PSU, same supply rail), at the device end (same rail feeding the GPU VRMs), and on both sides of the connector. The only reason there is any appearance of separation is that the cable is made of multiple smaller-diameter wires rather than a couple of large-diameter conductors, to increase flexibility.
Yup, the pins and holes should be ever so slightly tapered, just a little smaller at the bottom than at the top. This would eliminate the too-tight fit from parts that sit at opposite ends of the tolerance range and don't actually fit correctly.
Tapered pins/sleeves would not help: a connector that isn't fully seated would still allow relative movement. It would also dramatically increase the connector's unit cost due to the need for a post-machining step to add the conical undercut (the mould angles the undercut from the opposite side in order to produce the features needed for latching the crimped pins internally).
 
All conductors are shorted at the source end (internal to the PSU, same supply rail), at the device end (same rail feeding the GPU VRMs), and on both sides of the connector. The only reason there is any appearance of separation is that the cable is made of multiple smaller-diameter wires rather than a couple of large-diameter conductors, to increase flexibility.
Look at the diagram. The PSU has nothing to do with what I am talking about.

Unless they do something similar internally to create the "single" 16-pin connection, which would be really stupid.

Regards.
 
Isn't this the guy who already admitted that he had seen the melting cables? JayzTwoCents showed a Discord comment where he replied something like, "we just replaced the cable and continued testing".....

So what was causing those cables to melt???
 
Look at the diagram. The PSU has nothing to do with what I am talking about.

Unless they do something similar internally to create the "single" 16-pin connection, which would be really stupid.

Regards.
Internal to the PSU, there is a single 12V rail. All 12V pins (be they on a PCIe power connector, CPU power connector, 24-pin ATX connector, SATA connector, etc.) are shorted to that same rail.
On the GPU side, all 12V pins are shorted together to form the 12V bus that feeds the VRMs (with the exception of the isolated phases that connect to the PCIe card-edge connector, to power-limit the traces via the motherboard).
The same applies with GND, which is all shorted together throughout the system or you have ground loop issues.

Whilst the user-facing spaghetti that exits the PSU's case is separate wires, they are no more at different potentials, or in any other way isolated, than the multiple strands within each wire. The only reason you have a connector that uses six 16AWG wires rather than a single 8AWG wire is that it is easier to bend a bundle of 16AWG wires than a single 8AWG wire.
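The 6×16AWG vs. 1×8AWG point checks out against standard copper wire tables (the resistance values below are typical published figures, not from this thread):

```python
# Typical copper resistance from standard AWG tables, in ohms per km.
R_OHM_PER_KM = {16: 13.17, 8: 2.06}

six_parallel_16awg = R_OHM_PER_KM[16] / 6   # parallel wires divide resistance
print(f"6 x 16AWG in parallel: {six_parallel_16awg:.2f} ohm/km")  # ~2.20
print(f"1 x 8AWG:              {R_OHM_PER_KM[8]:.2f} ohm/km")     # ~2.06
# Nearly the same copper cross-section either way; the bundle exists for
# flexibility, not electrical separation.
```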
 
Internal to the PSU, there is a single 12V rail. All 12V pins (be they on a PCIe power connector, CPU power connector, 24-pin ATX connector, SATA connector, etc.) are shorted to that same rail.
On the GPU side, all 12V pins are shorted together to form the 12V bus that feeds the VRMs (with the exception of the isolated phases that connect to the PCIe card-edge connector, to power-limit the traces via the motherboard).
On the PSU side, different things may be on separate over-current protection circuits, and on some of the higher-powered PSUs, rails may be fed by different HF transformers.

On the GPU side, while the 4090 FE shorts all 12V pins together, the 3090 Ti and many older models split VRM phases between 12V pin pairs. Had the 4090 FE continued to split VRM phases three ways across the connector, perhaps many of the melted connectors could have been avoided: poor contact would cause the affected VRMs to fail to deliver sufficient power, crashing the card and forcing the user to investigate before physical damage occurred.
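A toy model of that fail-safe argument (all capacities and wattages below are made up for illustration, not real board figures): with a shorted bus, losing a pin pair just shifts current onto the survivors and the card keeps running while the contacts cook; with phases split per pair, losing a pair drops a third of the VRM and the card faults immediately.

```python
# Toy model of the two input topologies; capacities/demand are made up.
DEMAND_W = 350.0          # hypothetical sustained board power
PAIR_CAPACITY_W = 200.0   # hypothetical safe delivery per pin pair

def shorted_bus(good_pairs: int) -> str:
    # All pins shorted: surviving pairs silently absorb the whole load,
    # so the card keeps running even while a bad pair overloads the rest.
    return "runs" if good_pairs * PAIR_CAPACITY_W >= DEMAND_W else "crashes"

def split_phases(good_pairs: int) -> str:
    # Each pair feeds only its own VRM phases: lose a pair, lose those phases.
    return "runs" if good_pairs == 3 else "crashes (fault surfaces early)"

for pairs in (3, 2):
    print(f"{pairs}/3 pairs good -> shorted bus: {shorted_bus(pairs)}, "
          f"split phases: {split_phases(pairs)}")
```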
 
I strongly doubt this. We don't know how RX 7900 XTX performs yet, but all indications from AMD last week are that it will not compete with the RTX 4090. At all. (Except at 1080p and even 1440p where things become CPU limited.)

Just a few points to prove this out:
  1. 96 CUs and 12288 shaders @ 2.3 GHz vs. 128 SMs and 16384 shaders at 2.52 GHz
  2. AMD's RDNA 3 architecture did what Ampere did with FP32, so now there's double the FP32 per CU but the same INT32
  3. Meaning, when we look at 61.4 TFLOPS for AMD vs 82.6 TFLOPS for Nvidia, they're going to be more comparable than RDNA2's 23.7 TFLOPS vs. Ampere's 40 TFLOPS
  4. And if that's not enough, 76.3 billion transistors vs. 58 billion, and AMD had to use some of that for the Infinity Fabric links between GCD and MCDs
I think 7900 XTX has a good shot at the 4080, particularly in non-RT games. But AMD effectively admitted it didn't think Nvidia would have a chip as big as AD102. In other words, Nvidia could have dropped power use by 20-30% and performance by 5-10% and it still would have been fine.
I read something yesterday or the day before that said the 7900 XTX would only be 10% slower. That's what I was basing my statement on. If the gap is that big, then they should drop power usage by 10% to improve their current draw problems.

Here's one source:
https://www.notebookcheck.net/AMD-s...extrapolated-preliminary-graphs.666320.0.html
 
The purpose of the dielectric grease is to make complete insertion of the plug much easier by lubricating the plastic, which would literally "fix it".
It may fix it for people willing to shell out extra for dielectric grease out of their pocket to make full penetration easier, but most people who object to it feel like they are already getting screwed hard enough by the retail price that the lube shouldn't feel like a necessary added cost.

If this really turns out to be the issue, then the simple fix would be to send every owner of HPWR adapters and cables a 5g lube packet with instructions to apply the lube if they are experiencing difficulties with their mating situation.
 
TL;DR: I am sure there will be games where the 7900 XTX is only 10% slower than a 4090. I am equally sure there will be games where the 4090 is closer to double the performance (Cyberpunk 2077 with DXR fully enabled being a prime example). Which games are "most important" will be an entirely subjective opinion. Is the performance hit of DXR worth the visual upgrade? On an RX 6000-series or RTX 20-series GPU, the answer will often be "not really," but on a card like the 4090 you suddenly have so much performance that things that previously made no sense start to seem like a good idea. That's the short summary. The longer opinion / reasoning follows...

That linked article is almost laughable. "...10% slower on average versus RTX 4090 according to extrapolated preliminary graphs." It says it right in the title. Extrapolated graphs. Based on what? AMD's own benchmarks! News flash: AMD isn't going to lead with its worst-case results. It included three DXR games in one of the slides showing a 50-70% performance uplift over the 6950 XT, but they're all lighter DXR games. It also included Cyberpunk 2077, one of the most demanding games around if you enable DXR, but left DXR off! It did however show a different slide with DXR RT-Ultra settings:

[AMD slide: Cyberpunk 2077 at 4K RT-Ultra, RX 7900 XTX vs. RX 6950 XT]

Based on the native performance, looking at those pixel heights, the RX 7900 XTX gets around 21 fps (20.6 if you want less rounding) at 4K RT-Ultra settings. The RX 6950 XT meanwhile only gets around 13 fps (12.5 fps with more precision). Guess what? I've already tested an RTX 4090 at precisely these settings, and I've also tested an RX 6950 XT! Let's see what my numbers say:

[Chart: Tom's Hardware Cyberpunk 2077 4K RT-Ultra benchmark results]

Look at the RX 6950 XT: 12.7 fps, or nearly exactly in line with AMD's revealed performance. And the 4090 got 44.1 fps, more than double the RX 7900 XTX. (20.6 fps puts it at the same level as an RTX 3080 12GB / 3080 Ti. Ouch!) Yes, FSR will improve performance, but the same applies to DLSS. I show DLSS Quality mode performance on the 4090 of 78 fps, but AMD is using FSR Performance mode. DLSS Performance mode gets into the 95 fps range (https://cdn.mos.cms.futurecdn.net/Xqr3ea83xKDM5f32tBc2NR.png), and that's at more demanding Psycho Lighting settings. DLSS3 Performance mode would then boost that to 141 fps. Still a massive jump in either case over the 62 fps figure AMD shows for the 7900 XTX.

I could try to run tests with Dying Light 2 and Hitman 3, but it's less clear what sequence was used for testing (DL2 doesn't have a built-in benchmark). And it's still only three games. We'll see around December 13 just how well AMD's new GPUs stack up to the competition. I suspect they'll give the 4080 something to chew on. I suspect they'll be better values than the 4090, especially at street prices. But I also suspect there will be cases where the 4090 is easily 50% faster than the XTX, and for those who want the fastest GPU possible (melting adapter optional!), I think AMD will need a bigger chip than Navi 31's GCD.
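For the curious, that cross-check is simple ratio arithmetic on the figures quoted above:

```python
# Cross-check using the numbers quoted above.
slide_xtx, slide_6950 = 20.6, 12.5          # read off AMD's slide bar heights
measured_6950, measured_4090 = 12.7, 44.1   # independently tested results

# AMD's slide scales almost perfectly onto the measured 6950 XT result:
est_xtx = measured_6950 * (slide_xtx / slide_6950)
print(f"Estimated RX 7900 XTX: {est_xtx:.1f} fps")              # ~20.9
print(f"RTX 4090 advantage:    {measured_4090 / est_xtx:.2f}x")  # ~2.1x
```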
 
...Psycho Lighting settings. DLSS3 Performance mode would then boost that to 141 fps. Still a massive jump in either case over the 62 fps figure AMD shows for the 7900 XTX.
I can't even consider DLSS3.
The extra artifacting in HUD elements and perspective/camera changes is horrible. Maybe I'm in the minority here, but I immediately disregard any FPS numbers shown in conjunction with DLSS3 (and will do the same with FSR3).
It sucks that AMD's FSR3 is following in the footsteps of the algorithmically-generated (I refuse to call it AI) DLSS3.

Maybe it will get better, but I'm just too sensitive to artifacting. Luckily, it's an added feature that can just be turned off.
 
It is the PCI-SIG CEM 5.0 plug and Intel's ATX 3.0 HPWR plug too.

Unless the PCI-SIG, Intel and Nvidia decide to recall the HPWR connector, this thing will eventually end up on $200 GPUs, so the plugs need to be kept low-cost. While I can imagine many ways of enforcing (close-enough-to-)fully inserted before allowing power, I cannot think of any example in daily life of a connector that actually does.

USB-C power will only function when fully inserted, and we could add a screw lock or rotational lock to it. (The latest phone from Xiaomi charges at 200 watts over USB-C.) Either way, we can design USB-C-like power connectors for GPU cards.

Headphone jacks and mic jacks also deliver power, and they only function when fully inserted...
We already have the concept.
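Worth noting that the 12VHPWR sideband already has a hook for this: two sense lines whose open/ground states tell the card what the cable can deliver. Here's a sketch of how a card could gate its power limit on them. The wattage table follows common reporting of the PCIe CEM 5.0 spec; double-check the exact mapping against the spec itself before trusting the middle two rows.

```python
# Sketch of gating GPU power on the 12VHPWR SENSE0/SENSE1 sideband lines.
# Wattage mapping follows common reporting of PCIe CEM 5.0; verify the
# middle two rows against the actual spec table.
SENSE_TABLE_W = {
    ("gnd", "gnd"):   600,
    ("gnd", "open"):  450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,
}

def power_limit_w(sense0: str, sense1: str) -> int:
    # A partially seated connector tends to leave sense lines floating,
    # so anything unrecognised falls back to the lowest tier.
    return SENSE_TABLE_W.get((sense0, sense1), 150)

print(power_limit_w("gnd", "gnd"))    # fully seated 600 W cable -> 600
print(power_limit_w("open", "open"))  # floating sidebands       -> 150
```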
 
It may fix it for people willing to shell out extra for dielectric grease out of their pocket

Should be under $2 for a packet at any auto parts counter or big box store.

Permatex dielectric grease is available for $3.99 with free one-day shipping on Amazon.

And yes, Nvidia should be able to throw a packet in for 25 cents. (Or pass the cost on to the AIB companies, since that's the way they roll.)

If you have bought a 4090, your pocketbook is probably going to be fine.