First credible report of RTX 5090 FE with melted connector appears — third-party cable likely cause

Time to switch from 12V to 24V, and the problem is solved for the next decade or two, when GPUs will consume 2.5-3 kilowatts. By the way, NVIDIA has already switched to even higher voltages in its server GPUs.
 
I'm wondering how many PSUs with a 600W 12VHPWR/12V-2x6 connection are actually properly designed for it. When I was picking out a PSU, I purposely looked for one which properly limited that connector according to Intel's specifications. Unfortunately, the cable power rating is an optional part of the ATX 3.x specification, even though it states an 1100W minimum for a 600W cable. Aris from Hardware Busters/Cybenetics has brought up multiple times how nobody pays attention to this portion of the specification. While 1000W should be plenty, I'm always leery whenever companies just dismiss specifications.
 
Which, tbf, the cable that can be purchased from Seasonic for use on their old ATX 2.0 PSUs is rated for use with their 650W models at minimum. Of course, anyone plugging a 400W+ card into a 650W unit will surely run into serious trouble; Seasonic PSUs can generally run with 200W or so of overhead without dying, albeit really hot, and it may trigger the protection circuits. I think manufacturers ignore that because Nvidia is putting the plug all the way down to the 4060; they need the plug available across their whole lineup or it won't sell, on the assumption that buyers are aware of the minimum PSU requirement.
 
To be honest, I never felt that using a single tiny connector to supply this much power makes sense. Even if it works, the margin of error is too low to deploy to the mass market without standard fail-safes/guard rails to prevent users from doing something wrong and causing it to overheat. In household electronics that require a lot of power, the cables are thick and so are the pins. It may not be an Nvidia problem, to be fair, but it should have been thoroughly tested before deploying to retail.
 
I think manufacturers ignore that because Nvidia is putting the plug all the way down to the 4060; they need the plug available across their whole lineup or it won't sell, on the assumption that buyers are aware of the minimum PSU requirement.
The connector itself isn't the problem so much as how the PSUs themselves are designed. The ATX 3.x specification has different power ratings of 150/300/450/600W for the 12V PCIe connection. My PSU, for example, will not go above 450W on the 12V PCIe connection because it adheres to the specification. The problem is that most PSUs ignore the specification and just allow the full 600W.
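To put rough numbers on those power levels, here's a minimal Python sketch of the current each pin carries at the four ATX 3.x ratings. The figures assumed here (six 12V power pins, each rated around 9.5 A) are the commonly cited ones for this connector, not something stated in the post above:

```python
# Rough per-pin current estimate for the 12V-2x6 connector.
# Assumed figures (commonly cited, treat as illustrative):
# 6 power pins, each rated ~9.5 A, at 12 V.
PINS = 6
PIN_RATING_A = 9.5
VOLTS = 12.0

def per_pin_amps(total_watts, pins=PINS, volts=VOLTS):
    # Assumes current is shared perfectly evenly across pins, which a
    # worn or poorly seated real-world connector does not guarantee.
    return total_watts / volts / pins

for watts in (150, 300, 450, 600):
    a = per_pin_amps(watts)
    print(f"{watts} W -> {a:.2f} A/pin ({a / PIN_RATING_A:.0%} of assumed rating)")
```

At the full 600W profile each pin is already close to 90% of that assumed rating, which is consistent with the thin-margin complaints in this thread.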
 
No idea how widespread this problem is, but even for, say, those Corsair/Seasonic cables for ATX 2.0 PSUs, they should have been safe enough, as OCP should've triggered when the card tried to pull over spec.
And most who buy a 4090 or 5090 won't be using below-1000W PSUs anyway, so in this melting case I don't think that's the major issue. The issue is still pushing too close to the design limit of the power the connector can handle; JTC or GN (I forget which) managed to log the power draw constantly spiking ABOVE 600W on an FE in their review, so the fact that it technically allows over 600W to be pushed through is the problem, I believe.
 
There is no cable delivered with the GPU, only an adapter. His PSU was legit: Platinum, 10-year warranty. But it is always recommended to use the cable that comes with the PSU; not doing so might be what caused the problem.
 
Reading through the user's posts in that Reddit thread:

- The PSU was a more than decent one, with great reviews, non-cheapo, 10 year warranty, Japanese capacitors yadda yadda.
- The 3rd party cable wasn't a no-name and had thicker cabling than the PSU one it replaced. It was used due to the SFF build.
- Both cable and PSU were used for two years with a 4090 and everything was still pristine there.
- The user never says they suspect a fault with the PSU, only that the connection damage is heavier there.
- The user clearly has a lot of experience and understanding of this type of connection, and was making very, very sure that the connections were sound, even down to e.g. checking for voltage fluctuations in HWInfo.
- The cable design hasn't changed between 12VHPWR and 12V-2x6 and they're explicitly stated to be interchangeable if not identical.
- NVIDIA doesn't list an ATX 3.1 PSU as a requirement.

The user is being very careful not to presume or assign blame for the cause here, and seems resigned to the likelihood of not getting anywhere with a warranty case on the technicality of '3rd party cable' despite the cable being made to the required standard and meeting all required specs. They don't appear to think that using the original PSU cable would have stopped this happening, only that it would have avoided any arguments over a warranty claim.

The general impression is that the user has a suspicion that the 12VHPWR/12V-2x6 design is such that it doesn't matter how carefully these cables are handled and plugged in, it's impossible to eliminate the possibility of this kind of damage happening.
 
Time to switch from 12V to 24V, and the problem is solved for the next decade or two, when GPUs will consume 2.5-3 kilowatts. By the way, NVIDIA has already switched to even higher voltages in its server GPUs.
Might as well jump straight to 48V, which is the max still classified as "low voltage" in the US (well, 50V actually, but 48V in practice). At 48V that connector should be good for 2400 watts, which is well beyond anything a standard outlet can provide.
 
For those that don't know basic electricity: as others have pointed out, doubling the voltage would cut the current in half. High current is what requires thicker wires; otherwise they melt from heat.

Going from 12V to 48V is even better, as jp7189 says, as it would cut the current by 75%. So 600 watts at 12V = 50A, while 600 watts at 48V = 12.5A.

It's a logical step forward if GPUs are going to continue having insane power requirements.

EDITED to fix mixup of amps and watts.
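The amps/watts arithmetic above can be sanity-checked with a few lines of Python (I = P / V):

```python
# For a fixed power budget, current scales inversely with supply voltage.

def current_amps(power_watts: float, voltage_volts: float) -> float:
    """Current drawn at a given power and supply voltage (I = P / V)."""
    return power_watts / voltage_volts

POWER = 600.0  # watts, the 12V-2x6 connector's rated limit

for volts in (12.0, 24.0, 48.0):
    print(f"{POWER:.0f} W at {volts:.0f} V -> {current_amps(POWER, volts):.1f} A")
```

Each doubling of voltage halves the current through the same wires, which is the whole argument for moving GPUs off 12V.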
 
Is there some problem with acknowledging that the connector should not be used with the amount of power it is required to transfer? These articles are pointless; it was obvious there would be problems.
As some stated, it's time to make popcorn and laugh at the stupidity of the situation.
 
If nVidia acknowledges the connector is the problem, then they open themselves up to lawsuits more easily. I could argue they are still liable if a class action is fired their way, but saying it's a problem would be an instant admission of guilt.

So, they have to try and bend over backwards to say "it's not the connector, we're right and you're wrong". This is standard Company MO, but in nVidia's case, I'm sure it's also because they do not want to say "we were wrong, sorry".

Hell, even when the GTX590 was catching fire they did not issue an apology xD!

Regards.
 
Worse still, they dominate the market, so it will be very hard for others to prove they are indeed at fault.
 
Imagine spending all that time and money to secure a 5090 only for the dodgy 12V-2x6 cable to burn up and leave you with multiple companies passing blame onto each other. The performance is not worth the risk!

I don't see an upgrade path from the RTX 3090 still. It uses the older 8 pin connectors that have been rock solid for a very long time. The power supply situation in PCs needs to evolve because what they're doing right now isn't reliable enough.
 
Worse still, they dominate the market, so it will be very hard for others to prove they are indeed at fault.
More like... Is there any other high powered device which uses this connector? You can't even draw comparisons in other use cases and contexts, since nVidia is the only one using it, no?

EDIT: I read somewhere nVidia could have just used the EPS connector instead, which is 330W for the 8-pin (IIRC), and that would have been fine, but they wanted to use this thing instead. Or they could have just added 2 more pins to the old/current 8-pin PCIe power connector to make it 300W, so it would have been very straightforward: 6, 8 and 10 pins for 75W, 150W and 300W. The cables actually supported the bump in specs without eating into the safety margin that much, as well.

Oh well. I'll never understand how some engineers think.

Regards.
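As a rough illustration of the safety-margin point, here's a short Python sketch comparing per-pin headroom on the old 8-pin PCIe connector versus 12V-2x6. The per-pin ratings used (~8 A for 8-pin Mini-Fit pins, ~9.5 A for 12V-2x6 pins) are the commonly cited figures, not numbers from this thread; treat them as assumptions:

```python
# Comparing per-pin current and headroom for two GPU power connectors.
# Assumed per-pin ratings are approximations for illustration only.

VOLTS = 12.0

connectors = {
    # name: (rated watts, number of 12V power pins, assumed per-pin rating in amps)
    "8-pin PCIe": (150, 3, 8.0),
    "12V-2x6":    (600, 6, 9.5),
}

for name, (watts, pins, rating) in connectors.items():
    amps_per_pin = watts / VOLTS / pins
    headroom = rating / amps_per_pin  # how far below the pin rating we sit
    print(f"{name}: {amps_per_pin:.2f} A/pin, ~{headroom:.1f}x headroom")
```

Under these assumptions the old 8-pin runs each pin at roughly half its rating, while 12V-2x6 at 600W leaves only a sliver of margin, which is exactly the complaint made in this thread.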
 
Funny that Nvidia's new FE cards are smaller, so it's easier for SFF PC builders to fit them, but those builders then run the risk of this happening when they use their own cables for better airflow/cable management.
 
More like... Is there any other high powered device which uses this connector? You can't even draw comparisons in other use cases and contexts, since nVidia is the only one using it, no?
Only thing I can think of is Intel's DC GPU line (Ponte Vecchio), particularly the Max 1100 SKU which is 300W and uses 12VHPWR. I have no idea if anyone is actually buying and using those cards though.
 
Funny that Nvidia's new FE cards are smaller, so it's easier for SFF PC builders to fit them, but those builders then run the risk of this happening when they use their own cables for better airflow/cable management.
This is a point made by the Reddit OP: NVIDIA explicitly market the 5090 FE as an "SFF-Ready Enthusiast GeForce Card", yet it might be the case that SFF builders can't afford to risk using third-party cables even if those cables meet the required standard, and even if they're better than the original PSU cables.
 
Der8auer did say in one video that 12VHPWR cannot be remedied. There is always a chance that the connector gets loose over time, or that something else happens. His solution was to reduce the power draw per connector to give a higher margin and use two connectors...
Seems that he was right.
 
Although it's a third-party cable, melting at both ends looks like it can't take the sustained load rather than being plugged in incorrectly. Given that GN or JTC (I forget which), using their power-measuring board, found their FE pulling more than 600W at times from that cable, it's likely just that the cable can't handle long-term spikes like that.
Actually, both showed examples where the GPU was pulling more than 600W from the cable.
 
More like... Is there any other high powered device which uses this connector? You can't even draw comparisons in other use cases and contexts, since nVidia is the only one using it, no?
In addition to the server parts TJ pointed out, ASRock made a 2-slot, blower-style RX 7900 XTX for high-density workstation/server deployments using 12V-2x6... but it uses the up-to-450W power profile, not the up-to-600W profile the RTX 5090 uses.