News RTX 5090 cable overheats to 150 degrees Celsius — Uneven current distribution likely the culprit

People need to get over this "user error" garbage. I have been saying since BEFORE the first case of a 4090 melting that this cable was not capable of 600W, not even close. The very first time I laid eyes on that cable and saw those tiny pins, I KNEW this was NEVER going to work. The standard 8-pin never had this problem. It's been a reliable standard for years. Suddenly, the new cable melts... "Oh, you're plugging it in wrong"... like all of us who spent 15+ years plugging in 8-pin cables suddenly forgot how to plug in a cable. This is straightforward logic. I have been saying it for 3 years now and I will say it again: Nvidia, it's time to own your mistake, stop blaming the user, and fix the problem. But hey, blaming the user is cheaper than fixing the problem. Hats off to AMD for sticking with the 8-pin.
 
Watching that cable heat up in thermal imaging is insane. Is it the copper thickness (gauge) that's causing the overheating?
Electricity 101 says that current flowing through a conductor will cause the conductor to heat up. How much it heats up depends on the amount of current (more current == more heat), the size of the conductor (larger conductor == less heat), and the resistance of the conductor (resistance causes heat, more resistance == more heat).

In this case the cable is rated to handle only so much current, and if that is exceeded, dangerous heat conditions occur.

If this happens in your house, ideally a circuit breaker trips because there's too much current flowing for what the wires can safely handle. (Assuming proper wiring to code.)

No such safeguards exist in your PC. There's no fuse or breaker in the wire, so you get excessive heat that leads to melting and/or fire.

Reducing the current flow or increasing the conductor size would solve this issue. Both are easy to say!
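The Electricity 101 point above can be put in numbers. A minimal sketch (all resistance values here are hypothetical round numbers, not measurements from any real cable):

```python
# Joule heating in a conductor: P = I^2 * R (power dissipated as heat).
# Values are illustrative only, not measured from a real 12V-2x6 cable.

def wire_power(current_a: float, resistance_ohm: float) -> float:
    """Heat dissipated in a conductor, in watts (P = I^2 * R)."""
    return current_a ** 2 * resistance_ohm

# 600 W at 12 V is 50 A total. Spread evenly over 6 power wires:
total_current = 600 / 12          # 50 A
per_wire = total_current / 6      # ~8.33 A per wire
r_wire = 0.01                     # assume 10 mOhm per wire run (hypothetical)

print(wire_power(per_wire, r_wire))             # ~0.69 W per wire: manageable
print(wire_power(total_current * 0.5, r_wire))  # one wire carrying half the load: 6.25 W
```

Note how halving the number of wires sharing the load roughly quadruples the heat in the survivors, since current enters the formula squared.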
 
At this point, part of me pities those hardcore nerds camping out, spending their life savings on a card for their own gaming use.



And another part of me is eating popcorn with a can of Coke, smiling politely at the scalpers stocking a few on their eBay listings at $6,000.
 
People need to get over this "user error" garbage. I have been saying since BEFORE the first case of a 4090 melting that this cable was not capable of 600W, not even close. The very first time I laid eyes on that cable and saw those tiny pins, I KNEW this was NEVER going to work. The standard 8-pin never had this problem. It's been a reliable standard for years. Suddenly, the new cable melts... "Oh, you're plugging it in wrong"... like all of us who spent 15+ years plugging in 8-pin cables suddenly forgot how to plug in a cable. This is straightforward logic. I have been saying it for 3 years now and I will say it again: Nvidia, it's time to own your mistake, stop blaming the user, and fix the problem. But hey, blaming the user is cheaper than fixing the problem. Hats off to AMD for sticking with the 8-pin.
You seem out of date. Did you know the original 12VHPWR spec had an issue where you could partially install it and it would power on and get damaged? (Yes, replicated in the lab.) The spec was updated and renamed 12V-2x6, which has sense pins that do not connect until the plug is fully seated, thus eliminating the problem. It is now fool-proof (poka-yoke). So the 'user error' issue was real, and it has been eliminated with the updated design.
 
Reducing the current flow or increasing the conductor size would solve this issue. Both are easy to say!
Current can be reduced by increasing voltage, so that's another fix that many are suggesting. Of course, if the current were distributed much more evenly across the positive 12V wires, this wouldn't have been an issue, so that needs to get resolved first.

Both the ATX 12VHPWR and 12V-2x6 standards need to be abandoned, with this generation of big green GPUs being the last to use them. I agree user error isn't the problem in most cases of both 4090s and 5090s getting burned, but for one, the clamp from the 8-pin Molex connector needs to come back. Then, they need to decide on 48V or 60V, create a fail-close mechanism (circuit breakers are the example, but it could be some other electro-mechanical solution) for when any pin gets too hot *OR* put temperature sensors on each pin that signal back to the PSU to shut down when any of those pins crosses the threshold, and consider increasing the connector size (I don't think it's a wire problem but a resistance problem at the male and female connectors).
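The per-pin fail-close idea could be sketched roughly as follows; the pin count matches the six power pins of a 12V-2x6-style connector, but the trip threshold is an assumption for illustration, not from any spec:

```python
# Sketch of a per-pin over-temperature shutdown, as proposed above.
# The threshold is hypothetical, not taken from any published standard.

PIN_COUNT = 6            # power pins in a 12V-2x6-style connector
SHUTDOWN_TEMP_C = 105.0  # assumed trip point, below connector-plastic damage

def should_shut_down(pin_temps_c: list[float]) -> bool:
    """Signal the PSU to cut power if ANY single pin crosses the threshold."""
    return any(t >= SHUTDOWN_TEMP_C for t in pin_temps_c)

print(should_shut_down([45.0] * PIN_COUNT))    # False: all pins cool
print(should_shut_down([45.0] * 5 + [150.0]))  # True: one hot pin trips it
```

The key design choice is triggering on the worst single pin rather than on an average, since averaging would mask exactly the one-hot-wire failure mode seen in the thermal imaging.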

Oh, ha, isn't that funny!? Who maintains the ATX standard? INTEL. Original article by Tom's:
https://www.tomshardware.com/news/intel-atx-v3-psu-standard
Did they not work with Molex at all on the new connector? Honest question, I have no idea.

This is happening on nVidia's halo gaming GPUs -- WHY are they still letting this go on like this? It's a complete embarrassment and demonstrates to me that they aren't the clearly superior GPU company that so many try to paint them as. This is a serious safety issue, even including whatever toxic fumes are released when these connectors melt. It's time wasted for those affected and obviously an extremely poor user experience.

That said, buyers took the risk. If you REALLY think you need 4090/5090 levels of performance, just know that you're accepting the risk until nVidia and Intel get their heads out of their a**es.
 
The Molex Micro-Fit connector has been used industrially for decades, under loads greater than those of mere GPUs, and in far harsher environments.

The question here is why 10 of the 12 pins are not carrying the current they should be.
Having a 20+:1 spread from best to worst wire/pin is far beyond any sort of normal manufacturing tolerances. Something has gone horribly wrong, most likely with the cables.

In GN's and others' 4090 melted-cable autopsies from years ago, there was a bus bar bridging all of the pins together to spread current evenly across all wires. The fact that only one or two wires are passing all of the current appears to indicate either that this is no longer the case or that the over-burdened pins and wires have detached from it.

If cable manufacturers are unable to build them right, the current 12V-HPWR standard should get scrapped in favor of something more difficult to screw up like XT-60.
 
Having a 20+:1 spread from best to worst wire/pin is far beyond any sort of normal manufacturing tolerances. Something has gone horribly wrong, most likely with the cables.

In GN's and others' 4090 melted-cable autopsies from years ago, there was a bus bar bridging all of the pins together to spread current evenly across all wires. The fact that only one or two wires are passing all of the current appears to indicate either that this is no longer the case or that the over-burdened pins and wires have detached from it.

If cable manufacturers are unable to build them right, the current 12V-HPWR standard should get scrapped in favor of something more difficult to screw up like XT-60.
I doubt even der8auer's Corsair AX1600i has such a poorly made cable as the culprit... and if so, that is still the standard's issue: when the original cables of the likes of the Corsair AX series have manufacturing issues leading to that 20+:1 power spread, it's the standard making it impossible to manufacture correctly en masse.

If it's anything, combining Buildzoid's explanation of the 4090 and 5090's questionable shunt-resistor design, I would bet it's the now 90-degree-rotated, inclined connector on the FE PCB causing the problem; the connector "legs" to the PCB having different lengths, or varying solder-joint quality, would be more likely IMO.
 
You seem out of date. Did you know the original 12VHPWR spec had an issue where you could partially install it and it would power on and get damaged? (Yes, replicated in the lab.) The spec was updated and renamed 12V-2x6, which has sense pins that do not connect until the plug is fully seated, thus eliminating the problem. It is now fool-proof (poka-yoke). So the 'user error' issue was real, and it has been eliminated with the updated design.

The user error issue was never real. The sense pins were always there; on the revised version they just shortened them... instead of adding a retention clip that you could VERY CLEARLY hear click in... like our good old 8-pin had. So sometimes, given that a cable can take different twists and turns through routing during the build process, the cable could slowly come loose over time, depending on the route. This wouldn't happen if a clicking retention clip had always been there. It was soon revised after.

Ya know, it's interesting. Mankind has been plugging electricals into sockets of various kinds for 200 years. Even in the last 15 years, we have been plugging 8-pins into GPUs with no issue. Suddenly, the 4090 comes out with a new connector... and I guess users decided to start making errors plugging it in. It's like we suddenly forgot how to plug a plug into a socket all the way... but we only forgot the moment it came time to plug in our 4090, no other cards, and no other electrical receptacles in the world. We just stopped plugging in 4090s all the way for some reason. Lol. This connector should never have been designed. We had our 8-pin, which was matured, established, tested, and true for many years. Maybe we should just not fix what ain't broken.
 
Then, they need to decide on 48V or 60V
Component cost goes up quite a bit beyond about 30V. For cost-sensitive consumer stuff, 24V would be a more sensible choice. 60V is a no-go since that would either require idiot-proof wiring built to similar standards as mains power cords or technically require an electrician's license for wiring cabinets. You probably don't want something like a 6-50 plug between your PSU and GPU.

Staying within the 12V realm, the simplest fix to horrible current balance issues is ditching the multi-pin, multi-wire setup in favor of something like XT-60. Wouldn't hurt to still do that on "ATX-24V."
 
Having a 20+:1 spread from best to worst wire/pin is far beyond any sort of normal manufacturing tolerances. Something has gone horribly wrong, most likely with the cables.

In GN's and others' 4090 melted-cable autopsies from years ago, there was a bus bar bridging all of the pins together to spread current evenly across all wires. The fact that only one or two wires are passing all of the current appears to indicate either that this is no longer the case or that the over-burdened pins and wires have detached from it.

If cable manufacturers are unable to build them right, the current 12V-HPWR standard should get scrapped in favor of something more difficult to screw up like XT-60.
This is beyond the cables. Need to watch.
 
I have the 5080 FE; it draws nowhere near as much power, but it's still concerning to see such an uneven spread of the current. I didn't want to buy the 5090 because I was worried about exactly this kind of thing. Nvidia doesn't care if it's burning your house down as long as they got paid.
 
it's the standard making it impossible to manufacture correctly en masse.
Manufacturing standards for most electrical stuff rarely allow a deviation worse than 5% within a batch. Manufacturing tolerances would only explain a ~30% best-to-worst imbalance: the 5%-worst pin on both ends plus the 5%-worst piece of wire in-between on one path, versus the best-5% everything on another.

The ~20mm longer board connector pins on the GPU don't add anywhere near enough wiring resistance to explain more than a ~10% discrepancy between pins either.
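The tolerance argument can be checked with basic current division across parallel paths: every wire sees the same voltage drop, so each wire's current is proportional to its conductance. Resistance values below are hypothetical round numbers, not measurements:

```python
# Current division across parallel wires (Kirchhoff): all paths share the
# same voltage drop, so I_k is proportional to 1/R_k. Values illustrative.

def currents(total_a: float, resistances: list[float]) -> list[float]:
    """Split a total current across parallel resistances."""
    conductances = [1.0 / r for r in resistances]
    g_total = sum(conductances)
    return [total_a * g / g_total for g in conductances]

nominal = [0.010] * 6  # six identical 10 mOhm paths (hypothetical)
skewed = [0.0095, 0.010, 0.010, 0.010, 0.010, 0.0105]  # +/-5% spread

even = currents(50.0, nominal)
uneven = currents(50.0, skewed)
print(max(uneven) / min(uneven))  # ~1.11: nowhere near a 20:1 spread
```

Even with a full 5% resistance spread, the worst-to-best current ratio stays around 1.1, which is the point being made above: ordinary tolerances cannot produce the observed imbalance.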
 
I can see it now:

nVidia announces the release of nVidia Platinum 1000W and higher power supplies, with custom cables to power the re-release of the world's hottest GPU, the RTX-5090B! ***

*** Compatibility not guaranteed with non-nVidia hardware.

(you heard it here first!!)
 
This is beyond the cables. Need to watch.
Cables are where all of the major potential failure points are.

Board connectors use solid pins. The thing most likely to go wrong with them is the solder joints; not much else.

Cable (female) connectors have the springy contacts that surround the pins. These are made of thin folded metal, are subject to metal fatigue and warping from thermal cycles, trap debris, and have a crimped wire going into them; the crimps are also subject to fatigue and potentially losing their grip on the wires over time, etc.

Since many of the people with failed cables on the 5090 had a 3090 or 4090 before, they are likely reusing the same 2-4-year-old cables. There are likely age and wear issues at play.

Derb should re-test using a brand-new cable to see whether his gross imbalance issue persists.
 
Cables are where all of the major potential failure points are.

...

Derb should re-test using a brand-new cable to see whether his gross imbalance issue persists.
3rd-party cables were never a problem like this in the past; the only thing that significantly changed was the ATX 3.0 standard and 12VHPWR (besides the 4090 and then the 5090 continuing to increase power draw). Derb called out the lack of headroom in 12VHPWR compared to what the 8-pin connectors provided, amongst some other concerns with the new standard -- again, cable quality aside.

So, while it might be a fair observation to say this is mostly happening on 3rd-party cables, these once-reputable aftermarket cable companies were set up for failure.

First, the internet blamed user error.
Now, 3rd party cables.
How about instead we raise our pitchforks at the source of the problem; it's time to set everyone up for success, not failure.
 
60V is a no-go since that would either require idiot-proof wiring built to similar standards as mains power cords or technically require an electrician's license for wiring cabinets.
According to this website, anything up to 60V DC is within the Safety Extra Low Voltage limit, so there's no need for special precautions.
https://techoverflow.net/2019/08/15/what-is-a-selv-power-supply/

SELV means Safety extra low voltage.

This means that

  • the voltage at the output of the power supply is so low that it isn't considered a safety risk (less than 60V DC or 35V AC).

What does 60V DC mean in practice?

Typically the 60V value is defined as ripple-free DC. This means that the peak value of the waveform is not more than 10% higher than the maximum allowed voltage, e.g. it must not be more than 70V for a 60V system.

Personally, I'd be happier with 36V or 48V DC as used in some aircraft power supplies, to reduce the overall weight of copper wires in long cable looms. Higher voltage = lower current = reduced CSA (cross sectional area) of conductors.
https://www.ajpowersupply.com/aerospace-power-converter/

If a GPU requires 600W:

600W divided by 12V = 50A (greater risk of hot spots if several pins in a 12V-2x6 connector have high resistance faults)

600W divided by 48V = 12.5A. Much lower overall current. Less likely to melt wires or contacts?

It would of course mean changing the ATX PSU standard yet again and who knows what confusion this might cause. We'd need another connector for 48V so we'll probably be stuck with 12V for the time being.
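The arithmetic above generalizes to the other voltages discussed in the thread. A quick sketch, assuming the six current-carrying pins of a 12V-2x6-style connector and an idealized perfectly even split:

```python
# Total and per-power-pin current for a 600 W GPU at candidate bus voltages.
# Assumes 6 current-carrying pins as in 12V-2x6 and a perfectly even split;
# purely illustrative, since real cables clearly do not split evenly.

GPU_POWER_W = 600
POWER_PINS = 6

def per_pin_current(power_w: float, volts: float, pins: int = POWER_PINS) -> float:
    """Per-pin current under an idealized even split (I = P / V / n)."""
    return power_w / volts / pins

for v in (12, 24, 48):
    total = GPU_POWER_W / v
    print(f"{v} V: {total:.1f} A total, {per_pin_current(GPU_POWER_W, v):.2f} A per pin")
```

At 48V each pin would carry roughly a quarter of the 12V load, which is why the higher-voltage proposals keep coming up despite the compatibility headaches.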
 
Cables are where all of the major potential failure points are.

Board connectors use solid pins. The thing most likely to go wrong with them is the solder joints; not much else.

Cable (female) connectors have the springy contacts that surround the pins. These are made of thin folded metal, are subject to metal fatigue and warping from thermal cycles, trap debris, and have a crimped wire going into them; the crimps are also subject to fatigue and potentially losing their grip on the wires over time, etc.

Since many of the people with failed cables on the 5090 had a 3090 or 4090 before, they are likely reusing the same 2-4-year-old cables. There are likely age and wear issues at play.

Derb should re-test using a brand-new cable to see whether his gross imbalance issue persists.
You really need to watch that video. Better cables would only be a band-aid for the problem. See how the power is distributed to the GPU post-connector. Compare the 3090 to the 4090/5090.
 
Component cost goes up quite a bit beyond about 30V. For cost-sensitive consumer stuff, 24V would be a more sensible choice. 60V is a no-go since that would either require idiot-proof wiring built to similar standards as mains power cords or technically require an electrician's license for wiring cabinets. You probably don't want something like a 6-50 plug between your PSU and GPU.

Staying within the 12V realm, the simplest fix to horrible current balance issues is ditching the multi-pin, multi-wire setup in favor of something like XT-60. Wouldn't hurt to still do that on "ATX-24V."
60V DC still usually falls under "low voltage," as @Misgar mentioned. As a real-world example, Power over Ethernet has been around for some time and hasn't caused many issues in the network and IoT world, even with less-than-ideal CAT5e-or-higher cable quality and lengths in some cases, less-than-stellar terminations, etc. PoE++ even provides 52V-57V at the power source, and again, one doesn't need an electrician's license to implement these devices, just a good ol' network technician, admin, or even Joe Blow at home.

AC is much more dangerous due to its alternating nature that induces more/stronger muscle contractions.

Yes though, even 24VDC would be an improvement -- probably enough to get by at current levels (~600W for 5090), but trying to future-proof a little, I'd prefer to see this taken further.

As for cost-sensitive consumer electronics, I don't think $1,500+ GPU buyers fall into extreme price sensitivity; in fact, given what's happening to these unfortunate buyers, I'm sure they'd be more than happy to pay an extra $20-$30 for a connector and wires that will just do their job of safely and reliably providing juice to their hungry GPU. Otherwise, all we need to do is go back to the ATX 8-pin Molex connector that's tried and true; I mean, is it worth designing and implementing a new spec that isn't backwards compatible just to make a marginal step forward?