Self-destructing graphics cards: Why are power connectors melting, and what can you do about it?

There are limited options. According to this image from this Tom's Hardware article: https://www.tomshardware.com/pc-components/gpus/nvidias-rtx-5090-power-cables-may-be-doomed-to-burn



1) An Astral RTX 5090 is, according to rumors, properly shunted and will not melt down at the connector, so that is an option (also mentioned in the article).
2) An RTX 4090 Matrix also will not have a connector meltdown.
3) Any RTX 3090/3080/3070 was safe; from the information I have, Nvidia set them all up properly.
4) Any AMD or Intel video card. It had to be listed.

It would be a real service to Tom's Hardware readers if Tom's Hardware would sniff out all the Nvidia video cards where the board partner went above and beyond to protect the card, such as the Astral and Matrix lines, and list them. A spreadsheet would be by far the best format Tom's could provide; perhaps an HTML table if it has to be that way.

I would like to get actual confirmation of these things. I find it a little hard to believe that the Astral and the Matrix are the only two options from the newer generations that a buyer can have confidence in. There have to be at least a handful of others out there among the sea of cards that could melt a connector at any time.
 
  • Like
Reactions: artk2219
The design is flawed.
I don't know if I would say "flawed," but I'd for sure say poorly implemented.

As far as I know, the thing hasn't ever damaged a mid- or low-end GPU using it, just the 90 and 80 tier.
That is likely down to the user-error risk (the connector has very little tolerance for bending), coupled with the fact that Nvidia reduced the safety measures on the GPU end that would prevent a cable or GPU from getting hot enough to melt the plastic (back on the 3090 they had actual safeties built in).

But honestly, there should have been a lawsuit over it, given how widely known an issue it is and how they doubled down on it with the 50 series.
 
Undervolting the 4090 has been recommended to prevent frequency throttling. I did undervolt my 4090 from the "get-go." I do have a Thermal Grizzly WireView installed between the cable and my 4090. While using it for MS Flight Simulator, I've never seen any wattage even close to 600. Is there any suggestion, theoretical or practical, that undervolting the 600w GPUs can make melted components less likely?
 
Well, yes, less power should mean less potential for overheating the wires.

The main issue is unbalanced current flow though. So nothing says that at 400W a single wire won't take too much current and melt anyway. It comes down to the pin interface mostly it seems. Wear and tear of multiple insertions is a factor, as well as how much bending is needed to fit it in some chassis.
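A quick current-divider sketch puts numbers on that imbalance. The resistance values below are made up purely for illustration (real pin and crimp resistances vary with wear, corrosion, and seating):

```python
# Sketch of the unbalanced-current problem: parallel pins share current
# by conductance, so the lowest-resistance pins soak up the excess when
# their neighbors degrade. Resistances are illustrative, not measured.

def pin_currents(total_a, resistances_ohm):
    """Split a total current across parallel paths by conductance."""
    g = [1.0 / r for r in resistances_ohm]
    g_sum = sum(g)
    return [total_a * gi / g_sum for gi in g]

# Six supply pins carrying 50 A total: three healthy contacts at 5 mOhm,
# three worn/oxidized contacts at 20 mOhm.
currents = pin_currents(50.0, [0.005] * 3 + [0.020] * 3)
print([round(i, 2) for i in currents])
# [13.33, 13.33, 13.33, 3.33, 3.33, 3.33]
```

With half the pins degraded, the healthy pins get pushed well past a ~9.5 A class rating even though the total draw never changed.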
 
It's a Darwin thing, IMO. Those GPUs separate the people who have a PC IQ north of a turnip from the people who should stick to consoles and prebuilts.

Don't stick those cards inside a crackerbox-sized case, use a proper PSU, and make sure you have a solid connection.
 
I have commented on this topic nearly a dozen times. I am a power electronics engineer who has worked on systems up to 12 MW in size and currents up to 6300 A, so I readily understand this kind of application. It is very evident that someone inexperienced decided on this configuration without consulting the connector company on how to derate the pin current and how to implement this reliably. And then there is the marketing specmanship you find in the connector datasheets themselves.

I would have selected an Anderson connector or something similar. Paralleling DC power pins is a dangerous thing to do, especially in light of the complex physics. If they are hell-bent on sticking with this configuration on the GPUs, they should be using interleaved PWM switching stages, with one power stage per pin. This lets the PWM controller force each pin to draw a balanced load instead of relying on uncontrolled path resistances. Those resistances include the connector pins, the wire crimps, and the wire itself, on both ends of the cabling system, so the imbalance can occur at the GPU side and the PSU side.

The 9 A current derating quoted in the article seems high. When I looked at the Mini-Fit application notes for similar parts, I saw a derating to 5 A (or was it 5.5 A?) for a 16-way housing. That is 8 pins bringing power into the GPU and 8 pins taking power out (return). So for the current coming into the GPU I get:

I(total) = 8 pins × 5.5 A per pin = 44 A

I would expect the voltage to droop badly with this much loading on the PSU and cabling, so for a 600 W load the voltage could easily be around 11.75 V or even lower. The GPU will pull what it needs regardless. So 600 W / 11.75 V = 51 A.
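Spelled out as a quick sanity check (the 5.5 A per-pin derating and the 11.75 V sag are the assumed figures from above):

```python
# Back-of-the-envelope: derated connector budget vs. what the GPU pulls.
pins = 8                        # supply pins bringing power into the GPU
budget_a = pins * 5.5           # 44.0 A derated budget per the app note
load_a = 600.0 / 11.75          # ~51.1 A drawn from a sagging 11.75 V rail
print(budget_a, round(load_a, 1))
# 44.0 51.1  -> demand exceeds the derated budget
```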

Connector systems are all about power dissipation, cooling, and material heat capability. Most UL-approved nylon housings are rated 94V-0, which tends not to flame up and self-extinguishes once the heat source is removed. This nylon is usually rated for 105 °C at the housing.

The connector industry is not very transparent about the assumptions behind the datasheet numbers and where the alligators lurk in real usage. For the datasheet figures, they take a brand-new pin and socket, crimp on a freshly stripped wire, plug the entire assembly together, and make measurements. The connector system is almost never that pristine for the rest of its existence. Was the PSU or interface cabling made in Asia and sent by boat? It will have suffered humidity and corrosion, a double hit if it shipped in the summertime: accelerated failure modes due to higher heat.

And the datasheets are expecting decent wire cooling to draw heat away from the connector housing, i.e., no "pretty" nylon shroud covering the wiring to allow for reasonable convection cooling of the wire. The wire is expected to act as a heatsink for the connector pins and sockets.

Here are some suggestions to help people with this issue, offered as unpaid assistance (in some states they say "buyer beware"), but these should help:

1. Do not use cabling with a wiring loom wrapped around it, which constricts the airflow. If a loom exists, remove it and let the wire insulation convection-cool properly. Wiring looms really call for yet another current derating if you want to use them.

2. Again, the wire acts as a heatsink for the connector pins and sockets; this is simple physics. The pin and socket contact points are tiny, much smaller than the wire's cross-sectional area. For this reason, suppliers should find a way to use larger wire than is presently being used.

3. Since the pin and socket contact points have gone through oxidation and corrosion before you even install them, you should plug and unplug them several times to knock this oxidation off. Don't get excessive with it, because you don't want to create fretting failure by scraping off too much tin plating. About 3-5 times is great. Make sure the last pass is seated well.

4. You want to get the heat out of the cabling ends. So mount a fast fan by the GPU connector, as well as at the PSU connector. The imbalance can occur at both ends!

I hope these suggestions help people while the industry addresses these shortcomings.
 
Would an XT-90 fit the bill?

LTT did a hack job of it, but it's a popular, cheap, and reliable connector for high-power RC craft.

Thanks for the link. I found a datasheet for the XT-90, and it is only rated for 40 A continuous with 10 AWG / 6 mm² wire; the 90 in the part number is the 90 A peak rating. For a true 600 W GPU you want to be north of 50-55 A, so I would use two in parallel (lots of margin). Two XT-60s would be too little for a derating of ~80%, so use two XT-90s.

There are many knockoffs too, so be careful. If I had a GPU that had already been cooked, and good soldering skills, using this would be a no-brainer. A single XT-90 would be fine for 450 W or so.
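For what it's worth, the margin math on the two-connector suggestion works out like this (the 40 A XT-90 continuous figure is from the datasheet mentioned above; the ~30 A XT-60 continuous figure is my assumption):

```python
# Utilization check for paralleled RC connectors against a 600 W GPU load.
load_a = 600.0 / 11.75       # ~51 A worst case with a sagging 12 V rail
xt90_pair = 2 * 40.0         # 80 A continuous for two XT-90s (datasheet)
xt60_pair = 2 * 30.0         # 60 A continuous for two XT-60s (assumed)
print(round(load_a / xt90_pair, 2))  # 0.64 -> comfortable margin
print(round(load_a / xt60_pair, 2))  # 0.85 -> past an ~80% derating
```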

One really great spec: those are gold-plated contacts, which avoids the corrosion issues. Bonus points for that. The contact resistance won't drift up dramatically with time and temperature like tin-plated contacts do, and it will be less likely to suffer thermal runaway (resistance goes up as the connection heats up, making everything get worse and worse rather quickly).
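That runaway loop can be shown with a toy fixed-point model. Every coefficient here (base contact resistance, temperature coefficient, thermal resistance) is an illustrative assumption, not measured connector data:

```python
# Toy model of the feedback loop: contact resistance rises with
# temperature, raising I^2*R heating, which raises temperature further.
# All coefficients are made-up illustrative values.

def settle_temp(current_a, r0=0.005, alpha=0.004, theta=50.0, ambient=25.0):
    """Iterate temperature -> resistance -> power -> temperature."""
    t = ambient
    for _ in range(500):
        r = r0 * (1.0 + alpha * (t - ambient))  # hotter contact, higher R
        p = current_a ** 2 * r                  # watts dissipated in contact
        t_next = ambient + theta * p            # simple thermal-resistance model
        if abs(t_next - t) < 1e-9:
            return t_next
        t = t_next
    return t  # failed to settle: runaway territory

normal = settle_temp(9.0)        # a pin near its rating
overloaded = settle_temp(13.3)   # a pin picking up a neighbor's share
print(round(normal, 1), round(overloaded, 1))
# ~47.0 ~78.7 under these made-up coefficients
```

The point of the model: the extra current does not just scale the heat linearly, because the resistance feedback amplifies it, and past a critical current the loop never converges at all.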
 
Undervolting the 4090 has been recommended to prevent frequency throttling. I did undervolt my 4090 from the "get-go." I do have a Thermal Grizzly WireView installed between the cable and my 4090. While using it for MS Flight Simulator, I've never seen any wattage even close to 600. Is there any suggestion, theoretical or practical, that undervolting the 600w GPUs can make melted components less likely?
Nothing should require undervolting to run at out-of-the-box settings and not blow up.
Not a GPU, not a CPU... nothing.
 
I don't know if I would say "flawed," but I'd for sure say poorly implemented.

As far as I know, the thing hasn't ever damaged a mid- or low-end GPU using it, just the 90 and 80 tier.
That is likely down to the user-error risk (the connector has very little tolerance for bending), coupled with the fact that Nvidia reduced the safety measures on the GPU end that would prevent a cable or GPU from getting hot enough to melt the plastic (back on the 3090 they had actual safeties built in).

But honestly, there should have been a lawsuit over it, given how widely known an issue it is and how they doubled down on it with the 50 series.
It's definitely flawed. Even in electrical work you never use more than 80% of the true capacity, because it's not safe to do so. This connector is being run at essentially 100%, which is why they're melting. Class-action lawsuit territory for sure.
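As a rough check on that 80% point, using commonly cited 12VHPWR numbers (six 12 V supply pins at 9.5 A each; treat both figures as assumptions):

```python
# Rough utilization of the 12VHPWR connector at its full rated load.
connector_a = 6 * 9.5          # 57 A nominal capacity (assumed rating)
load_a = 600.0 / 12.0          # 50 A at nominal 12 V
print(round(load_a / connector_a, 2))
# 0.88 -> well beyond an 80% safety margin, with zero headroom for imbalance
```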
 
https://www.tomshardware.com/author/jeffrey-kampman

>"As Senior Analyst, Graphics, Jeff covers everything from integrated graphics processors to discrete graphics cards to the massive data center GPU installations powering our AI future."

It appears that Jeffrey Kampman will be the replacement for our departing Jarred Walton.

LinkedIn: https://www.linkedin.com/in/jeffrey-kampman-b1623429/

>"Editor-in-Chief at The Tech Report"

I didn't find any Jeff/Jeffrey Kampman writing on https://techreport.com from a "kampman site:techreport.com" search.

>"later took on roles at Asus and Intel as a technical marketer before joining Tom's Hardware."


The writing from those roles is indicative of the job, which is mainly to market widgets to a tech audience. Here is a representative snippet:

"It’s easy to understand the love, and it’s not just about looks. Noctua fans have smooth noise signatures, optimized airflow characteristics, and high reliability, making for cool, quiet PCs that are easy on the ear yet deliver high sustained performance over time. Our flagship ROG Ryujin series of CPU liquid coolers have used Noctua iPPC fans from the beginning for the same reasons."

Edit: Digging more into TechReport's history,
====

The Tech Report was sold in 2019 and turned into a very different site; it's mostly crypto and phone clickbait now. The old editors apparently asked for their names to be removed, and the articles authored by past staff were simply migrated to one of the new editors and left that way.
====
Verifying with https://en.wikipedia.org/wiki/Techreport:

"On December 21, 2018 Jeff Kampman stepped down as Editor-In-Chief. The site was then sold to investors John Rampton and John Rall, and Renee Johnson took over as Editor-in-Chief."

From Perplexity:
====
In 2019, "The Tech Report" (techreport.com), one of the oldest hardware news and review sites, underwent an ownership change and a major redesign. The site was sold to investors John Rampton and John Rall, with Renee Johnson taking over as Editor-in-Chief. This transition occurred after previous editor-in-chief Jeff Kampman stepped down in December 2018.

The redesign, launched in July 2019, moved the site from its custom CMS to a WordPress template and was accompanied by a significant shift in editorial direction. After the sale and redesign, The Tech Report ceased specializing in hardware reviews, system guides, and podcasts, and its content focus changed dramatically. These changes were met with criticism from the site's user community, and the senior editing team saw considerable turnover during this period.

====
 
Nothing should require undervolting to run at out-of-the-box settings and not blow up.
Not a GPU, not a CPU... nothing.
I got a 14900K as an in-socket upgrade for my Z690 for MSFS, then undervolted it massively just so it wouldn't overheat (and kill itself through degradation). When Seasonic got its 90-degree cable out, I bought one for a future GPU upgrade. Then the 5000 series released with the same connector, the same melting, unstable drivers, and the same ridiculous pricing, so I decided to ignore the purchased cable, get a 9070 XT at $750, and call it a day. I don't want to gamble with these undervolt-to-survive requirements again.
 
Don't buy a GPU with said connector, problem solved. The design is flawed.
I'm not an engineer like Chaz_Music, so I am just using good ol' common sense. The fact that we went from two 8-pin connectors (16 wires) down to 12 pins is not comforting, especially when you are potentially feeding 600 W of power on a 5090. There are some 9070 XTs that have three 8-pin connectors and use less than half the power of a 5090.

The main issue, though, is that whatever the reason (Murphy's law, walking under a ladder, a black cat crossing your path, user error), these Nvidia 12-pin connectors have been a problem since the 4000 series. They doubled down and did it again, but this time put even more power through the connector.

I guess it's true that the definition of insanity is doing the same thing over and over and expecting a different result. I remember back in the '80s, when cars would lurch unexpectedly on being put into gear, car companies made changes that required the driver to put a foot on the brake pedal first.
 
That is, until console gamers start demanding true 4K at 120 FPS with full ray tracing.
None of the current consoles can achieve a solid 60 FPS at 4K without compromises. According to Microsoft, the next Xbox in 2027 will have the largest generational performance gap in the line's history. I wonder how many games will then achieve a consistent 60 FPS at 4K with RT? Even the mighty 5090 has trouble hitting 60 FPS at 4K with every graphics feature set to max.
 
None of the current consoles can achieve a solid 60 FPS at 4K without compromises. According to Microsoft, the next Xbox in 2027 will have the largest generational performance gap in the line's history. I wonder how many games will then achieve a consistent 60 FPS at 4K with RT? Even the mighty 5090 has trouble hitting 60 FPS at 4K with every graphics feature set to max.

Correct.
 
Just don't buy this stuff as long as they refuse to implement the connection properly (with per-pin balancing on the add-in board) on cards that cost north of $1K. Seriously, how hard is it?
 