News Melted RTX 4090 16-pin Adapter: Bad Luck or the First of Many?

Status
Not open for further replies.
First of many is my prediction. That new connector is absolutely ludicrous. No way I would run the equivalent of 4.5 100-watt light bulbs on that thing. Even a standard household AC wall outlet plug would have been more elegant. What on earth were they thinking? "Oh hey! Let's take these four massive connectors and squish them all down into this one little itty-bitty one so we can watch everyone's machine catch fire and they'll have to buy even more video cards!"
 
High-end FE Ampere models have the same compact 12-pin connector, minus the four signal pins, which aren't the cause of the melting here. The 3090 Ti has the same 450W TDP and no one has reported melted connectors for that one. This is an isolated case that may very well have been user created.
 
Combination of problems.

These graphics cards are enormous in every dimension. The adapter plugs in wherever the card designers put it, and they don't know what chassis it's being installed in; that's not something people have had to think about much until now. So the cable ends up pushed against the side panel, or crushed against hard drive cages or the radiator/fans at the front. Beyond the pins and wires themselves, there are lots of points of failure: the connector on the GPU itself could snap, the PCB could delaminate, etc.

Plus having 4 cables by default means those will have to be crammed through gaps as well.

Six 12V wires would only have to carry ~8 A apiece, well within spec for 16-gauge wire. 18-gauge wire might get a little warm at 600W.

Corsair is confident enough to sell a two-connector direct adapter from their standard 18-gauge wiring; that's six 16-gauge wires. (The only downside is that you can hook two of them up to something like an 850W PSU, which is crazy.)
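The per-wire numbers above are easy to sanity-check. A minimal sketch, assuming an even current split across the six 12V wires (real connectors won't balance perfectly) and ballpark ampacity figures rather than any official table:

```python
# Per-wire current for a 600 W load on the 16-pin connector,
# assuming the six 12 V wires share current evenly.
POWER_W = 600.0
VOLTAGE_V = 12.0
WIRES = 6

total_current = POWER_W / VOLTAGE_V   # 50 A total
per_wire = total_current / WIRES      # ~8.33 A per wire

# Rough chassis-wiring ampacity (illustrative values only; real limits
# depend on insulation rating and which table you consult):
ampacity = {16: 13.0, 18: 10.0}       # AWG -> approximate amps

for gauge, limit in sorted(ampacity.items()):
    headroom = limit - per_wire
    print(f"AWG {gauge}: {per_wire:.2f} A/wire vs ~{limit:.0f} A limit "
          f"({headroom:+.2f} A headroom)")
```

Which matches the post: 16-gauge has comfortable headroom at 600W, while 18-gauge is closer to its limit and would run warmer.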
 
I question why somebody "simply playing Red Dead Redemption 2" would know the power load on their GPU. Most overclockers don't even know how much power their card is using, beyond a broad "the slider won't go higher because the card is capped at xxx watts". Somebody who knows (or thinks they know) how much power their card is using probably cares because they've been messing around with the card's power delivery.
It's also questionable why fuses on the board didn't blow before the connector melted, which most likely points to bad design. Still, it's not unheard of for extreme overclockers to deliberately bypass fuses.
But it's Reddit, one of those platforms where users are strongly encouraged by "an algorithm" to generate viral clickbait sensationalism. So maybe this person really had this problem, or maybe they made up the story to try to return a card with a voided warranty, or maybe this person doesn't even own a 4090 and just wants attention, or maybe the person isn't even a person and is just some kind of spam bot. There's no way to be sure. It's Reddit.
 
As to reining in the power requirements: OEMs will certainly do that to save a buck. The enthusiast desktop market is a pretty small segment. If you don't want high power requirements, you don't have to buy these cards. Or you can limit them yourself.

They were somewhat wise in adopting a new standard. It will prevent the average person from simply plugging a super powerful GPU into an underrated power supply. (And hopefully they don't go shopping on AliExpress for adapters.)
 
The sag on the GPU in that image....

[image]
 
Skimming through the article, what caught my attention wasn't the inability to reach any conclusion about whether this is user error or a more general problem. What caught my attention was the line "a relatively modest GPU load of 400W".

400W of GPU load can now be considered 'modest'? What!?

If SLI was still a thing, that would be one half of a typical microwave. Balance a bowl of soup on that thing and it'll be nice and steamy in about 7 minutes. Snack preparation without having to step away from your monitor.
 
Ahh, s*!
Well, that's the early adopter tax for ya...

The sag on the GPU in that image....
[image]
Why the hell the owner of that PC didn't make the GPU part of the loop bugs me even more...

First of many, obviously. And we can't just call it "a user error for bending the cable", because those adapters don't come with a big "DON'T BEND" sign, do they?
Manuals, as well as quick-start postcards, are invisible unless they're in users' faces, and even then they get tossed aside anyway, because it's too much to read or whatever.
There would be far fewer troubleshooting threads if some folks didn't skip them. Some of the info is right bloody there..!
 
No issues with mine yet, but I can say plugging in that connector is a pain in the ass. It took me two attempts to get the clip to bite down for a solid connection. If I were a betting man, I'd say he had a loose connection and small-scale arcing was occurring.
I've tested several now, and while the Founders Edition "clicked" pretty easily (and I think the Asus did as well), two others (MSI and Gigabyte) seemed to require a bit more force. Or it may have been the MSI that was "easier" and the Asus that was hard. Anyway, the point is that after personally testing several 4090 models, there seems to be some inconsistency in how hard you have to press the 16-pin cable to get it to fully engage. It was one of my first thoughts when I saw this Reddit post this morning, and while I don't want to jump to conclusions, the last paragraph in our article was based on my experiences.
 
Burnt connectors were already a thing on far less powerful cards, albeit nowhere near as common as what may be on its way here. The problem is that if you have bad contact on one wire, its current gets pulled through the remaining wires, which get warmer. One of those may have a worse connection than the others, so its share of current shifts onto the four remaining wires, which get even hotter until yet another connection fails. Then the load goes through only three good connections, which get hotter still. Rinse and repeat until catastrophic failure.

The only way to avoid this would be for GPUs to either do per-wire current monitoring/balancing, or split the VRM phases evenly between pins so the VRM can detect when phases aren't getting the expected amount of power from the PSU, reduce the affected phases' current limit to a safe level, and tell the drivers to notify the user.
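That cascade is easy to model. Here's a toy sketch, assuming a constant-power load at 12V, surviving wires splitting the current evenly, and an illustrative made-up contact resistance (real current sharing depends on the actual per-contact resistances, which this deliberately ignores):

```python
# Toy model of the failure cascade: a constant-power load at 12 V,
# with wires dropping out one at a time and the survivors sharing
# the current evenly.
POWER_W = 600.0
VOLTAGE_V = 12.0
CONTACT_RES_OHM = 0.005   # assumed per-contact resistance, illustrative only

total_current = POWER_W / VOLTAGE_V  # 50 A total, regardless of wire count

for good_wires in range(6, 0, -1):
    amps = total_current / good_wires
    # Heat dissipated in each remaining contact scales with I^2,
    # which is why each failure makes the next one more likely.
    watts_per_contact = amps**2 * CONTACT_RES_OHM
    print(f"{good_wires} wires: {amps:5.1f} A each, "
          f"~{watts_per_contact:5.2f} W heating per contact")
```

The I²R term is the key: going from six good wires to three doubles the current per contact but quadruples the heat dissipated in each one, so the failure accelerates rather than progressing linearly.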
 
It's too bad they couldn't have designed the GPU with four 8-pin connectors. First, the darn GPU is huge, so real estate should have been no problem; second, we've already seen several prior-gen cards (from AIB partners) with three 8-pins! Squeezing in a fourth should have been no problem on this monster. Just my two cents: I don't like this arrangement on a $1600 GPU. Trying to blame the customer because he theoretically didn't "plug it in right" is pretty poor policy, imo. (Sometimes my wife and I argue over whether I "plugged it in" or not, but, well, that's a different subject...😉)
 