News Melted RTX 4090 16-pin Adapter: Bad Luck or the First of Many?

A standard kettle lead can carry far in excess of 450W, perfectly safely. 12VHPWR is stupid and careless design at its best.
A kettle works on 120/240VAC. To move the same power at 12V, you need 10/20X as much current.
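To put rough numbers on that (a quick back-of-envelope sketch, assuming a 450W load and nominal supply voltages):

# Current needed to deliver 450W at different voltages (I = P / V)
power_w = 450
for volts in (240, 120, 12):
    print(f"{power_w}W at {volts}V -> {power_w / volts:.1f}A")

# 450W at 240V -> 1.9A
# 450W at 120V -> 3.8A
# 450W at 12V  -> 37.5A

Nearly 40A at 12V is why the connector, crimps, and pins matter so much more here than they do on a mains lead.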

BTW, I had a kettle plug blow up on me many years ago. It was always a bit warm, and after a couple of years of wear on a likely already sub-par connection between the wire and blade, it got hot enough to blister and pop, leaving a small scorch mark on the cable and plug.
 
NVIDIA is officially 'looking into it' -
https://www.theverge.com/2022/10/25/23422349/nvidia-rtx-4090-power-cables-connectors-melting-burning
Funny how, initially, the NVIDIA senior technical marketing manager told Jay that it's nothing to worry about. With that comment, I bet he's one of the ones in the hot seat if this really blows up.
We'll have to see how things progress.

Presumably, right now there might be tens of thousands of RTX 4090 cards in the hands of gamers. We've heard about a handful failing (melting connector), and every one of those that I've seen has had a cable with a horizontal bend on the power connector. Now, do we know for certain that the 16-pin cable actually locked into place? Because if it didn't, I could easily see the connection being not quite secure, with the result being arcing and a melted connector. Basically, it could be user error. It could also be design error, or at least user error compounded by design error.

The thing is, with a handful of reported failures, if Nvidia really has shipped 10,000+ units, that's still only a failure rate of 0.04%. If you had 10,000 enthusiasts buying a brand new GPU that uses more power than just about anything before it, do you think four of them might somehow make a mistake? I do! But only Nvidia knows how many units have shipped. Maybe it's actually only 1,000 units and a 0.4% failure rate, or maybe it's 50,000 units and a 0.01% failure rate. It sucks to be in that group, and I would still exercise caution on how much bend you put on the cable, but again we don't know too many things for certain.
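For what it's worth, here's that denominator guessing game as a throwaway sketch (the shipment figures are the same made-up ones as above, not real numbers):

# Hypothetical failure rates for a handful of reported melted connectors,
# against different guesses at how many RTX 4090s have shipped.
reported_failures = 4
for units_shipped in (1_000, 10_000, 50_000):
    rate = reported_failures / units_shipped * 100
    print(f"{reported_failures} failures / {units_shipped:,} units = {rate:.2f}%")

# 4 failures / 1,000 units  = 0.40%
# 4 failures / 10,000 units = 0.04%
# 4 failures / 50,000 units = 0.01%

The reported-failure count and the shipment numbers are both guesses, which is the whole point: the rate swings by an order of magnitude either way.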

At the same time, card failures versus melted cable failures are a different matter, because melting implies the potential for a fire. All it would take is one or two fires and Nvidia and its partners could be forced into issuing a recall. That would be disastrous!
 
Presumably, right now there might be tens of thousands of RTX 4090 cards in the hands of gamers. We've heard about a handful failing (melting connector), and every one of those that I've seen has had a cable with a horizontal bend on the power connector. Now, do we know for certain that the 16-pin cable actually locked into place? Because if it didn't, I could easily see the connection being not quite secure, with the result being arcing and a melted connector. Basically, it could be user error. It could also be design error, or at least user error compounded by design error.
Even if the plastic connector is clipped into place, each of the 12 pin 'receptacles' on the cable (not counting the 4 sense pins) floats a little inside its plastic housing. Kinda like the old 4-pin Molex does, but to a lesser extent. If just one of these receptacles is bent enough by the pin on the GPU-side connector to pry open and cause a poor connection/increased resistance, I believe you will have both more heat at that pin AND more power draw on the other pins (assuming a static workload). This is a compounding issue caused by poor design AND implementation. The 8-pin PCIe power connector has such a huge safety margin per power pin (and power rating per connector) that this has never been a major concern, even with mashed cables at the connector.
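Here's a toy model of that compounding effect (all resistance values are illustrative guesses, not measured figures): treat each +12V pin as wire resistance plus contact resistance, split a fixed 450W/12V load across six power pins, and look at the heat generated right at each contact.

# Six +12V pins in parallel feeding a fixed 450W load at 12V.
# Heat at each pin/receptacle interface is I^2 * R_contact.
TOTAL_CURRENT = 450 / 12      # 37.5 A across the six power pins
R_WIRE = 0.020                # per-pin wire + crimp resistance in ohms (assumed)
R_GOOD = 0.002                # healthy pin/receptacle contact (assumed)
R_BAD = 0.020                 # pried-open receptacle, poor contact (assumed)

def contact_heat(contacts):
    paths = [R_WIRE + rc for rc in contacts]
    total_conductance = sum(1 / r for r in paths)
    currents = [TOTAL_CURRENT / (r * total_conductance) for r in paths]
    return [(i, i * i * rc) for i, rc in zip(currents, contacts)]

for label, contacts in (("all pins good", [R_GOOD] * 6),
                        ("one pin bad  ", [R_GOOD] * 5 + [R_BAD])):
    print(label, [f"{i:.1f}A / {w:.2f}W" for i, w in contact_heat(contacts)])

With those made-up numbers, the pried-open pin dissipates roughly three times the heat at its contact even though it carries a bit less current, while the other five pins each pick up extra current, which is the compounding effect described above.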
 
Even if the plastic connector is clipped into place, each of the 12 pin 'receptacles' on the cable (not counting the 4 sense pins) floats a little inside its plastic housing. Kinda like the old 4-pin Molex does, but to a lesser extent. If just one of these receptacles is bent enough by the pin on the GPU-side connector to pry open and cause a poor connection/increased resistance, I believe you will have both more heat at that pin AND more power draw on the other pins (assuming a static workload). This is a compounding issue caused by poor design AND implementation. The 8-pin PCIe power connector has such a huge safety margin per power pin (and power rating per connector) that this has never been a major concern, even with mashed cables at the connector.
Yeah, I get that. I also wonder if we're going to see stuff where a firm connection over time ends up with fatigue on the pins and receptacles inside the plastic housing, with the result being higher failure rates down the road. But at the same time, the 16-pin connector isn't massively different from the existing 12-pin Nvidia connector. Yes, it can carry more power, though it doesn't necessarily need to. 3090 FE was 'only' 350W, 3090 Ti FE was 450W. 4090 FE and other cards at 450W should behave similarly to the 3090 Ti FE. But who knows how many 3090 Ti FE cards were actually sent out? I guess there were non-FE cards as well in the 3090 Ti family with 16-pin connectors, though, and again we didn't hear a lot of noise about failures.

Fundamentally, though, a smaller connector than a standard 8-pin is carrying up to four times as much power. That's nuts. That's a bad design decision. I thought a year or so back that there was no way we'd actually get 600W graphics cards, but we're basically there with manual overclocking, and we don't even have the 4090 Ti yet. I would have been far happier with a 12-pin connector (plus four sense pins, sure, whatever) that was 50% thicker than the 8-pin connector. These cards are MASSIVE! Who cares if the 12-pin connector is smaller?
 
One thing though - it has been less than 2 weeks. How many of those cards are not even in any PC yet? How many have been built but not really used? I suspect the true failure rate is much higher than your estimate.
Oh, my estimate isn't worth the pixels it's printed with! Because while I'm confident Nvidia and its AIB partners shipped and sold more than 1000 RTX 4090 cards ("confident" -- I don't know for certain), I really have no clear idea how many were actually sold worldwide. I'd be surprised if it was more than 100,000, but that's not impossible. Shocked silly if it was more than 250K, though! My main insight is the fact that AIB partners and Nvidia were very willing to send review samples, which wasn't the case with a lot of card launches over the past two years.

I do think most of the cards that were sold are being actively used by now, but are they being used for intense gaming several hours a day? Probably not. Some are being used for professional workloads as well, I'm sure. I just can't see spending close to $2,000 on a graphics card only to have it sit around for a couple of weeks. I would rip that sucker open and install it ASAP if it were my money!

But I'm also fairly confident that overall failure rates via melting connectors are about 10,000% higher on the RTX 4090 in its first two weeks than they were on any previous GPU. (LOL, I'm just pulling numbers out of a hat at this point.)
 
But I'm also fairly confident that overall failure rates via melting connectors are about 10,000% higher on the RTX 4090 in its first two weeks than they were on any previous GPU. (LOL, I'm just pulling numbers out of a hat at this point.)
You're almost certainly right. However, at this early stage it certainly looks like the unprecedented size of the AIB 4090s, in combination with the level of power these cards require, is just as much a factor as the connector itself. If Nvidia had released a 4070 first, I doubt we would be seeing any reports of melted connectors.

Didn't all Ampere FE cards from the 3070 on up use a similarly sized 12-pin connector? I don't recall a single report of melted connectors in the 2+ years they have been out, so it doesn't seem likely there is a problem with the connector itself. None of those cards had issues with the cable getting smooshed against the side of standard-sized cases either. Nvidia had even smartly angled the connector on the higher-end cards so there was no need for a tight U-turn off the card. Nvidia needs to require board partners to add right-angle adapters in the box of all 4090s.
 
Yeah, I get that. I also wonder if we're going to see stuff where a firm connection over time ends up with fatigue on the pins and receptacles inside the plastic housing, with the result being higher failure rates down the road. But at the same time, the 16-pin connector isn't massively different from the existing 12-pin Nvidia connector. Yes, it can carry more power, though it doesn't necessarily need to. 3090 FE was 'only' 350W, 3090 Ti FE was 450W. 4090 FE and other cards at 450W should behave similarly to the 3090 Ti FE. But who knows how many 3090 Ti FE cards were actually sent out? I guess there were non-FE cards as well in the 3090 Ti family with 16-pin connectors, though, and again we didn't hear a lot of noise about failures.

Fundamentally, though, a smaller connector than a standard 8-pin is carrying up to four times as much power. That's nuts. That's a bad design decision. I thought a year or so back that there was no way we'd actually get 600W graphics cards, but we're basically there with manual overclocking, and we don't even have the 4090 Ti yet. I would have been far happier with a 12-pin connector (plus four sense pins, sure, whatever) that was 50% thicker than the 8-pin connector. These cards are MASSIVE! Who cares if the 12-pin connector is smaller?
The main comparison is between this new 16-pin cable and the old 8-pin cable. The 8-pin is much better in terms of safety margins for its max power draw. I really don't know how this new design got the green light at NVIDIA.

I think it will become more of an issue over time due to plastic and metal fatigue at the connector AND higher power draw with next-gen games (and the gen after that). We may get to a point where, 3 years from now, we won't even bat an eye when we hear of another RTX 4090 connector melting. NVIDIA may have to design (and ship for free) a better 16-pin adapter that is much more resistant to this issue. Probably with receptacle pins molded directly into the plastic (no wiggle room) or hot-glued in place. This, of course, can create issues with pin-to-receptacle alignment.

It's definitely a mess all around for NVIDIA.
 
The main comparison is between this new 16-pin cable and the old 8-pin cable. The 8-pin is much better in terms of safety margins for its max power draw. I really don't know how this new design got the green light at NVIDIA.
Even when you operate a connector at its maximum rated spec, that spec already has safety margins built into it to account for manufacturing tolerances, wear, and other factors over its rated service life. If you need to add even more margin on top, then you probably picked the wrong design for the job.

The main fault here lies with Nvidia, the PCI-SIG, and Intel/ATX 3.0 picking a connector that isn't suited to the kind of abuse it was going to be subjected to in typical consumer setups, especially in conjunction with oversized cards that don't leave much room for bends, even in larger PC cases.
 
Even when you operate a connector at its maximum rated spec, that spec already has safety margins built into it to account for manufacturing tolerances, wear, and other factors over its rated service life. If you need to add even more margin on top, then you probably picked the wrong design for the job.
The 8-pin worked (bad bends and all) because of the extra safety margin built into it. NVIDIA/PCI-SIG lowered this safety margin, power pin for power pin, with this new 16-pin design and are getting burned because of it - literally.
 
The 8-pin worked (bad bends and all) because of the extra safety margin built into it. NVIDIA/PCI-SIG lowered this safety margin, power pin for power pin, with this new 16-pin design and are getting burned because of it - literally.
Google 'melted 8-pin power connector.' You will find no shortage of pictures. I'll ask you again: how many melted 12-pin connectors have you seen with Nvidia Ampere cards?
 
Google 'melted 8-pin power connector.' You will find no shortage of pictures. I'll ask you again: how many melted 12-pin connectors have you seen with Nvidia Ampere cards?
Since you're into Googling, research the per-pin power limit of the power pins in the 8-pin connector vs. the per pin power limit of the power pins in the new 16-pin connector. My statement stands.
I never said the 8-pin connector has never been melted. I will, however, say that they are less susceptible to melting due to what you will find in your research. 😉

The extra safety margin built into the 8-pin connector has been lessened with the new 16-pin connector.
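To save anyone the search, here is the arithmetic behind that point (a quick sketch using the official connector power ratings and +12V pin counts: 150W over three power pins for the 8-pin PCIe, 600W over six for the 16-pin/12VHPWR):

# Per-pin current at each connector's full rated power (I = P / V / pins)
connectors = {
    "8-pin PCIe: 150W over 3 x 12V pins": (150, 3),
    "12VHPWR:    600W over 6 x 12V pins": (600, 6),
}
for name, (watts, pins) in connectors.items():
    print(f"{name} -> {watts / 12 / pins:.1f} A per power pin")

# 8-pin PCIe -> ~4.2 A per pin
# 12VHPWR    -> ~8.3 A per pin

Exactly how much headroom that leaves depends on the specific pins in a given cable (see the datasheets quoted further down), but at full rated power the 16-pin runs each pin roughly twice as hard as a fully loaded 8-pin, which is the reduced margin being argued here.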
 
We had a deep look into max current on video card cables a couple of years back. The problem generally isn't the cables, or the PSUs. They handle plenty of current (or if they don't, it is obvious).

The most common, less obvious problem by far is the quality & specifications of the Molex pins (or copycat 3rd-party Molex pins) used in the cables. There are a wide variety of pins on the market. Dozens of subtly different versions.

An example:
In perfect conditions, the 8.3mm-long pin can handle 13 Amps.
https://www.molex.com/pdm_docs/sd/460123151_sd.pdf

But these nearly visually identical 6.3mm pins can only handle 9 Amps.
https://www.molex.com/pdm_docs/sd/039000048_sd.pdf

Often conditions aren't perfect, however, so a big safety margin should be applied. Pins get pulled to one side and don't make electrical contact along their full length, or the cables don't get fully inserted, or the crimp is badly done, or there is corrosion, etc...

We bought ~10,000 of the "good" high current pins for some custom video cables. But they are hard to find. There have been a lot of supply chain issues. At the time we bought the entire world stock of the high current pins. The cheaper low current ones are more common and available however. I am betting some cable vendors got caught with their pants down and couldn't source the "good" brand name pins and instead got lower spec pins. And here we are with things melting. No surprise really.
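Putting those two datasheet ratings against the kind of per-pin load a 450W-600W card implies makes the point (a rough sketch, assuming the full load is split evenly across six +12V pins as in the 16-pin connector; which pins a given adapter actually uses is exactly the sourcing question raised above):

# Headroom of the two pin variants linked above at 450W and 600W,
# assuming the load is split evenly across six +12V pins.
pin_ratings = {"8.3mm pin (13A)": 13.0, "6.3mm pin (9A)": 9.0}
for watts in (450, 600):
    per_pin = watts / 12 / 6
    for name, rating in pin_ratings.items():
        headroom = (rating - per_pin) / rating * 100
        print(f"{watts}W card: {per_pin:.1f}A through a {name} -> {headroom:.0f}% headroom")

# At 600W the 9A pins are left with single-digit headroom, and that is in *perfect* conditions.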
 
Since you're into Googling, research the per-pin power limit of the power pins in the 8-pin connector vs. the per pin power limit of the power pins in the new 16-pin connector. My statement stands.
I never said the 8-pin connector has never been melted. I will, however, say that they are less susceptible to melting due to what you will find in your research. 😉
I don't need to; it's not the power that's causing this issue. The 3090 Ti FE has a 450W TDP and the 16-pin power cable. You continue to ignore the question: how many Ampere FE cards have you seen melt the connector?
 
The problem here is not the cable but the unwieldy size of the 4090s. I really don't understand how board partners didn't identify this problem when testing cards. Do any of them test boards in actual cases, or just on open test benches? I'm pretty sure the power cable was not designed to endure the pressure it took to shatter this case. This is a Corsair case, not some cheap Chinese knockoff case with thin side glass.


[Image: shattered side-panel glass on the poster's Corsair case]
 
A new one. Nvidia is in deeper and deeper water. Can Jensen swim? My best guess... a class action or a recall of cards; in the best case, sending out new adapters. Nvidia's investigation team can't keep up with this pace.

I think my Asus 4090 TUF OC adapter is melting too

https://ibb.co/94Q97dX https://ibb.co/0q0RWPq https://ibb.co/YXK1d2h https://ibb.co/XYBHG5L
Had to disconnect my GPU to make space for the 13900k installation and I noticed that one of the pins looks melted. What should I do?
Is that your personal GPU? Because yeah, it’s definitely melting.
 