News: AMD Exec Burns Nvidia Over Melting Connectors

Everything that I've read and seen so far suggests the root problem with the 12VHPWR connector is people plugging in the connector improperly. In other words, user error. So for AMD corporate to make any unfounded comment is just going to backfire on AMD, especially when they adopt the ATX 3.0 / PCIe 5 standard themselves.
 

waltc3

Honorable
Aug 4, 2019
453
252
11,060
The really weird thing about this is how easy it would have been for nVidia to mandate idiot-proof latches on the connectors. For $1600+ GPUs, when nVidia already knew about the problem, not to do so was really, really cheapskate. But unfortunately I can think of a reason why maybe they didn't--because then they wouldn't be able to pass the buck to the customer--which was a really, really crass thing to do in the first place. nVidia should have simply said, "Mea Culpa, our mistake. We are shipping new connectors that fix the problem, and even new cards if required, to those affected." But if the problem actually isn't a problem with the "connectors being pushed in all the way" then that would explain why they haven't done that yet...;) Lots of connectors have little latches on them to ensure snug connections, eh? No-brainer.
 

daworstplaya

Distinguished
Oct 30, 2009
221
181
18,760
Look! Steve caused an adapter to melt... by not plugging it in correctly! That must be the only possible explanation as to what's going on, right? Seriously, for all his "fact" talk, the reality is that we still don't have a clear answer on what is causing problems — and more importantly, we don't have a guaranteed solution. His rant on responsible reporting rings a bit hollow, considering he himself has been a major part of the theorycrafting that's taken place. Also, even though he mentioned FOD (foreign object debris) as a factor, like Igor, he has no actual proof that FOD caused any of the failures — he hasn't replicated that; he just showed that FOD is present and then jumped to a conclusion. Anyway, GN spent a LOT of money to make these videos, but it's not like he can actually solve the problem — that's up to Nvidia and its partners. The videos were done to increase his street cred and to get lots of views, while officially providing no answer whatsoever.

Not quite! The connector was created by PCI-SIG at the behest of Nvidia. That's important to remember. On its own, PCI-SIG almost certainly wouldn't have made this connector. Nvidia even pre-empted PCI-SIG's version with the 12-pin connector two years ago. It's exactly the same, but without the sense pins — and funny enough might actually be safer, as there's reason to suppose the extra four sense pins are part of what's potentially causing people to incorrectly install the connector. If the true root cause is user error — not proved yet, but possible — then it indicates a faulty design. People haven't been experiencing melting 8-pin and 12-pin connectors, so what changed that's causing problems now?

I'm also still quite concerned with connector longevity, as someone who swaps GPUs in my test beds often multiple times per day. For that reason, I have to use the adapters! Because if I didn't, I'd need a native 16-pin connector (I have one), and then I'd be plugging that in and unplugging it likely more than 500 times in a year. The 12-pin and 16-pin connectors just won't last under that sort of use, while history shows the 8-pin connectors, being larger and more robust, can manage just fine.

Bottom line, Steve from GN has done the testing and has gotten repeatable results with melting connectors. How many tests has TH done to replicate the issue? If you know how electricity and thermals work, it's a pretty sound explanation. This is user error as far as I'm concerned.

There are a lot of things we use in our daily lives that can cause fires, e.g. propane tanks, toasters, electric lights... yet we read the instructions, follow them correctly, use the products correctly, and keep ourselves safe. At some point people need to take personal responsibility for their own actions instead of asking regulators to get involved.
 
Soooooo, what factual evidence have you found causing the issue? You're being pretty harsh to Steve, and not recognizing that he is the only one that got the issue to replicate multiple times... Let's not dive into the integrity of the burnt connections? I've still yet to see any photos of the adapter plugged in with melting at the corners in its natural state before being removed from the GPU and shown to the public... I'm sticking with we got 26 clowns in the world that can't plug in a connector, it's not rocket science.... "connector has clip, it should probably latch like all the others"
But that's just it: I'm not trying to present things as FACTS! I'm saying, "Hey, it's a fact people have melted connectors. It's also a fact that not plugging them in correctly can cause a meltdown. But unless there's a way to show that's the only way to cause the melting, that's not super useful." I too believe there's a good chance most of the melted adapters came via user error — some might have even happened when someone read about the problem, then started unplugging and checking the connector several times a day and ended up with a problem! LOL But I don't know.

And I do respect what Steve has done, but at the same time: WHERE WAS THE NEED!? The only real answer that can be acceptable will have to come via Nvidia and the board partners. Heck, he might have lost money (temporarily) doing all these videos and tests on the melting adapters. And Igor spouting off with all sorts of theories was a joke. Nvidia has way more expertise and financial backing to do the proper investigations, and also come up with a proper solution — even if the solution is just, "Make sure you plug it in properly, contact us if your adapter melts and we'll do an RMA." Investigative journalists getting into the mix with potential solutions (especially this early) just muddies the waters.

I didn't even try to replicate the problems people were experiencing, because there are just too many factors involved. Nvidia could get the parts from users, offer to replace them, etc. What could I do? "Hey, umm... send me your card and I'll poke around at it!" So we reported on the facts and repeatedly stated, "We don't have enough data for any firm conclusions and we need to wait for an official statement from Nvidia." I suspect Steve is probably right, and that it's just user error. I also suspect Nvidia will release a statement to that effect. But Nvidia will also need to figure out a "fix" because if it's that easy to not install the connector properly, it's a bad design. Maybe we can all go back to 12-pin adapters and forget about the useless extra sense pins? (Useless on the adapters, I mean: They don't do anything! They're for sending information back to the PSU, which the 8-pin connectors that link to the adapter don't even support.)
 

Arbie

Distinguished
Oct 8, 2007
208
65
18,760
Electricity is conducted over the surface of the metal, so you have very visible "pathways" electricity can take, so if you have surface debris (obstacles), then like a fluid it will just go around them until the alternative path becomes saturated while still hitting that debris on its path creating a lot of heat. That's in layman terms
That isn't a parallel path, in standard EE terms. And no "high resistance parallel path" would generate "more" heat since P = V^2/R. The effect of a foreign object inserted in the path would only be described as an increase in series resistance, causing the resulting conductive path to dissipate more there per P = I^2R. Hence my confusion over their choice of words. Maybe another EE can square this circle for us. Layman's terms aren't going to do it.
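To put rough numbers on the series-resistance point, here's a minimal sketch (the current and resistance values are illustrative assumptions, not measurements from any real adapter): at a fixed pin current, the heat generated in the contact scales directly with its resistance, per P = I^2R.

```python
# Illustrative numbers only, not measurements from any real adapter:
# dissipation at a single pin contact at fixed current, per P = I^2 * R.

current_per_pin = 8.3    # amps; roughly 600 W / 12 V spread across six 12V pins
good_contact = 0.005     # ohms; a healthy crimp/contact in the low-milliohm range
bad_contact = 0.050      # ohms; an oxidized, debris-fouled, or partially mated contact

for label, r in (("good contact", good_contact), ("bad contact", bad_contact)):
    power = current_per_pin ** 2 * r
    print(f"{label}: {power:.2f} W dissipated in the contact")
```

A tenfold increase in contact resistance means a tenfold increase in heat at that one tiny joint (about 0.34 W vs. 3.4 W in this sketch), which is why a single poor contact can char while the rest of the connector looks fine.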
 
The really weird thing about this is how easy it would have been for nVidia to mandate idiot-proof latches on the connectors. For $1600+ GPUs, when nVidia already knew about the problem, not to do so was really, really cheapskate. But unfortunately I can think of a reason why maybe they didn't--because then they wouldn't be able to pass the buck to the customer--which was a really, really crass thing to do in the first place. nVidia should have simply said, "Mea Culpa, our mistake. We are shipping new connectors that fix the problem, and even new cards if required, to those affected." But if the problem actually isn't a problem with the "connectors being pushed in all the way" then that would explain why they haven't done that yet...;) Lots of connectors have little latches on them to ensure snug connections, eh? No-brainer.
An easy fix would be a button on the graphics card's female power connector that gets depressed when the 12+4 male connector is properly latched, telling the graphics card that it's safe to pull power. I feel like making sure the power connector is properly seated and latched is common sense, but then gasoline being flammable is also common sense and apparently it's still necessary to put a "flammable" warning on it.
 
Soooooo, what factual evidence have you found causing the issue? You're being pretty harsh to Steve, and not recognizing that he is the only one that got the issue to replicate multiple times... Let's not dive into the integrity of the burnt connections? I've still yet to see any photos of the adapter plugged in with melting at the corners in its natural state before being removed from the GPU and shown to the public... I'm sticking with we got 26 clowns in the world that can't plug in a connector, it's not rocket science.... "connector has clip, it should probably latch like all the others"
Was PCI-SIG's peripheral component testing also run by clowns?
I'm not downing GN's contribution. I think they put a lot of time and money into making a great piece. I also think that connectors being only partially plugged in is one of the issues, but we have an actual peripheral component standards organization that experienced melted connectors too, and yes, they had the cable plugged in all the way.
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,419
944
20,060
As for the dodgy power connector...yeah, they should have stuck with the 8-pins. EVGA was right to backlash and GTFO. Having said that, NVIDIA has had amazing products; everyone screws up. As long as they take ownership and correct it, no hard feelings. It does indeed seem to be a very limited number of cases where people bent the cable out of spec, but the 8-pins are superior in every way except that they take up a bit more space. Oof.
So far, that hasn't happened; they've been mostly radio silent about it and state that they're "investigating the issues".
8-Pin, tried & true, time tested, & very reliable in many of the worst case scenarios.


I'm also still quite concerned with connector longevity, as someone who swaps GPUs in my test beds often multiple times per day. For that reason, I have to use the adapters! Because if I didn't, I'd need a native 16-pin connector (I have one), and then I'd be plugging that in and unplugging it likely more than 500 times in a year. The 12-pin and 16-pin connectors just won't last under that sort of use, while history shows the 8-pin connectors, being larger and more robust, can manage just fine.
CableMods Cables must be your new favorite site, having to get extra "consumable" 16-pin connectors just for that extra safety factor so that you don't burn down your test rig.
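Just to put a number on the wear, a back-of-the-envelope sketch (the ~30 mating-cycle figure is an assumption based on ratings commonly quoted for these compact connector families; the datasheet for a specific part may differ):

```python
# Rough estimate of connector wear for a heavy GPU swapper.
# The 30-cycle rating is an assumed, commonly quoted figure, not a spec citation.

plug_cycles_per_year = 500   # from the scenario quoted above
rated_mating_cycles = 30     # assumed rating for this class of connector

print(f"~{plug_cycles_per_year / rated_mating_cycles:.0f} connector ends used up per year")
```

That works out to roughly 17 "consumable" cables or adapters a year at that pace.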
 

RandomWan

Prominent
Sep 22, 2022
59
65
610
The really weird thing about this is how easy it would have been for nVidia to mandate idiot-proof latches on the connectors. For $1600+ GPUs, when nVidia already knew about the problem, not to do so was really, really cheapskate. But unfortunately I can think of a reason why maybe they didn't--because then they wouldn't be able to pass the buck to the customer--which was a really, really crass thing to do in the first place. nVidia should have simply said, "Mea Culpa, our mistake. We are shipping new connectors that fix the problem, and even new cards if required, to those affected." But if the problem actually isn't a problem with the "connectors being pushed in all the way" then that would explain why they haven't done that yet...;) Lots of connectors have little latches on them to ensure snug connections, eh? No-brainer.

The connector is as idiot-proof as it can get without overengineering it. Almost all of these connectors are made to a standard, but are manufactured by multiple companies. It's almost impossible for them all to be working from the exact same tooling and process, so there are going to be variances that cause tolerance stacking. Some will fit together better than others. It's up to the end user to assemble the final components properly. If you aren't up to the task of doing it correctly and verifying your work, maybe a prebuilt is for you (even they have errors).
 

Kamen Rider Blade

Distinguished
Dec 2, 2013
1,419
944
20,060
Look! Steve caused an adapter to melt... by not plugging it in correctly! That must be the only possible explanation as to what's going on, right? Seriously, for all his "fact" talk, the reality is that we still don't have a clear answer on what is causing problems — and more importantly, we don't have a guaranteed solution. His rant on responsible reporting rings a bit hollow, considering he himself has been a major part of the theorycrafting that's taken place. Also, even though he mentioned FOD (foreign object debris) as a factor, like Igor, he has no actual proof that FOD caused any of the failures — he hasn't replicated that; he just showed that FOD is present and then jumped to a conclusion. Anyway, GN spent a LOT of money to make these videos, but it's not like he can actually solve the problem — that's up to Nvidia and its partners. The videos were done to increase his street cred and to get lots of views, while officially providing no answer whatsoever.
The guaranteed solution needs to come from nVIDIA.

He just needs to figure out the cause of the melting/fire hazard.

Steve really didn't do much theorycrafting; most of that started with Jayz2cents & Igor of Igor's Lab.

We also needed YOU to investigate the situation. Where's your personal analysis on Adapter-gate?

I'm sure you can come up with a video / article analyzing the situation and coming up with the points of failure, right?

The more the merrier when it comes to investigating "Adapter-Gate", right?
 
That isn't a parallel path, in standard EE terms. And no "high resistance parallel path" would generate "more" heat since P = V^2/R. The effect of a foreign object inserted in the path would only be described as an increase in series resistance, causing the resulting conductive path to dissipate more there per P = I^2R. Hence my confusion over their choice of words. Maybe another EE can square this circle for us. Layman's terms aren't going to do it.
https://www.electronics-tutorials.ws/resistor/res_4.html

That is what I was talking about with the part you seem to have not read and did not quote.

All DC-operated circuits follow equations (as long as they stay within Ohm's law) that tell you how the current distributes itself, and from that you calculate the power dissipated along each path.

Regards.
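To make the parallel-path framing concrete, here's a minimal sketch treating the six 12V pins as parallel contact resistances sharing a fixed total current (all resistance and current values are illustrative assumptions, not measurements from any real adapter):

```python
# Six 12V pins modeled as parallel contact resistances sharing a fixed
# total current. Values are illustrative assumptions, not measurements.

def per_pin_power(contact_resistances, total_current):
    # Current divider: each pin carries a share proportional to its conductance,
    # then dissipates P = I^2 * R in its own contact resistance.
    conductances = [1.0 / r for r in contact_resistances]
    g_total = sum(conductances)
    return [((g / g_total) * total_current) ** 2 * r
            for g, r in zip(conductances, contact_resistances)]

total_current = 50.0                  # amps, roughly 600 W on the 12 V rail
nominal = [0.005] * 6                 # six healthy ~5 mOhm contacts
degraded = [0.005] * 2 + [5.0] * 4    # four pins barely making contact

print([round(p, 2) for p in per_pin_power(nominal, total_current)])   # ~0.35 W each
print([round(p, 2) for p in per_pin_power(degraded, total_current)])  # ~3.1 W on the two good pins
```

In this toy model the current doesn't so much "go around" the bad contacts as pile onto the good ones: the two pins that still mate well end up carrying roughly triple the current and dissipating nearly ten times the power each, which fits the reports of melting concentrated at one edge of the connector.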
 

TJ Hooker

Titan
Ambassador
Electricity is conducted over the surface of the metal, so you have very visible "pathways" electricity can take, so if you have surface debris (obstacles), then like a fluid it will just go around them until the alternative path becomes saturated while still hitting that debris on its path creating a lot of heat. That's in layman terms.

Engineering wise, there must be an equation that rules cable conductivity of the adapter and imagine you just put a resistance in the way while still providing the circuit alternate "parallel" paths to choose from until those paths themselves become saturated, or something along those lines?

It's been ages since I've made any circuitry design and resolved the corresponding equations, but this comes down to basics I think.

Regards.
Current being restricted to the surface of a conductor applies to AC (and becomes more pronounced the greater the frequency). DC current should be more or less uniform throughout the conductor. The 12V line will have some AC components (ripple, transients), but is mostly DC. And metallic conductors don't "saturate".

How can a piece of debris be both "in the way" and simultaneously in parallel?
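For anyone curious about the skin-effect side of this, a quick sketch of the textbook skin-depth formula for copper, delta = sqrt(rho / (pi * f * mu)) (standard constants; the frequencies are just examples):

```python
import math

# Skin depth in copper: delta = sqrt(rho / (pi * f * mu)).
# Textbook constants; example frequencies only.
rho_copper = 1.68e-8      # ohm*m, resistivity of copper
mu = 4 * math.pi * 1e-7   # H/m, permeability (mu_r ~ 1 for copper)

for freq in (60.0, 1e3, 100e3):
    delta = math.sqrt(rho_copper / (math.pi * freq * mu))
    print(f"{freq:>8.0f} Hz: skin depth ~ {delta * 1e3:.2f} mm")
```

At 60 Hz the skin depth comes out around 8 mm, while a 16 AWG conductor (roughly what these adapters use, as I understand it) has a radius of only about 0.65 mm, so even the AC ripple riding on the 12V rail effectively uses the full cross-section; the "surface only" picture only matters at much higher frequencies.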
 
Wait, are you saying that PCI SIG has replicated (properly installed) connectors melting? Do you have a source for that?
A copy of PCI-SIG's original letter sent out BEFORE the RTX 4090 was released - https://wccftech.com/atx-3-0-12vhpw...r-safety-risk-using-adapter-confirms-pci-sig/
A good read on the leaked PCI-SIG email obtained by Wccftech (and subsequently leaked everywhere) - https://cultists.network/8815/melting-12vhpwr-connectors/

As far as the properly installed part goes, again, this is an organization that tests and sets standards for peripheral components. There is no way they performed tests with the cable only partially inserted. They don't test the human factor.
 

TJ Hooker

Titan
Ambassador
A copy of PCI-SIG's original letter sent out BEFORE the RTX 4090 was released - https://wccftech.com/atx-3-0-12vhpw...r-safety-risk-using-adapter-confirms-pci-sig/
A good read on the leaked PCI-SIG email obtained by Wccftech (and subsequently leaked everywhere) - https://cultists.network/8815/melting-12vhpwr-connectors/

As far as the properly installed part goes, again, this is an organization that tests and sets standards for peripheral components. There is no way they performed tests with the cable only partially inserted. They don't test the human factor.
Ah, right, I do remember seeing that. At the time I thought PCI SIG email/attachment only referred to reports/testing of melting connectors from 3rd parties, but looking at it again it does seem like it could be the results of their own testing.

Edit: Nevermind, it doesn't seem like that testing was done by PCI SIG after all, but rather by Nvidia.
 

spongiemaster

Admirable
Dec 12, 2019
2,346
1,325
7,560
A copy of PCI-SIG's original letter sent out BEFORE the RTX 4090 was released - https://wccftech.com/atx-3-0-12vhpw...r-safety-risk-using-adapter-confirms-pci-sig/
A good read on the leaked PCI-SIG email obtained by Wccftech (and subsequently leaked everywhere) - https://cultists.network/8815/melting-12vhpwr-connectors/

As far as the properly installed part goes, again, this is an organization that tests and sets standards for peripheral components. There is no way they performed tests with the cable only partially inserted. They don't test the human factor.
Nothing in that letter states that PCI SIG has done any testing. Way back before the 4090 was even released, Nvidia reported to PCI SIG that they had seen issues with the connector, and PCI SIG reported those findings to other members.

[Attached image: PCI-SIG-POWER-CABLE-1.jpg]
 
Nothing in that letter states that PCI SIG has done any testing. Way back before the 4090 was even released, Nvidia reported to PCI SIG that they had seen issues with the connector, and PCI SIG reported those findings to other members.
Check out the second link. Read through the whole thing.

Edit - All of this occurred, behind the scenes, before the first RTX 4090 hit the market.
 

spongiemaster

Admirable
Dec 12, 2019
2,346
1,325
7,560
:LOL:
They're not referring to the whole of PCI-SIG's findings, just Wccftech reporting that PCI-SIG 'warned against the use of non-ATX 3.0 PSUs and 8-pin to 12VHPWR adapters'.
Again, there is nothing in any of that stating PCI SIG did any testing that resulted in melted connectors. You ignored what I posted which showed it was Nvidia that did the testing.