News Nvidia's RTX 5090 power cables may be doomed to burn

People are trying to write about this as if it was a super-urgent fire problem. It isn't. None of them have actually caught fire.
It's not going to burst into open flame; at worst it will just stink with some toxic smoke.
But seriously, they cling to this connector because it was cheap. Nice power connectors like Harting cost $30-40 for just half of the connection, which is why a lot of industrial computing electronics cost significantly more. These are $2-5 connectors that are burning up.
 
Kinda funny to own the fastest GPU on earth and still have it melt plugs...

One would have thought Nvidia would have learned its lesson last gen and just gone back to three 8-pin connectors...

Not sure AMD has to do anything to win the GPU war; just let Nvidia wallow in its own stupidity. Either you buy a 50-series card and have it melt the plug, or you don't buy one because it might melt the plug...

Or get rolled by the low VRAM in the lesser-model cards!

Then wait for Nvidia to supersede its current rubbish with 2% better specs

OR

Buy AMD, which seems safer...
 
It's not going to burst into open flame; at worst it will just stink with some toxic smoke.
But seriously, they cling to this connector because it was cheap. Nice power connectors like Harting cost $30-40 for just half of the connection, which is why a lot of industrial computing electronics cost significantly more. These are $2-5 connectors that are burning up.
Some correction: for most people, having a card that expensive burn up with a denied warranty is not really different from it bursting into flames. Most people are using them for gaming, and if anything really catches fire it can likely be extinguished before the house burns down. But the complaint about the stupid melting connector is the same: we've been paying $1k+ since the 4090, and to begin with there are absolutely NO safeguards of any sort on the power-monitoring or distribution side of things.

Blaming the cables, or the user for re-using a cable a dozen times when they detach the card to install/replace an NVMe drive or to clean, is just plain stupid blame-the-victim thinking.

So Nvidia and the PCI-SIG push this brand-new, future-proof standard; after two years and a previous-gen card with melting connectors, we are paying $2k for a GPU and $500 for a 1600W PSU that supposedly comes with a decent cable, and you tell me that if I don't buy a bloody new cable and throw away the old one, it's my own fault for re-using a detachable cable? The old power cables could easily handle the same rated 30+ mating cycles with no problem, and now after a dozen or fewer cycles it's our own issue?
 
Their use of Molex connectors is incorrect, and they definitely need a better connector.

Better connectors do exist, but they are going to be expensive.

The problem is that the Molex male pin is rated 12A and the female pin 10A. What compounds this is that the number of pins de-rates the current capacity.

A connector with 6 pins is 9 amps per pin max, but continuous duty is 80% of this, so the max continuous current is 7.2A per pin.
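As a quick illustration of that claim (the 9 A per-pin figure and the 80% continuous-duty factor are the poster's numbers above, not verified spec values), a minimal sketch:

```python
# Continuous-duty derating arithmetic for a 6-pin connector, using the poster's
# 9 A per-pin figure and an 80% continuous-duty factor (both assumptions here).

PIN_MAX_A = 9.0
CONTINUOUS_FACTOR = 0.80
PINS = 6
VOLTAGE = 12.0

continuous_per_pin = PIN_MAX_A * CONTINUOUS_FACTOR        # 7.2 A
continuous_total_w = continuous_per_pin * PINS * VOLTAGE  # 518.4 W at 12 V

print(f"{continuous_per_pin:.1f} A per pin continuous")
print(f"{continuous_total_w:.0f} W total across six 12 V pins")
```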

Btw, this has been known about this connector for years. The only people who have issues with it are companies who decided to move to China and got rid of their American engineers. So I laugh about this and call it karma.

May all the traitor corporations suffer financially from their failed products.
Do you have a source that states that max continuous current is only 80% of the specified ampacity? I assume the spec sheet you're looking at for 9 A per pin is this one: https://www.molex.com/content/dam/m...onpdf/206/206460/2064600000-PS-000.pdf?inline

Molex's own spec sheet for their 12VHPWR board header lists a rating of 9.2 A per pin. If continuous is really only 80% of that value (i.e. 7.4 A), then that would mean a part they list as "PCIe CEM5" wouldn't meet the PCIe CEM5 spec (which mandates 9.2 A per pin, continuous) as per its own spec sheet. Which would make no sense.
https://www.molex.com/en-us/products/part-detail-pdf/2191160161?display=pdf

As an aside, I don't think this really has anything to do with my comment you quoted regarding whether the connector is UL certified or not.
 
But seriously, they cling to this connector because it was cheap. Nice power connectors like Harting cost $30-40 for just half of the connection, which is why a lot of industrial computing electronics cost significantly more. These are $2-5 connectors that are burning up.
The HPWR connector was intended to replace 6-pin and 8-pin power for everything from 75W to 600W. You cannot slap a $30 connector on a $150 GPU and $50 PSU.

If saving costs was really their primary objective, XT60 is smaller, cheaper and quicker to assemble than HPWR without the load balance issues. But then you would need to deal with a pair of #8 or thicker wires.
 
Do you have a source that states that max continuous current is only 80% of the specified ampacity?
Probably pulled that out of the NEC. Long-duration loads shouldn't exceed 80% of the circuit's rating to be on the safer side of manufacturers and contractors cheaping out on install quality and materials. And also to avoid nuisance trips from perpetually being on the verge of thermal-tripping and heat-soaking nearby breakers.

The NEC provides guidelines for electricians. It doesn't apply to things independently certified by licensed engineers. If engineers at Nvidia, Intel, PCI-SIG, Molex, PSU manufacturers, GPU manufacturers, etc. certified their HPWR designs, that is the end of the story until someone proves otherwise in court. That is Anon vs. a whole bunch of engineers across the whole industry.
 
The HPWR connector was intended to replace 6-pin and 8-pin power for everything from 75W to 600W. You cannot slap a $30 connector on a $150 GPU and $50 PSU.

If saving costs was really their primary objective, XT60 is smaller, cheaper and quicker to assemble than HPWR without the load balance issues. But then you would need to deal with a pair of #8 or thicker wires.
There are a lot of options out there. One that would be better is a Positronic connector.
If I'm buying a $1500 video card, I'd rather see one of these connectors on it than that junk connector.
[Image: Positronic connector]

https://www.mouser.com/datasheet/2/1093/C014RevG6_PCS-3395029.pdf
 
Do you have a source that states that max continuous current is only 80% of the specified ampacity? I assume the spec sheet you're looking at for 9 A per pin is this one: https://www.molex.com/content/dam/m...onpdf/206/206460/2064600000-PS-000.pdf?inline
The shell itself has a current limit: 9A is the norm for a 6-pin, while 6A is the normal limit for an 8-pin. This is the common de-rating of the connector with these pins, considering the temperature rise at continuous current, for a Molex connector.

What happened is that no one consulted someone with a degree in electronics technology, who is the electronics engineer's sanity check and assistant when designing something.

Look at the data for the contacts, and you will see why:

https://www.molex.com/content/dam/m...s/testsummarypdf/555/5556/TS-5556-002-001.pdf

UL does not get involved with a part unless it's connected directly to AC power. It's up to the electronics engineers to select the correct parts to pass certification. Also, they wouldn't care if the power connector self-destructed; they would care if the safety circuit didn't shut down and disable the unit. That is what they would be concerned with.

But what I would reference is the connector, and not the series datasheet:

https://www.molex.com/en-us/products/part-detail-pdf/455590002?display=pdf
 
Nvidia's decision to ditch current balance monitoring is likely backed by millions of combined internal testing hours showing that it wasn't necessary.
Except that it likely would have prevented the vast majority of the cable melting, and they had the opportunity to rectify this with the 5090 but chose to fly closer to the sun instead. Doing proper current balance monitoring would have cost them very little compared to the cost of the cards it matters for. Claiming they determined it's not necessary is giving them a pass on a clearly negligent design.
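For reference, per-wire balance monitoring is conceptually just reading each wire's current through its own shunt and flagging anything that drifts too far from the average or above its continuous rating. A minimal sketch with made-up readings and thresholds (not Nvidia's or any AIB's actual implementation):

```python
# Hypothetical per-wire current balance check for a 6-wire 12VHPWR/12V-2x6 feed.
# The readings and the 30% imbalance threshold are illustrative assumptions.

WIRE_RATING_A = 9.2       # per-pin continuous rating cited earlier in the thread
IMBALANCE_LIMIT = 0.30    # flag wires more than 30% above the average

def check_balance(currents_a):
    """Return warnings for over-current or badly imbalanced wires."""
    warnings = []
    avg = sum(currents_a) / len(currents_a)
    for i, amps in enumerate(currents_a):
        if amps > WIRE_RATING_A:
            warnings.append(f"wire {i}: {amps:.1f} A exceeds the {WIRE_RATING_A} A rating")
        elif avg > 0 and amps > avg * (1 + IMBALANCE_LIMIT):
            warnings.append(f"wire {i}: {amps:.1f} A is {amps / avg:.1f}x the average")
    return warnings

# Example: one wire hogging most of a ~575 W (~48 A at 12 V) load
print(check_balance([2.1, 3.0, 2.8, 22.5, 8.9, 8.7]))
```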
 
Except that it likely would have prevented the vast majority of the cable melting, and they had the opportunity to rectify this with the 5090 but chose to fly closer to the sun instead. Doing proper current balance monitoring would have cost them very little compared to the cost of the cards it matters for. Claiming they determined it's not necessary is giving them a pass on a clearly negligent design.
At this point I think the ones who panicked most about the drama rising again are the scalpers... it's no fun when people aren't trying to buy it at all costs.
 
There are a lot of options out there. One that would be better is a Positronic connector.
If I'm buying a $1500 video card, I'd rather see one of these connectors on it than that junk connector.
Again, enthusiast GPUs are sharing product space with entry-level system buyers. The PCI-SIG and Intel/ATX 3.0 intend HPWR to be used to power everything from 75W to 600W, everything from entry-level to high-end. When you want to mandate one 12V connector to rule them all (eventually) in the consumer space, you cannot spec $100+ worth of parts (GPU-side connector, PSU-side connector, 2x cable connectors) as a requirement.
 
If adding current monitoring adds $3 to costs for the monitoring chip itself, the shunts, LC filters, board space, etc., it is cheaper to do $200 RMAs on 1% of GPUs than adding the circuitry to all GPUs.
Current monitoring will not help the situation, since current delivery is not the issue. When you connect power wires in parallel, they will split the current. This phenomenon is called Kirchhoff's law.

It's the contact's temperature rise under load that is why this connector type is normally limited to 3-5 amps. It's total ignorance that they used such a connector in the first place.
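For what it's worth, how parallel wires actually split current follows from each path's end-to-end resistance (a simple current divider). A quick sketch with made-up resistances shows how degraded contacts shift the load around:

```python
# Current divider for six parallel 12 V wires feeding one load.
# The resistances are illustrative; a damaged pin is modeled as a high-resistance path.

def split_current(total_a, resistances_ohm):
    """Split a total current across parallel paths in proportion to their conductance."""
    conductances = [1.0 / r for r in resistances_ohm]
    g_total = sum(conductances)
    return [round(total_a * g / g_total, 1) for g in conductances]

good = [0.010] * 6                  # six healthy ~10 mOhm end-to-end paths
one_bad = [0.200] + [0.010] * 5     # one contact degraded to 200 mOhm
most_bad = [0.200] * 5 + [0.010]    # five degraded contacts, one healthy wire

print(split_current(48, good))      # ~8.0 A per wire
print(split_current(48, one_bad))   # ~0.5 A on the bad wire, ~9.5 A on the rest
print(split_current(48, most_bad))  # the single healthy wire ends up near 38 A
```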
 
Again, enthusiast GPUs are sharing product space with entry-level system buyers. The PCI-SIG and Intel/ATX 3.0 intend HPWR to be used to power everything from 75W to 600W, everything from entry-level to high-end. When you want to mandate one 12V connector to rule them all (eventually) in the consumer space, you cannot spec $100+ worth of parts (GPU-side connector, PSU-side connector, 2x cable connectors) as a requirement.
It's not really my problem that manufacturers built thousands of power supplies with substandard parts for the application. Besides, the ATX 3.0 specifications are somewhat of a joke because they don't build off previous versions; otherwise all ATX 3.0 supplies would have a 3A 5Vsb, which only some do.
 
If adding current monitoring adds $3 to costs for the monitoring chip itself, the shunts, LC filters, board space, etc., it is cheaper to do $200 RMAs on 1% of GPUs than adding the circuitry to all GPUs.
This is a ridiculous argument that lets poor corporate culture milk consumers; it should not be a free pass just because they could save a few bucks instead of properly designing a GPU costing $2000.

Again, enthusiast GPUs are sharing product space with entry-level system buyers. The PCI-SIG and Intel/ATX 3.0 intend HPWR to be used to power everything from 75W to 600W, everything from entry-level to high-end. When you want to mandate one 12V connector to rule them all (eventually) in the consumer space, you cannot spec $100+ worth of parts (GPU-side connector, PSU-side connector, 2x cable connectors) as a requirement.
Again, it's EXACTLY the reason to:
1) Not rate it to the cosmos at 600W, leaving non-existent overhead on the connector; rate it for 300W and require 600W cards to use two connectors (the rough per-pin numbers below show how thin the margin is at 600W).

2) Put the sensing and current balancing/protection circuitry on the SKUs that draw 450W+. A 150W part won't remotely go up melting even if only one wire is active, so there's no need for the $200 SKU to have safety monitoring, but the top-of-the-line 600W SKU? You'd have to be kidding, or be paid by them, to think it's OK for them to treat consumers this way.
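To put rough numbers on that margin (simple arithmetic, assuming the load splits perfectly evenly across the connector's six 12 V pins; the 9.2 A figure is the per-pin continuous rating cited earlier in the thread):

```python
# Per-pin current for a 12VHPWR/12V-2x6 connector, assuming an ideal even split
# across its six 12 V pins. Wattages are just illustrative operating points.

PIN_RATING_A = 9.2
PINS = 6
VOLTAGE = 12.0

for watts in (150, 300, 450, 600):
    amps_per_pin = watts / VOLTAGE / PINS
    headroom_pct = (PIN_RATING_A - amps_per_pin) / PIN_RATING_A * 100
    print(f"{watts:>3} W -> {amps_per_pin:4.2f} A per pin ({headroom_pct:4.0f}% headroom)")

# 600 W works out to ~8.3 A per pin, only ~9% below the 9.2 A rating, so any
# uneven split quickly pushes individual pins past their limit.
```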
 
If adding current monitoring adds $3 to costs for the monitoring chip itself, the shunts, LC filters, board space, etc...
Except it obviously wouldn't. Shunts, filters and board components of that ilk cost pennies bought in bulk. Nobody knows the true BOM cost of course, but even taking one-third of RRP as the manufacturing budget, this is suggesting that just under 0.5% of it would be required for that current monitoring.

It should also be noted that the issue found isn't the lack of active current monitoring but the removal of previously-present passive circuitry that treated the incoming 6 power lines separately. But anyway, let's say they did really want to save $3 per board...

...it is cheaper to do $200 RMAs on 1% of GPUs than adding the circuitry to all GPUs.
Let's take sales of 200,000 units. (Reportedly they sold 160,000 4080s in the first month or something, so as good a figure to work with as any. Feel free to provide better.)

200,000 x $2000 = $400 m revenue.
$3 (suspiciously high) saving per board means $0.6 m saved in costs.
1% failures at a (suspiciously low) $200 cost to repair per board = $0.4 m additional costs.

Net saving against $400 m revenue = $200,000, or basically $1/unit for something they might sell, what, one or two million of? For a company with a current $3.4 t market cap. Plus the inevitable poor publicity of melting connectors.

I mean, there's looking after the pennies...yeah, but no. They're not omitting these parts to increase the bottom line.
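The same back-of-envelope sums as a small script (all figures are the assumptions from the post above, not known BOM or RMA numbers):

```python
# Back-of-envelope cost/benefit of adding per-wire monitoring, using the assumed
# figures above: 200,000 units, $3 BOM adder, 1% failures, $200 per RMA.

units = 200_000
price = 2_000        # USD per card
bom_adder = 3        # assumed cost of the monitoring circuitry per board
failure_rate = 0.01
rma_cost = 200       # assumed cost per RMA

revenue = units * price
monitoring_cost = units * bom_adder
rma_spend = units * failure_rate * rma_cost
net_saving = monitoring_cost - rma_spend

print(f"revenue:           ${revenue / 1e6:.0f} M")
print(f"monitoring cost:   ${monitoring_cost / 1e6:.1f} M")
print(f"expected RMA cost: ${rma_spend / 1e6:.1f} M")
print(f"net saving:        ${net_saving:,.0f} (~${net_saving / units:.2f} per unit)")
```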

If saving costs was really their primary objective, XT60 is smaller, cheaper and quicker to assemble than HPWR without the load balance issues. But then you would need to deal with a pair of #8 or thicker wires.
Exactly. Saving costs at this level isn't really their primary objective. I stand by my theory that they found the combination of 4/5090 card plus 12VHPWR just couldn't stay stable when load balancing was passively checked so they decided to let wires get hot instead.
 
Current monitoring will not help the situation, since current delivery is not the issue. When you connect power wires in parallel, they will split the current. This phenomenon is called Kirchhoff's law.

It's the contact's temperature rise under load that is why this connector type is normally limited to 3-5 amps. It's total ignorance that they used such a connector in the first place.
Contact temperature alone wouldn't cause one wire/pin to have 20X worse end-to-end resistance than the other. On a good connector, temperature differences explain maybe a 5% variance since the bulk of resistance is in the 18-36" of wiring between the PSU and GPU. When derB used new cables on his RTX5090 and PSU, the difference was about 15% from best to worst wire.

If some wires see 50+% more current than the others, something has physically gone wrong. With 6/8-pin cables, that often happened due to over-crimping of daisy-chain cables, because standard MiniFitJr pins weren't designed for that. This is where the strong recommendation came from to use every single-ended 6/8-pin cable from the PSU before resorting to any daisy-chain cables, and to stagger first-connector connections between GPUs in multi-SLI setups where daisy-chaining is unavoidable.
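A quick sanity check of the "temperature alone" point, using copper's temperature coefficient of resistance (a textbook value of roughly 0.39%/°C; the baseline resistance and temperature rises are illustrative):

```python
# How much a temperature rise alone can change a copper path's resistance.
# ALPHA_CU is the textbook coefficient; the 10 mOhm baseline is illustrative.

ALPHA_CU = 0.00393   # per deg C
R_BASE = 0.010       # end-to-end path resistance at room temperature, in ohms

for rise_c in (10, 30, 50):
    r_hot = R_BASE * (1 + ALPHA_CU * rise_c)
    print(f"+{rise_c} C -> {r_hot * 1000:.2f} mOhm ({ALPHA_CU * rise_c * 100:.0f}% higher)")

# Even a 50 C rise only raises resistance by ~20%, nowhere near the 20x difference
# needed for one wire to end up carrying most of the load.
```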
 
Contact temperature alone wouldn't cause one wire/pin to have 20X worse end-to-end resistance than the other. On a good connector, temperature differences explain maybe a 5% variance since the bulk of resistance is in the 18-36" of wiring between the PSU and GPU. When derB used new cables on his RTX5090 and PSU, the difference was about 15% from best to worst wire.

If some wires see 50+% more current than the others, something has physically gone wrong. With 6/8-pin cables, that often happened due to over-crimping of daisy-chain cables, because standard MiniFitJr pins weren't designed for that. This is where the strong recommendation came from to use every single-ended 6/8-pin cable from the PSU before resorting to any daisy-chain cables, and to stagger first-connector connections between GPUs in multi-SLI setups where daisy-chaining is unavoidable.
So what's the point again?? De8auer himself said that the issue is that there's no monitoring or warning sign before the bloody cable melts. It's not like Nvidia was forced to adopt the ridiculous standard; they are the pushers, and after last gen's issue they still didn't do anything to protect their consumers or their brand image.
 
So what's the point again?? De8auer himself said that the issue is that there's no monitoring or warning sign before the bloody cable melts.
The issue isn't the lack of monitoring, as it shouldn't be necessary in the first place. The amount of imbalance people are experiencing is physically impossible without cable/connector damage or defects well in excess of what the specs allow.
 
The issue isn't the lack of monitoring, as it shouldn't be necessary in the first place. The amount of imbalance people are experiencing is physically impossible without cable/connector damage or defects well in excess of what the specs allow.
Don't move the goalposts once again.

Now there are quite a few cases of the rare supply of 5090s melting; some at least are using cables from reputable vendors or the PSU-bundled cable, and:

1) If it has a manufacturing defect and goes bad on day 1, there are no visual clues of that.
2) If it wears out, there are no visual clues.

You've argued that De8auer re-used his cable multiple times, but other users likely didn't remotely push their cables through the spec's rated 30 cycles. So what do you mean, monitoring isn't necessary? It is bloody present on the 3090 Ti, and AIBs like ASUS foresaw that such issues were coming, so they added that circuitry and a software warning to try to keep users safer. Why on earth is Nvidia not obligated to do the same for consumers? Is adding $3 to the cost too much? Or did they know that if the circuitry were included, they would face a massive backlash for selling a $2k GPU that claimed to fix the melting issue yet keeps warning the user that the cable is broken well before the rated 30 cycles, so they decided to just give a big finger to the consumers?
 
The issue isn't the lack of monitoring, as it shouldn't be necessary in the first place. The amount of imbalance people are experiencing is physically impossible without cable/connector damage or defects well in excess of what the specs allow.
That's like arguing circuit breakers in houses and offices aren't necessary because if everything electrical is working properly there's no point to the circuit breaker.
 
The shell itself has a current limit: 9A is the norm for a 6-pin, while 6A is the normal limit for an 8-pin. This is the common de-rating of the connector with these pins, considering the temperature rise at continuous current, for a Molex connector.
I'll ask again, source? Molex's spec pages indicate that the stated current limits are based on a max temperature rise, with a 12-pin connector being able to handle 9 A per pin while staying at or under a 30 C rise.

Figure titled "Tin Plated Terminals Temperature Rise vs. Current per EIA-364-70
Tested with UL1061 Tinned Wire – Dual Row" here: https://www.molex.com/content/dam/m...onpdf/206/206460/2064600000-PS-000.pdf?inline

Obviously the exact temp rise will depend on the particular environment, but I'm not seeing anything that would suggest a one-size-fits-all 20% current derating.
What exactly am I meant to be looking at in this link?

UL does not get involved with a part unless it's connected directly to AC power. It's up to the electronics engineers to select the correct parts to pass certification. Also, they wouldn't care if the power connector self-destructed; they would care if the safety circuit didn't shut down and disable the unit. That is what they would be concerned with.
What? UL does way more than just AC mains connectors. The Molex connectors we've been discussing are UL-certified for fire resistance/retardance at least. Heck, you can get UL-certified HDMI and Ethernet cables. Maybe I'm misunderstanding you.

But yeah, obviously just because the individual components are UL-rated to be flame retardant or whatnot doesn't guarantee that a design that uses those parts will be good.
 
If adding current monitoring adds $3 to costs for the monitoring chip itself, the shunts, LC filters, board space, etc., it is cheaper to do $200 RMAs on 1% of GPUs than adding the circuitry to all GPUs.
And GM decided it was cheaper to pay off families than to fix a certain problem. While this is hardly life and death, Nvidia chose minor cost savings over doing what's right. I'm not sure why you'd be defending this behavior unless you have a vested interest.
 
That's like arguing circuit breakers in houses and offices aren't necessary because if everything electrical is working properly there's no point to the circuit breaker.
Circuit breakers protect against short-circuits and sustained overloads, not damaged wiring, connectors and incorrectly torqued terminals (e.g., melting 14-50R outlets when people plug in EVs) reaching dangerous temperatures while under load.
 
Obviously the exact temp rise will depend on the particular environment, but I'm not seeing anything that would suggest a one-size-fits-all 20% current derating.
It's because this is taught in school when you build things. Obviously idiots on a different continent think they can rely on a datasheet and not test what they build. It's pretty obvious.

I don't know anyone who builds anything that doesn't derate the connectors to 80% or more.

What compounds the issue with the PCIe power connector is the ambient heat from the video card, which derates the shell even further.
 