News RTX 5090 cable overheats to 150 degrees Celsius — Uneven current distribution likely the culprit

The question is whether Tom's Hardware, Techspot, and Techpowerup can recreate it with their own cards. It will also be interesting to see whether this is yet another FE issue or one that affects the entire stack due to a faulty design.

I also still stand by my power supply criticism: don't go cheap, and don't buy one where you don't know the OEM. Corsair is generally alright since they use Seasonic for many of their units; it's the only brand other than Seasonic I will use when building non-gaming PCs. Even then, it's a relatively small price jump to a Seasonic or Super Flower branded unit (Enermax seems to be on the decline), and a PSU is something you don't want to have to swap out, especially if you route your cables nicely.

As for the power connector itself, I still think they should have gone with a locking BNC-style connector and a single, heavier-gauge power delivery wire rather than multiple lighter-gauge wires with a clip-in plug.
 
I watched "The Farmer" clip. Ein Bauer, a Farmer, Der Bauer, The Farmer. Bau, build. He did some clever play with words. I speak german.

What can go wrong with pushing enough watts and amps to run a household heater through two rather thin cables, thinner than 15 gauge?

If the industry adopted the same standard that applies to electric household heaters, which draw their current from a wall socket, the GPU market would be forced to adopt thicker-gauge cables.

Even those cables can melt, and have, when the heater draws more than the 15-gauge wire is able to carry.

High-end GPU cards draw enough electricity that you could consider them heaters.
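Quick back-of-the-envelope on that comparison, assuming a 120 V outlet for the heater and an evenly loaded 12VHPWR cable:

```python
# A 1500 W space heater on 120 V mains vs. a 600 W GPU on a 12 V rail:
# similar ballpark power, wildly different current.
heater_a = 1500.0 / 120.0   # ~12.5 A through one heavy two-conductor cord
gpu_a = 600.0 / 12.0        # ~50 A total into the card
per_wire_a = gpu_a / 6      # ~8.3 A per wire if 12VHPWR shares evenly

print(f"heater: {heater_a:.1f} A, GPU: {gpu_a:.1f} A total, "
      f"{per_wire_a:.1f} A per 12V wire when balanced")
```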

So it turns out the gear was at fault after all.

I also thought about the statement from Molex about not mixing contact materials. As in, gold should connect to gold, not gold on one side and copper or gold-plated copper on the other.

Right now there is no standard for this either; it is up to each manufacturer to choose the metal inside these connectors. As long as the contact can carry the current, that is deemed good enough.
 
Der8auer called out the lack of headroom in 12VHPWR compared to what the 8-pin connectors provided, among some other concerns with the new standard -- again, cable quality aside.
PSU and cable manufacturers have demonstrated their 12V-HPWR cables passing 1000-1500W perfectly fine. There is plenty of margin when everything is built properly and in mint condition.
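For a rough sense of the headroom gap, here is a quick sketch using the commonly cited per-pin terminal ratings (~8 A for 8-pin Mini-Fit Jr HCS terminals, 9.5 A for 12VHPWR); treat the exact figures as illustrative:

```python
# Headroom = what the pins can physically carry vs. the spec power limit.
V = 12.0  # volts on the power pins

# (name, 12V pins, amps per pin, spec limit in watts)
connectors = [
    ("PCIe 8-pin", 3, 8.0, 150.0),
    ("12VHPWR",    6, 9.5, 600.0),
]

for name, pins, amps, spec_w in connectors:
    physical_w = pins * amps * V
    print(f"{name}: {physical_w:.0f} W physical vs {spec_w:.0f} W spec "
          f"-> {physical_w / spec_w:.2f}x headroom")
```

Roughly 1.9x margin for the old connector versus about 1.14x for the new one, which is der8auer's point in a nutshell.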
 
The Molex Micro-Fit connector has been used industrially for decades, under loads greater than those of mere GPUs, and in far harsher environments.

The question here is why 10 of the 12 pins are not carrying the current they should be.
The individual wires are rated for 9.5 amps each. If one wire is carrying more than 9.5 amps, the question is why that much current is concentrating on that wire.
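Parallel wires behave as a current divider, so the current piles onto whichever paths have the lowest resistance. A small sketch with made-up path resistances, meant to mimic a couple of degraded contacts rather than measurements of any real cable:

```python
total_a = 50.0  # ~600 W / 12 V

# per-path resistance in ohms (wire plus both contacts);
# two paths in good shape, four with worn/oxidized contacts
paths_ohm = [0.010, 0.010, 0.080, 0.080, 0.080, 0.080]

g_total = sum(1 / r for r in paths_ohm)
for i, r in enumerate(paths_ohm, 1):
    amps = total_a * (1 / r) / g_total
    flag = "  <-- over the 9.5 A rating" if amps > 9.5 else ""
    print(f"wire {i}: {amps:4.1f} A{flag}")
```

With those numbers, the two low-resistance wires each carry 20 A while the rest carry 2.5 A, which looks a lot like what der8auer measured.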
 
Component cost goes up quite a bit beyond about 30V. For cost-sensitive consumer stuff, 24V would be a more sensible choice. 60V is a no-go, since that would either require idiot-proof wiring built to standards similar to mains power cords, or technically require an electrician's license for wiring cabinets. You probably don't want something like a 6-50 plug between your PSU and GPU.

Staying within the 12V realm, the simplest fix to horrible current balance issues is ditching the multi-pin, multi-wire setup in favor of something like XT-60. Wouldn't hurt to still do that on "ATX-24V."

I agree, I really hate how they made the new connectors. The older standard, while not ideal, at least attempted to spread the power over different plugs. 24V would be pretty decent, or, like you said, just switching to a similar industrial connector like the XT-60.
 
I like how everyone is ignoring that he shunt-modded the card to remove all the normal power limits for the testing.
The GPU would never be able to reach that kind of power draw under normal conditions, because the VBIOS limits the power draw; that is what the sense pins are for. If the sense pins get no information, well... this can happen.
Not exactly the "gotcha" people like to think it is.
I'd like to see some thorough testing using multiple cables under normal conditions.
 
As for cost-sensitive consumer electronics, I don't think $1,500+ GPU buyers fall into extreme price-sensitivity.
The high-end is not the only GPU market that exists.

The point was that if 60V becomes the GPU standard, then every PSU intended to be paired with any GPU will need a 60V output, and every GPU intended to take external power will also need a 60V input, which raises prices for everyone, including those buying ~$200 GPUs, e.g. me.

You could fragment the PC market between PCs equipped to handle 60V GPUs and those that aren't, though that creates some interoperability issues and would raise costs all around, since GPU manufacturers would have to split sales of sub-500W GPUs between 12V and 60V variants.

Everything from entry-level to high-end can run 24V power distribution with negligible impact on PSU and VRM BoMs. Fundamentally the same old buck regulators, only slightly less efficient while operating from 24V due to increased switching losses.

When you get to 48V, you have to use step-down transformers to leverage the current conversion ratio instead of buck regulators to maintain efficiency, otherwise switching losses go out of control. That substantially increases costs.

Basically, 24V is about as high as PC voltages can go without requiring substantially more expensive VRMs.

As I have written before, "12VO" is a missed opportunity to go 24VO instead. 12VO is still niche enough that it may not be too late to correct that.
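To put rough numbers on the buck-regulator claim, here is a toy hard-switched loss model; every component value in it (1 V core rail, 25 A per phase, 500 kHz, 2 mOhm FETs, 10 ns edges) is an illustrative assumption, not taken from any real VRM datasheet:

```python
def buck_losses(v_in, v_out=1.0, i_out=25.0, f_sw=500e3,
                r_on=2e-3, t_sw=10e-9):
    d = v_out / v_in                                     # duty cycle shrinks as v_in rises
    conduction = i_out**2 * (d * r_on + (1 - d) * r_on)  # ~constant per phase
    switching = 0.5 * v_in * i_out * (2 * t_sw) * f_sw   # grows linearly with v_in
    return conduction + switching

for v_in in (12, 24, 48):
    print(f"{v_in:>2} V in: ~{buck_losses(v_in):.2f} W lost per phase")
```

In this toy model, going 12 V to 24 V adds about 1.5 W of switching loss per phase, while 48 V adds three times as much again, which is where the transformer-based topologies start to win.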
 
Basically, 24V is about as high as PC voltages can go without requiring substantially more expensive VRMs.

As I have written before, "12VO" is a missed opportunity to go 24VO instead. 12VO is still niche enough that it may not be too late to correct that.

ATX already using 12V as the primary power out makes it kinda easy to do 24V. I agree that 48V+ is just kinda nuts for home computers.
 
Never use multiple cables to feed power to a device, unless you have current sensing devices in line with each individual feeder.
If they had that, none of these issues would exist regardless of connector type.

They simply are cheap and want to save money...
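A sketch of what that per-feeder sensing could look like in firmware; the readings and callbacks here are hypothetical, standing in for per-wire shunt resistors plus an ADC:

```python
WIRE_LIMIT_A = 9.5  # per-wire rating for 12VHPWR

def check_feeders(amps, throttle, shutdown):
    """amps: measured current on each 12V wire."""
    if max(amps) > 2 * WIRE_LIMIT_A:      # grossly unbalanced: fail safe
        shutdown()
    elif any(a > WIRE_LIMIT_A for a in amps):
        throttle()                        # drop the power limit, warn the user

# demo with der8auer-like readings: two wires hogging the load
check_feeders([22.0, 20.5, 2.5, 2.0, 1.8, 1.5],
              throttle=lambda: print("throttling + warning"),
              shutdown=lambda: print("emergency power-off"))
```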
 
I agree, I really hate how they made the new connectors. The older standard, while not ideal, at least attempted to spread the power over different plugs. 24V would be pretty decent, or, like you said, just switching to a similar industrial connector like the XT-60.
I know the XT60 is rated for 60 amps, but have you tried using them for sustained loads? My XT60s get hot if I'm fast-charging my RC heli batteries at over 30 amps. The 50 amps of the 5090 would get them very, very hot. XT90s or something similar would be the better choice.
 
Watching that cable heat up in thermal imaging is insane. Is it the copper thickness (gauge) that is causing the overheat?
The wire in a 12VHPWR cable is 16 gauge. That thickness of wire (I assume copper, not aluminum) is rated to carry no more than 10 amps. Pulling 22 amps is going to really heat the wire, and it will eventually melt or short.
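Resistive heating grows with the square of the current, which is why 22 A runs away so quickly. A quick sketch using the standard ~13.2 mOhm/m resistance of 16 AWG copper:

```python
r_per_m = 0.0132  # ohms per meter, 16 AWG copper at room temperature

for amps in (8.3, 10.0, 22.0):
    print(f"{amps:>4.1f} A -> {amps**2 * r_per_m:.2f} W dissipated per meter")
```

That is about 0.9 W/m when the load is balanced versus ~6.4 W/m at 22 A: seven times the heat in the same insulation.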
 
PSU and cable manufacturers have demonstrated their 12V-HPWR cables passing 1000-1500W perfectly fine. There is plenty of margin when everything is built properly and in mint condition.
Man, you really need to stop blaming the victim IMO. It's a bloody connector built for consumers, and if you're suggesting that a 2-3 year old OEM cable is unsafe and might burn your $3,000+ hardware, that's still a design issue, unless you specify that it's a disposable, install-once cable to be replaced whenever you change GPUs.

In any reasonable design, the card itself should have a safety mechanism that detects the issue and a means to balance the current draw across the pins, as Buildzoid explains in his video on the 3090 Ti. When all balancing fails, it should have a way to stop drawing power and warn you to replace the cable, not burn your hardware into oblivion.
 
People need to get over this "user error" garbage. I have been saying since BEFORE the first case of a 4090 melting that this cable was not capable of 600W, not even close. The very first time I laid eyes on that cable and saw those tiny pins, I KNEW this was NEVER going to work. The standard 8-pin never had this problem. It's been a reliable standard for years. Suddenly, the new cable melts... "Oh, you're plugging it in wrong"... like all of us who spent 15+ years plugging in 8-pin cables suddenly forgot how to plug in a cable. This is straightforward logic. I have been saying it for 3 years now and I will say it again: Nvidia, it's time to own your mistake, stop blaming the user, and fix the problem. But hey, blaming the user is cheaper than fixing the problem. Hats off to AMD for sticking with the 8-pin.
The ubiquitous 8-pin connector only has three +12V wires. You had to use three of those connectors to power a 4090 without an ATX 3.0 PSU, which means the power was being carried by nine 16 ga wires. The 12VHPWR connection has six +12V wires in it. So using three separate 8-pin cables distributes the same load over 50% more copper.
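The per-wire math behind that, assuming an even split of a 600 W load:

```python
total_a = 600.0 / 12.0  # 50 A total at 12 V

for name, wires in (("3x 8-pin", 9), ("12VHPWR", 6)):
    print(f"{name}: {total_a / wires:.1f} A per 16 AWG wire when balanced")
```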
 
The cable specification itself isn't as much to blame as Nvidia's piss-poor implementation of managing the connection. Asus, on their halo versions of the 4090/5090, put in per-cable monitoring, which can warn of failure. However, even that is a solution that sits between the connector and the board power system, because of how Nvidia has designed these cards. I think Buildzoid's video (it has been linked multiple times) is certainly worth a watch, because if Nvidia had just used the circuitry design from the days of 8-pin PCIe connectors, 12VHPWR/2x6 wouldn't be anywhere near as susceptible to these issues.

The 3090 Ti had basically the same connector as 12VHPWR/2x6 (just no sense pins), and while they didn't sell a ton of those, there also weren't reports of cables/connectors melting.

24VO would be the best forward-thinking move for PC power delivery, though.
 
The wire in a 12VHPWR cable is 16 gauge. That thickness of wire (I assume copper, not aluminum) is rated to carry no more than 10 amps. Pulling 22 amps is going to really heat the wire, and it will eventually melt or short.
The normal AWG tables are based on wiring passing through insulated walls in bundles of three. Wiring in PCs isn't surrounded by R5+ insulation.

Inside a ventilated cabinet such as most PC cases, you can easily push 30+% more current without exceeding the wire's temperature rating. That is how lower-quality PSU manufacturers mostly got away with using #18 wiring on their 2x 8-pin cables (~13A per wire) until the voltage drop started causing issues.
 
The cable specification itself isn't as much to blame as Nvidia's piss-poor implementation of managing the connection. Asus, on their halo versions of the 4090/5090, put in per-cable monitoring, which can warn of failure. However, even that is a solution that sits between the connector and the board power system, because of how Nvidia has designed these cards. I think Buildzoid's video (it has been linked multiple times) is certainly worth a watch, because if Nvidia had just used the circuitry design from the days of 8-pin PCIe connectors, 12VHPWR/2x6 wouldn't be anywhere near as susceptible to these issues.

The 3090 Ti had basically the same connector as 12VHPWR/2x6 (just no sense pins), and while they didn't sell a ton of those, there also weren't reports of cables/connectors melting.

24VO would be the best forward-thinking move for PC power delivery, though.
Agree on everything, but I would be thinking maybe we should stop pushing multi-grand cards that need their own power plant and an AC... maybe we should go back to optimizing power usage and textures, or at least put two of those stupid connectors on the card. In the good old days, besides the extra margin on the 8-pin connector itself, one of the major safety design choices IMO was that there was almost always 0.5-1 connector of headroom on top of the power draw. Now, in the second generation of the ATX 3.x standard and literally the first generation of 12V-2x6, even the non-OC FE model has been measured constantly peaking at 650W on a 600W-rated connector. Just put two of them on, FFS, if you really need a 1600+W PSU for that card.
 
The spec was updated and renamed 12V-2x6, which has sense pins that do not connect until the plug is fully seated, thus eliminating the problem. It is now fool-proof (poka-yoke). So the 'user error' issue was real, and it has been eliminated with the updated design.

That's not a user issue: that's a *design* issue. The poor design made it very easy for people to not realize the plug wasn't inserted properly.

The change improves the design by warning people when it's not properly inserted. It's still a workaround for the fundamental issue, though.
 
That's not a user issue: that's a *design* issue. The poor design made it very easy for people to not realize the plug wasn't inserted properly.

The change improves the design by warning people when it's not properly inserted. It's still a workaround for the fundamental issue, though.
And der8auer's latest video, which has been posted repeatedly since its release less than 24 hours ago, is worth watching: even with his setup showing as good an insertion into his card as you can get on an open bench, one wire still reached 85C, peaking at 150C. That has nothing to do with poor insertion.
 
Man, you really need to stop blaming the victim IMO. It's a bloody connector built for consumers, and if you're suggesting that a 2-3 year old OEM cable is unsafe and might burn your $3,000+ hardware, that's still a design issue, unless you specify that it's a disposable, install-once cable to be replaced whenever you change GPUs.
I'm not blaming the victim, I'm blaming the cable.

Per Molex's own specs, the connectors are officially only rated for 20 insert-remove cycles. Yes, they are consumables, not really intended for much more than one-time use if you need them to meet full spec. Then you have to add all of the cable bend radius, suspended weight and other external stresses that could cause premature connector failure.

Same story for the specs of most other internal PC connectors.

For most of the GPU load to land on only two wires in der8auer's experiment, something has to have gone wrong at one end of his cable, or possibly both.
 
Of course, doubling the voltage to 24V would reduce the heating load by a factor of 4, and quadrupling it to 48V by a factor of 16, so that the whole power could easily be supplied by just a single (!) wire. But here is one issue.
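The numbers behind that, for a fixed 600 W delivered through an illustrative 12 mOhm of total cable resistance:

```python
P = 600.0   # watts delivered
R = 0.012   # ohms of total cable resistance, illustrative value

for v in (12, 24, 48):
    i = P / v  # for fixed power, current falls as voltage rises
    print(f"{v:>2} V: {i:5.1f} A -> {i**2 * R:5.2f} W lost heating the cable")
```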

I would worry about going above 24V for the GPU, because small kids can do unthinkable things with an opened PC case. When he was less than a year old, my kid was very interested in what I was doing inside the PC. At 2 he played Doom to level 77, and from a similarly crazy young age he was often inside his own PCs. Heaven knows how these overactive kids might mishandle 48V cables...
 
And der8auer's latest video, which has been posted repeatedly since its release less than 24 hours ago, is worth watching: even with his setup showing as good an insertion into his card as you can get on an open bench, one wire still reached 85C, peaking at 150C. That has nothing to do with poor insertion.
I don't remember him stating where that cable came from and how old it was. Since he does lots of swaps, the cable may very well just have been overdue for replacement.
 