News RTX 5090 cable overheats to 150 degrees Celsius — Uneven current distribution likely the culprit

Of course, doubling the voltage to 24V would reduce the heating load by a factor of 4, and quadrupling it to 48V by a factor of 16, so the whole power could easily be supplied by just a single (!) wire pair. But here is one issue.
You already only need one wire pair at 12V... if you make it #6 solid or #4 fine-stranded for flexibility.

Imagine using rigid wiring for GPU support :)
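The scaling mentioned above follows from the fact that conductor heating goes as I²R: for a fixed delivered power, doubling the voltage halves the current and quarters the loss. A quick sketch, using an illustrative (not spec) cable resistance:

```python
# I^2 * R heating in a cable for a fixed delivered power at different voltages.
# The 0.01-ohm round-trip resistance is an illustrative figure, not a spec value.
POWER_W = 600.0      # power the GPU draws
R_CABLE = 0.01       # assumed total conductor resistance (ohms)

for voltage in (12.0, 24.0, 48.0):
    current = POWER_W / voltage          # total current in amps
    loss = current ** 2 * R_CABLE        # resistive heating in watts
    print(f"{voltage:>4.0f} V -> {current:5.1f} A, I^2*R loss = {loss:6.2f} W")
```

With these numbers the loss drops from 25W at 12V to 6.25W at 24V and about 1.56W at 48V, the factor-of-4 and factor-of-16 reductions described above.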
 
I'm not blaming the victim, I'm blaming the cable.

Per Molex's own specs, the connectors are officially rated for only 20 insert-remove cycles. Yes, they are consumables, not really intended for much more than one-time use if you need them to meet full spec. Then you have to add cable bend radius, suspended weight, and other external stresses that can cause premature connector failure.

Same story for the specs of most other internal PC connectors.

For most of the GPU load to land on only two wires in der8auer's experiment, something has to have gone wrong at one or possibly both ends of his cable.
I can't agree that it's the cable's fault. The stupid connector and Nvidia's exact implementation are the problem: for that much power draw, the standard should at least require a design that balances the current among the wires, rather than letting the card pull it all through one wire with no warning besides smoke, flames, or second-degree burns on your fingers.
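The two-hot-wires failure mode follows directly from Ohm's law: parallel paths share current in inverse proportion to their resistance, so if most contacts have degraded, the remaining low-resistance pins absorb nearly everything. A minimal sketch with made-up resistance values:

```python
# Current sharing among parallel pins: each wire+contact path acts as a
# resistor, and current divides in inverse proportion to resistance.
# Resistances below are illustrative, not measured: two good pins, four worn.
wire_resistances = [0.005, 0.005, 0.05, 0.05, 0.05, 0.05]  # ohms
total_current = 50.0  # amps (~600 W at 12 V)

conductances = [1 / r for r in wire_resistances]
g_total = sum(conductances)
currents = [total_current * g / g_total for g in conductances]

for i, amps in enumerate(currents, start=1):
    print(f"pin {i}: {amps:5.2f} A")
```

With these numbers, each of the two good pins carries about 20.8A, more than twice the roughly 9.5A-per-contact rating commonly cited for this connector family, while the four worn pins carry about 2A each. Nothing on the card notices, because total current is still exactly what was requested.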

I don't remember him stating where that cable came from or how old it was. Since he does lots of swaps, the cable may very well just have been overdue for replacement.
Same as the above comment: if it is that susceptible to wear and tear, you need wear marks, just like on your car tyres, or the way an iPhone Lightning port reports that it cannot detect the cable and stops pulling current (charging). As Buildzoid explained, Nvidia did exactly that on the 3090 Ti, and then ditched the current balancing and detection on the GPU, hence this whole mess. The redditor using the Moddiy cable doesn't swap cables that often, I believe, and it still melted (likely with the SFF case accelerating the heat buildup). If it is that sensitive to old cables, how about putting a big warning on the box and in the promo: "Brand-new OEM cable required; reuse an old cable at your own risk and void your warranty"?

The cable and PSU companies are victims too at this point: the standard didn't specify or include all that overhead margin and those fail-safe mechanisms, and yet people are blaming them.

Remember, it's not only der8auer: all of the reviewers have measured the card drawing peaks of ~650W, which is way over the cables' spec, and others have also observed much hotter cables, though not as severe as in der8auer's case.
 
Same as the above comment: if it is that susceptible to wear and tear, you need wear marks, just like on your car tyres
No need for "wear indicators" since all internal PC connectors are only intended for "install once for the lifetime of the PC" with a few cycles to spare for warranty purposes.

The redditor using the Moddiy cable doesn't swap cables that often, I believe, and it still melted
Well, an older and more worn-out cable sounds like it would be more likely to run into issues, not less. If you meant to say 'card', pulling the GPU for cleaning or for access to other components still counts as wear, effectively the same as swapping GPUs.

Remember, it's not only der8auer: all of the reviewers have measured the card drawing peaks of ~650W, which is way over the cables' spec, and others have also observed much hotter cables, though not as severe as in der8auer's case.
The cable spec is 600W, and 650W is BARELY over spec.

I don't remember der8auer specifying how old his cable was, and since he moves stuff around a lot, I wouldn't be at all surprised if his cable was well beyond its safe cycle life. That is why I said he needs to redo the test with a factory-fresh cable; that would show how much of the problem is due to cable wear.
 
No need for "wear indicators" since all internal PC connectors are only intended for "install once for the lifetime of the PC" with a few cycles to spare for warranty purposes.

Well, an older and more worn-out cable sounds like it would be more likely to run into issues, not less. If you meant to say 'card', pulling the GPU for cleaning or for access to other components still counts as wear, effectively the same as swapping GPUs.

The cable spec is 600W, and 650W is BARELY over spec.

I don't remember der8auer specifying how old his cable was, and since he moves stuff around a lot, I wouldn't be at all surprised if his cable was well beyond its safe cycle life. That is why I said he needs to redo the test with a factory-fresh cable; that would show how much of the problem is due to cable wear.
So please explain why on earth we have been constantly reusing PSUs for close to a decade without any such drama before this whole stupid 12VHPWR x Nvidia implementation??

I bought a first-gen Seasonic-OEM Corsair AX750 for my Sandy Bridge 2600K and ran it through a GTX 580, an R9 290X, and then a GTX 1060 over 10 years, with interim testing of suspicious cards for friends, and it worked perfectly until I changed builds to the i7-12700KF in 2021.

If all connectors were only designed to the same standard as the new 12VHPWR, we would have seen these failures throughout the years, but no, we only see them since 12VHPWR. The standard's failure to require a wear indicator and a fail-safe detection mechanism on the GPU side, one that simply stops drawing power when the connector is worn out, is the defect, not the cable or a user reusing a pristine-looking cable and connector.
 
If all connectors were only designed to the same standard as the new 12VHPWR, we would have seen these failures throughout the years, but no, we only see them since 12VHPWR.
All connectors being officially rated for 10-20 cycles doesn't mean they will all fail within 10-20 cycles. It only means the manufacturers do not guarantee the connectors will still meet all specs beyond that. If you keep using the same cables, you chose your own adventure.

Also, connector failures aren't new with 12VHPWR. Melting power cables have been a thing with 6/8-pin connectors too, especially back when double-tapped 6/8-pin cables were common, most frequently due to bad crimps, since standard Mini-Fit Jr pins aren't large enough to accommodate doubled-up wiring.
 
All connectors being officially rated for 10-20 cycles doesn't mean they will all fail within 10-20 cycles. It only means the manufacturers do not guarantee the connectors will still meet all specs beyond that. If you keep using the same cables, you chose your own adventure.
So why ISN'T the blame on Nvidia for draining power to oblivion with no warning popping up, rather than on the user or the cable manufacturer? Or Nvidia/PCI-SIG could just state: after X plug-unplug cycles, buy a brand-new cable? The GPU-side connector wears out too, so should a GPU be dumped after 10 unpluggings, given there is no wear indicator or current-detection design like the 3090 Ti had, as Buildzoid said? It just doesn't make sense to me to blame the cable or the cable maker.
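The kind of per-wire detection being asked for (which, per Buildzoid, the 3090 Ti reportedly had via per-wire sensing) amounts to a simple check: read each wire's current from a shunt and back off when any wire exceeds its rating or drifts far from its fair share. A hypothetical sketch; the function name, thresholds, and readings are all made up:

```python
# Hypothetical per-wire current check, sketching the balancing/detection
# logic the posts above are asking for. Thresholds and readings are made up.
PIN_LIMIT_A = 9.5        # assumed per-contact rating
IMBALANCE_RATIO = 1.5    # flag any wire carrying 1.5x its fair share

def check_wires(currents_a):
    """Return a warning string if the harness looks unsafe, else None."""
    fair_share = sum(currents_a) / len(currents_a)
    for i, amps in enumerate(currents_a, start=1):
        if amps > PIN_LIMIT_A:
            return f"wire {i} over contact rating: {amps:.1f} A"
        if amps > fair_share * IMBALANCE_RATIO:
            return f"wire {i} badly imbalanced: {amps:.1f} A vs {fair_share:.1f} A average"
    return None

print(check_wires([8.3] * 6))                         # healthy harness -> None
print(check_wires([20.8, 20.8, 2.1, 2.1, 2.1, 2.1]))  # flags wire 1
```

A real implementation would throttle the power limit or refuse to boost rather than just print a string, but the point is that the measurement and the decision are cheap; the standard simply doesn't require them.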

If something can just burst into flames after X cycles, that is the standard's problem as far as I'm concerned, not the individual cable vendor's. It's like a space heater that catches fire at 5 years old because no protection mechanism is in place; I'm pretty sure that heater would be illegal to sell in most countries.