Melting RTX cables solution proposed — embed over-temperature and current protection on each wire

Or, if the connector is not intended to be disconnected on the GPU side, how about soldering it onto the card with a big fat cable dangling 45 cm outside the card? "Requires soldering, or screwing onto the PSU by an electrician" would be fine.
 
Honestly, just brute force it with more 8-pin connectors on the card. We have PSUs on the market that can deliver that many watts without any issues.
 
I think the proposed solution overcomplicates the matter while not addressing the root cause. My takeaway from Roman's (aka der8auer) assessment is that the key problem is that a single power line is not an ideal way to deliver such a high power load. Specifically, there seems to be no way to load balance how much current goes through a single strand of cable, and the simple solution is to use two connectors instead of one. That may not directly solve the load balancing on a single cable, but it will surely reduce the strain per cable while allowing a good margin for error.
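To put rough numbers on that, here's a minimal sketch of the current-divider effect across the six 12 V wires of the connector; the resistance values are made up purely for illustration, not measured:

```python
# Minimal sketch: current sharing across the six 12 V wires of the connector,
# modelled as parallel resistances (current-divider rule).
# The resistance values below are made up purely for illustration.

TOTAL_CURRENT_A = 600 / 12  # ~50 A for a 600 W card on a 12 V rail

def share_current(resistances_ohm, total_a=TOTAL_CURRENT_A):
    """Return the current carried by each parallel wire."""
    conductances = [1 / r for r in resistances_ohm]
    g_total = sum(conductances)
    return [total_a * g / g_total for g in conductances]

# Ideal case: six identical wires, each carries ~8.3 A.
ideal = share_current([0.010] * 6)

# Three worn or poorly seated contacts at 5x the resistance push the
# excess current onto the remaining good wires.
degraded = share_current([0.010, 0.010, 0.010, 0.050, 0.050, 0.050])

for label, currents in (("ideal", ideal), ("degraded", degraded)):
    print(label, [f"{c:.1f} A" for c in currents])
# ideal    -> ~8.3 A on every wire
# degraded -> ~13.9 A on the three good wires, ~2.8 A on the three bad ones
```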
 
Sure, but since Nvidia doesn't show the faintest hint of admitting fault and fixing the root cause on their side, all other parties can do is apply these sorts of band-aid fixes to save consumers.
 
I don't disagree. However, Nvidia can get away with anything because their supporters will back them regardless of poor performance and hazards. The decision to make the power delivery look "clean and elegant" comes at the cost of a potential fire hazard. In my 2.5 decades of fiddling with computer hardware, I've never run into a power connector melting, not with the original Molex or 8-pin power connectors, up until now. I understand that power requirements have increased significantly over the years, but is it really that difficult to make it less risky to use?
 
Completely agree with your view. Back then you really had to be reckless to get an 8-pin charred or melted. I've seen multiple cases of people seating the 6+2 pin on the wrong alignment, leaving the 2-pin literally half plugged, and it just became unstable rather than melting, and the misalignment was very visually apparent. Now the 12VHPWR standard and the 12V-2x6 are literally a dumpster fire, with blind fans defending them for no reason.

IMO the point of a smaller connector is just to make cable routing and tidying much easier with less airflow hindrance, and the point of updating the power connector should mainly be to SAFELY deliver enough power to the more power-hungry cards, but the new standard achieved neither.
 
Cutting power to the GPU will save the GPU, but it will also crash the GPU. Depending on how often the power needs to be cut, it could make for a very interesting gaming experience.

Fix the problem, don't band-aid it!
 
Why is this overcomplicated solution even being proposed? Thicker-gauge wire and fixing the solder joints at the terminals would fix 99% of the issues.
 
Sadly, because the plug spec can't fit any thicker wires, plus the Molex plug can't be soldered onto… so here you go.
 
Dig into the spec sheets for the pins in these connectors. They'll be either Amphenol Minitek Pwr 3.0 HCC or Molex Micro-Fit 3.0. In a 12-pin configuration they're only rated for 7.5 A or 8 A respectively, while 12VHPWR is trying to pull up to 9.2 A. It's flawed by design.
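Putting rough numbers on that (using the pin ratings quoted above, not independently verified, and assuming a perfectly even split, which real connectors don't guarantee):

```python
# Back-of-the-envelope per-pin current for a 600 W card, assuming the load
# is spread perfectly evenly across the six 12 V pins of the connector.
POWER_W = 600
RAIL_V = 12
LIVE_PINS = 6

per_pin_a = POWER_W / RAIL_V / LIVE_PINS
print(f"Even split: {per_pin_a:.2f} A per pin")  # 8.33 A

# Pin ratings as quoted in the post above (not independently verified here).
ratings_a = {"Amphenol Minitek Pwr 3.0 HCC": 7.5, "Molex Micro-Fit 3.0": 8.0}
for name, rating in ratings_a.items():
    headroom = (rating - per_pin_a) / rating * 100
    print(f"{name}: rated {rating} A -> {headroom:+.0f}% headroom at an even split")
# Both come out negative: even a perfectly balanced 600 W load already
# exceeds the quoted ratings before any contact degradation is considered.
```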
 
Or, instead of putting layers of makeup on this pimple, just squeeze it.

Say sorry, we went too far with power draw; it's not reasonable for gaming. Game devs should simply stop. Let's focus on reducing power next generation until we get back to sane levels. And let's face it: graphics in gaming are excessive compared to what they need to be. Adding RT reflections in a mud puddle in an ultra-fast-paced FPS shooter is a gimmick.
 
If my 4090's connector melts I will: 1) completely remove the connector, 2) cut the Nvidia-provided 4x 8-pin adapter cable in half, and 3) solder the wires from the adapter directly to the 4090's board. Then I'll use only four 8-pin cables and never again (for this card) have to worry about 12VHPWR/12V-2x6 bullshit.
 
Actually, IMO this aspect is still the ATX 3.x standard's problem: the connector means the cable can only do so much within the standard spec, so the standard itself leaves literally no headroom and no balancing capability.

As for why Nvidia removed the safety rail splitting, a conspiracy theory circulating is that Nvidia realised that if such a feature were implemented, more reported cases in the form of RMAs would appear, exposing the high failure rate of the new connector at 4090-level power draw, so they decided to screw us and blame user error instead of blaming themselves and getting an ATX 4.0 out right after the 3.0 standard.

Trying to push 600 W down a slim cable is a dumb idea anyway; the cable would need to be much beefier to have reliable safety margins. Better to just use two of them for large GPUs.

As for why Nvidia removed them, it's nothing to do with returns but is a simple cost-cutting measure. The Nvidia 30 series had to have a different power circuit for every PCIe power connector; three connectors meant three circuits plus the circuitry required to manage them. That's where all the safety features were, because the card needed to manage two to three PCIe power connectors anyway. With the 40 and 50 series there is just one large power circuit, with no management required. This change significantly simplifies the card's power design. It's also why AIB partners can only do so much: they can't actively manage the power because the Nvidia chipset only sees one big power line.
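For illustration, here's a rough sketch of the kind of per-rail supervision that a multi-circuit design like the 30 series allows; every name, threshold and reading below is hypothetical:

```python
# Rough sketch of per-rail supervision of the kind a 30-series-style design
# (one power circuit and shunt per input connector) makes possible.
# All names, limits and readings here are hypothetical, for illustration only.

RAIL_LIMIT_A = 8.0      # illustrative per-rail ceiling
IMBALANCE_RATIO = 1.5   # flag a rail carrying 50% more than the average

def supervise(rail_currents_a):
    """Return the actions a per-rail controller could take for one sample."""
    actions = []
    average = sum(rail_currents_a) / len(rail_currents_a)
    for idx, amps in enumerate(rail_currents_a):
        if amps > RAIL_LIMIT_A:
            actions.append(f"rail {idx}: {amps:.1f} A over limit -> throttle GPU power")
        elif amps > IMBALANCE_RATIO * average:
            actions.append(f"rail {idx}: {amps:.1f} A vs {average:.1f} A avg -> log warning")
    return actions

# Three-connector card: the controller can see which input is overloaded.
print(supervise([7.9, 8.6, 2.1]))
# With a single big power circuit there is only one reading for the whole
# card, e.g. supervise([50.0]); nothing here can tell how that 50 A is
# actually split across the six wires of the connector.
```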
 
A proper power management system on the card. You can read, go read the comments.
I've already agreed with that and commented about it above. You can read too, lol. What you said was just ambiguous -- were we supposed to read your mind? That said, that is the best solution, but it doesn't mean we should stop there.

I believe the ATX 3.0 and 3.1 standards have to be dropped as well. As other commenters have said and der8auer investigated, there's not enough safety margin in the current standard. We have four sense pins that only sense themselves -- like, c'mon, lol. Nvidia's design flaw exposed the weaknesses of the standard.
 
How about Nvidia recall their dangerously faulty cards and implement proper load balancing across each wire? Even their 3000-series load balancing implementation was flawed, as it only balanced the +12 V wires and not the ground wires. I bet that if you cut 5 of the 6 ground wires on a 3000-series card, the remaining wire would still dangerously overheat.

They need to do it properly; this flaw affects all multi-wire GPUs, even the ones with the old PCIe power connectors.
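A rough back-of-the-envelope on that ground-wire scenario (the card power and wire rating below are assumed, typical values, not taken from any spec):

```python
# Rough illustration of the ground-wire scenario described above: all the
# return current must come back through whichever ground wires remain, and
# balancing only the +12 V side does nothing to protect them.
# The wire rating below is an assumed, typical value for illustration only.
CARD_POWER_W = 450        # e.g. a 3000-series flagship under load (assumed)
RAIL_V = 12
WIRE_RATING_A = 10        # assumed continuous rating for a single strand

return_current_a = CARD_POWER_W / RAIL_V  # ~37.5 A of total return current

for remaining_grounds in (6, 3, 1):
    per_wire_a = return_current_a / remaining_grounds
    status = "OK" if per_wire_a <= WIRE_RATING_A else "overloaded"
    print(f"{remaining_grounds} ground wire(s): {per_wire_a:.1f} A each -> {status}")
# 6 wires: ~6.3 A each (OK); 3 wires: 12.5 A (overloaded); 1 wire: 37.5 A (badly overloaded)
```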
 
The best solution IMO is to avoid these cards and just go with AMD instead. I decided that my next build will be AMD CPU/GPU. Intel has dropped the ball both in design and performance and Nvidia are certified lunatics with their overpriced and faulty designs.
 
But it now seems that the 9070 will not really be priced as midrange... the Radeon division is still the good old "never miss an opportunity to..."
 
My only concern is whether or not this is a wake-up call for all high-power GPUs with multi-wire power connectors. I have yet to see a GPU that can monitor or load balance the current distribution over the ground wires.
 
It will be difficult; after all of this, what consequences has Nvidia faced? Their stock rises through the sky...
 
How about Nvidia recall their dangerously faulty cards and implement proper load balancing across each wire? Even their 3000-series load balancing implementation was flawed, as it only balanced the +12 V wires and not the ground wires. I bet that if you cut 5 of the 6 ground wires on a 3000-series card, the remaining wire would still dangerously overheat.

They need to do it properly; this flaw affects all multi-wire GPUs, even the ones with the old PCIe power connectors.
But that would require them to admit fault.

The only way that happens is if a progressive government... say the EU, forces them to fix it in order to continue selling products, or a class action lawsuit draws enough attention to affect the stock price.

Since every government entity is salivating over AI cards and does not want to be left behind, it is unlikely that any of them is going to force Nvidia to change.

That leaves consumers... how many of us will willingly go buy an AMD or Intel GPU and "suffer" with "worse" drivers and actually worse upscaling? (AMD's drivers are, in my experience, better than Nvidia's for my multi-monitor setups, but that's an argument for another thread.) I do miss DLSS...

I suspect that as long as people are willing to put up with fake MSRPs, fake frames, and real fires, Nvidia won't change.
 