News Melting RTX cables solution proposed — embed over temperature and current protection on each wire

Better solution (one I have proposed before): a SINGLE, thicker power-delivery wire with locking BNC-type connectors. Considering the short distance, it'd be 6 AWG at most for 600W (though I'd overengineer it to 4 AWG), easily within the same space requirements as the 12V-2x6.
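For the arithmetic behind that gauge estimate, here's a rough back-of-the-envelope check; the ampacity figures are approximate chassis-wiring values, not a spec citation:

```python
# Rough sanity check on a single-conductor 12V feed for 600W.
# Ampacity numbers are approximate chassis-wiring figures and vary with
# insulation rating and bundling; treat them as ballpark only.

power_w = 600.0
voltage_v = 12.0
current_a = power_w / voltage_v  # 50 A continuous

# Approximate short-run chassis-wiring ampacity (amps) per common tables.
approx_ampacity_a = {10: 55, 8: 73, 6: 101, 4: 135}

for awg in sorted(approx_ampacity_a, reverse=True):
    rating = approx_ampacity_a[awg]
    print(f"{awg} AWG: ~{rating} A rating, {rating / current_a:.1f}x margin over {current_a:.0f} A")
```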
 
Yeah... instead of solving the problem (balancing and splitting the power-delivery rails), let's bus them together at both ends and monitor for "anomalies" that are in fact inevitable with such a stupid design.

Six thin parallel wires connected to each other through two connectors will never act like one thick wire.
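To see why, here's a toy current-divider sketch; the resistance values are invented purely to show the effect of one degraded contact in a bussed bundle:

```python
# Toy model: six 12V wires bussed together at both ends share current in
# inverse proportion to their loop resistance (wire plus both contacts).
# Resistance values are invented to illustrate the effect, not measured.

total_current_a = 50.0                          # roughly 600 W at 12 V
healthy_mohm = 10.0                             # nominal wire + contact resistance
resistances_mohm = [healthy_mohm] * 5 + [40.0]  # one contact gone high-resistance

conductances = [1.0 / r for r in resistances_mohm]
total_g = sum(conductances)

for i, g in enumerate(conductances, start=1):
    amps = total_current_a * g / total_g
    print(f"wire {i}: {amps:.1f} A")

# The five good wires end up near 9.5 A each instead of the even-split 8.3 A,
# and a bussed design has no way to notice or intervene.
```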
 
Yeah... it's absolute stupidity to propose a solution that does not solve the actual problem (NVIDIA's idiotic power-delivery design). If the PSU and cable manufacturers are sane, they should not listen to Aris but push NVIDIA to fix the cards.
 
It doesn't make sense to do any of this in the cable. You'd just end up with a cable so bulky and expensive that the entire point is missed. If it belongs anywhere, it should go on the card itself, before all the lines apparently get combined into a single rail.
But that would eat a few cents out of Nvidia's offensively extreme profit margins on their stupidly expensive cards. That's not going to happen.
But Nvidia's current strategy of "not actually making any 5090s" will also do a great job of solving the problem.
Think about it. If there's a 1 in 10,000 chance that the connector melts, but they only manufacture 5,000 cards, then it's only about a 50-50 chance that even a single cable has a problem.
Sounds like they got unlucky this time, but really it takes 3 or more failures to establish a pattern.
 
AMD has the right approach. Just use more connectors that have worked for a few decades now.
For now, yes, NV could (*should*) just ditch the 16-pin connectors and go back to triple 8-pin connectors until a much more robust solution is standardized and implemented. Unfortunately, the 4090 and 5090 have a single 12V input shunt, which probably won't ever change for this generation (i.e., don't expect a "Rev 2.0" design for the 5090).

For comparison, the 7900 XTX has three input shunts, so a fault shows up as system instability rather than one or two melting wires, and the 8-pin connector is far safer anyway.
 
From what I've seen from Buildzoid and a few other folks who have done down-to-the-PCB teardowns of the cards, as well as der8auer, who literally cut several of the wires and the card continued running...

The flaw is fully on Nvidia's poor (money-saving, for them) design. The 30 series had several independent power stages; if any one failed, the card would shut down and no overload would happen.
On the 40 and 50 series, all the wires feed into one circuit, so if one wire is bad, more current flows over the others, increasing heat. The card has no ability to tell if one wire is carrying too much current, because it sees all the wires as one unified conductor.
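The arithmetic there is stark. A quick sketch, assuming an even split across the surviving wires and the commonly cited ~9.5A per-contact rating for 12V-2x6 terminals:

```python
# If the card keeps pulling full power while wires drop out of a single
# bussed rail, per-wire current climbs fast. The 9.5 A figure is the
# commonly cited per-contact rating for 12V-2x6 terminals (approximate).

power_w, voltage_v, contact_rating_a = 600.0, 12.0, 9.5
total_current_a = power_w / voltage_v  # 50 A

for good_wires in (6, 5, 4, 3):
    per_wire_a = total_current_a / good_wires
    status = "within rating" if per_wire_a <= contact_rating_a else "OVER RATING"
    print(f"{good_wires} wires sharing the load: {per_wire_a:.1f} A each ({status})")
```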

If Nvidia had properly designed the card to regulate power delivery on the input side, which AMD and the 30-series and older cards did, this would not have been a problem.

Long and short of it: Nvidia wanted to save some costs and make more money, so they created a power-delivery system that is destined to fail, knowing that a lot of people will just replace the card with a new one and blame themselves rather than going through a lengthy RMA process.

The connector is likely fine. HOW the cards themselves treat the connector is the problem.
 
Active cables seem like an expensive and complicated solution to a problem caused by Nvidia wanting to save a tiny fraction of their production cost and have some nice aesthetics.

They could have done quad 8-pin... but that would have made it easier to visualize the huge power draw, and they'd have to buy and mount 4 power receptacles per card instead of one. They could have kept the 30-series circuit layout, but that would involve more circuitry and may be difficult to mount on its end for their slick angled positioning.
 
Active cables seem like an expensive and complicated solution to a problem caused by Nvidia wanting to save a tiny fraction of their production cost and have some nice aesthetics.

They could have done quad 8-pin... but that would have made it easier to visualize the huge power draw, and they'd have to buy and mount 4 power receptacles per card instead of one. They could have kept the 30-series circuit layout, but that would involve more circuitry and may be difficult to mount on its end for their slick angled positioning.
yup... people need to stop blaming the cables. The issue is with NGreeedia's desire to save a few $$ in production costs. Had they put proper current regulation/detection circuitry into their cards, none of this would be happening.
 
This is one of the holy grails of band-aid patches: a fix for an inherently bad design that got put into the spec, which the standards organisation now needs to defend, bluntly staying on the proposed standard.

It's apparently a lose-lose situation for Nvidia: if they keep going down this road, their brand reputation heads toward a cliff and potential class-action lawsuits.

If they admit fault, it's a financial and stock-value disaster to admit that the best GPU you can buy has been cheaping out on such critical things, and that people who bought brand-new ATX 3.1 PSUs now own a relic of dumbness. Those PSU vendors and users won't be happy to replace them with an ATX 4.0 model or revert those 1600W models back to 2.0-standard connectors.

In this situation, Nvidia seems dead-ended on both choices, and AMD is likely scrambling to notify AIBs that their 9070 XT can be sold at a hundred bucks more than MSRP since they don't adopt the new 12V-2x6 connector.
 
The 16-pin design is so stupid on so many levels.

The 4 "Data" pins don't send or receive any data. Instead, they are only there to tell the GPU whether the other end has 150W, 300W, 450W, or 600W of power capability connected. It does this in the stupidest way possible: with jumpers. If you know the pin-out, you can fool the GPU into believing it has 600W connected by shorting the correct wires.
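For reference, the encoding is just two sideband pins that the cable either grounds or leaves open; as I understand the table (worth verifying against the ATX 3.x / PCIe CEM spec), it looks roughly like this:

```python
# Sideband "sense" pin decoding as I understand it from ATX 3.x / PCIe CEM
# summaries: two pins, each either tied to ground or left open by the cable,
# advertise what the PSU side can supply. Worth double-checking against the spec.

SENSE_TABLE = {
    ("gnd",  "gnd"):  600,  # watts
    ("gnd",  "open"): 450,
    ("open", "gnd"):  300,
    ("open", "open"): 150,
}

def advertised_power_w(sense0: str, sense1: str) -> int:
    """Power limit the GPU infers purely from how the sense pins are strapped."""
    return SENSE_TABLE[(sense0, sense1)]

# Which is exactly why shorting the right pins "unlocks" 600W: the GPU cannot
# verify the claim, it just trusts the jumpers.
print(advertised_power_w("gnd", "gnd"))  # 600
```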

You know what else uses 4 pins and can send and receive diagnostic data between devices? USB 2.0.
I doubt it even costs that much to add some diagnostic circuitry on the PSU side to tell the GPU it's pulling too many amps over a couple of wires.

The fact that the connector has 3 power rails but the spec allows the omission of shunts is even more ridiculous. That older cards had shunts to prevent exactly this type of runaway melting situation is even more heinous.

And the fact that the connector is so sensitive that it can't even handle one insertion cycle without requiring a careful inspection is all you need to know.
OCuLink, for instance, started off with a guarantee of 50 insertion cycles before it wore out.

In fact, there are better power connectors. For instance, in the RC industry you can get an XT-90 power connector rated for 90A continuous and over 1,000 insertion cycles.

Instead, nvidia has decided to go with that 16-pin POS.
 
yup... people need to stop blaming the cables. The issue is with NGreeedia's desire to save a few $$ in production costs. Had they put proper current regulation/detection circuitry into their cards, none of this would be happening.

It's both. The cable, as designed, has practically zero safety headroom. Under ideal circumstances with all wires and pins working optimally, it can deliver slightly over 600W of power. Any failed wire or pin will cut that maximum value drastically, which is what is happening. Coupled with this, nVidia removed the per-pin safety feature that would automatically shut down wires in the event of a failure and thus reduce the maximum power draw of the card. Now when a wire or pin fails, the card continues to pull the same power, but the remaining wires have to handle the excess load, which is above what they were rated for.

Contrast this with the six- and eight-pin PCIe power connectors, which have a safe limit 2.2x higher than the rated limit. One or two pins could fail and the cable would experience no problems.
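Putting rough numbers on that headroom comparison (the per-contact current figures below are the commonly quoted ratings for Mini-Fit Jr HCS and Micro-Fit+ style terminals, so treat them as approximate):

```python
# Rough headroom comparison using commonly quoted per-contact ratings:
# about 9 A for Mini-Fit Jr HCS terminals (8-pin PCIe) and about 9.5 A for
# the Micro-Fit+ style terminals in 12V-2x6. Approximate figures, not spec text.

voltage_v = 12.0

# 8-pin PCIe: rated 150 W, delivered over three 12V contacts.
pcie8_rated_w = 150.0
pcie8_capable_w = 3 * 9.0 * voltage_v   # ~324 W
print(f"8-pin PCIe headroom: {pcie8_capable_w / pcie8_rated_w:.1f}x")   # ~2.2x

# 12V-2x6: rated 600 W, delivered over six 12V contacts.
hpwr_rated_w = 600.0
hpwr_capable_w = 6 * 9.5 * voltage_v    # ~684 W
print(f"12V-2x6 headroom: {hpwr_capable_w / hpwr_rated_w:.1f}x")        # ~1.1x
```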
 
You know what else uses 4 pins and can send and receive diagnostic data between devices? USB 2.0.
I doubt it even costs that much to add some diagnostic circuitry on the PSU side to tell the GPU it's pulling too many amps over a couple of wires.

USB would be a very bad idea: it's not reliable enough, requires one side to be a host controller, and gets into messy signaling. Thankfully we don't need something as convoluted as USB for this kind of communication; GPIO would serve perfectly well for bidirectional serial communication, with a very simple signaling protocol that both ends could negotiate around.

Ultimately, though, it wouldn't fix this problem. nVidia did a very dumb thing: they bridged all their power lines into a single massive line internally, meaning from the card's POV there is just one big power line. From the card's POV it can't distinguish the power draw between different lines and know if one is over or under. Third-party vendors have attempted to remedy this by adding their own monitoring circuitry in the hope of shutting down the card before it burns something out.
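As a sketch of what that third-party monitoring boils down to: read a shunt per wire, compare against a threshold, throttle or kill the card. Everything below (the read_shunt_current() stand-in, the thresholds, the reaction hooks) is hypothetical illustration, not any vendor's actual firmware:

```python
# Hypothetical sketch of per-wire monitoring, roughly what add-on monitoring
# circuitry aims to do. read_shunt_current() stands in for a real per-wire
# shunt ADC; the thresholds and reaction hooks are invented for illustration.

import time

NUM_WIRES = 6
WARN_LIMIT_A = 9.0    # illustrative: just under a ~9.5 A contact rating
TRIP_LIMIT_A = 11.0   # illustrative: hard shutdown threshold

def read_shunt_current(wire: int) -> float:
    """Stand-in for reading one wire's shunt through an ADC (board-specific)."""
    return 8.3  # placeholder so the sketch runs

def throttle_power_limit() -> None:
    print("warning: a wire is near its limit, lowering the power target")

def shut_down_card(reason: str) -> None:
    raise SystemExit(f"emergency shutdown: {reason}")

def monitor_once() -> None:
    for wire in range(NUM_WIRES):
        amps = read_shunt_current(wire)
        if amps > TRIP_LIMIT_A:
            shut_down_card(f"wire {wire} at {amps:.1f} A")
        elif amps > WARN_LIMIT_A:
            throttle_power_limit()

if __name__ == "__main__":
    while True:          # poll at roughly 100 Hz
        monitor_once()
        time.sleep(0.01)
```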
 
It's both. The cable, as designed, has practically zero safety headroom. Under ideal circumstances with all wires and pins working optimally, it can deliver slightly over 600W of power. Any failed wire or pin will cut that maximum value drastically, which is what is happening. Coupled with this, nVidia removed the per-pin safety feature that would automatically shut down wires in the event of a failure and thus reduce the maximum power draw of the card. Now when a wire or pin fails, the card continues to pull the same power, but the remaining wires have to handle the excess load, which is above what they were rated for.

Contrast this with the six- and eight-pin PCIe power connectors, which have a safe limit 2.2x higher than the rated limit. One or two pins could fail and the cable would experience no problems.
Actually, IMO, in this respect it's still the ATX 3.x standard's issue: the connector basically limits what the cable can do within the standard spec, so it literally has no headroom or balancing capability by the standard itself.

As for why Nvidia removed the safety rail splitting, a conspiracy theory circulating is that Nvidia found that if such a feature were implemented, more reported cases in the form of RMAs would appear, exposing the high failure rate of the new connector at the 4090's power draw, so they decided to screw us and blame user error instead of blaming themselves and ending up with an ATX 4.0 right after the 3.0 standard.