News 12VHPWR adapters sporting heatsinks and thermal pads show how problematic the connector is

I just don't understand WHY. Today's huge PCBs have plenty of space for six to eight old, proven 8-pin connectors if needed, and 4 to 6 of them could easily sit on the back of the card where they wouldn't be noticeable in tidy builds. But no, we get this crazy burning plug with its cable right in the middle of the card.
 
The issue is they are pushing these cables out of spec: 600 W is the max, not 620, not 680. The last generation wasn't pulling this much power, so it didn't cause this much of an issue - it wasn't cranking at 600 watts constantly. I mean, they are literally admitting the cables and connectors can't take the heat by slapping thermal pads on them.

New name: RTX 5090 Duct Tape Edition, or Fix It Again Tony Edition.
 
I just don't understand WHY. Today's huge PCBs have plenty of space for six to eight old, proven 8-pin connectors if needed, and 4 to 6 of them could easily sit on the back of the card where they wouldn't be noticeable in tidy builds. But no, we get this crazy burning plug with its cable right in the middle of the card.
Same question from me since the debut of Ada Lovelace. Going from something that worked to this craziness is just plain stupidity IMO. The only upside of the new connector is that, when it isn't melting, it gives you less cable clutter to worry about.
 
I just don't understand WHY. Today's huge PCBs have plenty of space for six to eight old, proven 8-pin connectors if needed, and 4 to 6 of them could easily sit on the back of the card where they wouldn't be noticeable in tidy builds. But no, we get this crazy burning plug with its cable right in the middle of the card.

Because it doesn't look pretty having 4 of these connectors hanging out lol. They should just put 2 of these connectors on the end and call it a day.
 
Because it doesn't look pretty having 4 of these connectors hanging out lol. They should just put 2 of these connectors on the end and call it a day.
If placed on the rear side of the card (which is barely seen), it would allow for good-looking builds that route all these cables the short way through the backplate. Compared to the current design, with the cable routed from the center of the card's side through the middle of the machine, it would be the pinnacle of beauty.
 
What I don't understand is why they went with 12 mini power connectors if they're going to fuse them to a single rail.

Take a look at what an XT90 can do with just 2 wires.
And the kicker is it's smaller, safer, and higher endurance than the 12VHPWR.
The cables aren't even that thick.
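For rough scale, a quick calculation shows why a single high-current connector pair could plausibly cover a GPU's draw. The XT90 rating below is the commonly quoted continuous figure from RC/drone use, not from any PC specification:

```python
# Back-of-the-envelope comparison (assumed ratings, not from any spec):
# - 12VHPWR: six 12 V pin pairs sharing the load
# - XT90: a single contact pair, commonly quoted at ~90 A continuous
def required_current(watts, volts):
    """Current a connector must carry for a given power draw."""
    return watts / volts

gpu_power = 600  # W, the 12VHPWR maximum
print(required_current(gpu_power, 12))  # 50.0 A -- within one XT90's quoted rating
```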

Because they know nothing of flow mechanics. It's like they wired an entire house to the mains without circuit breakers. They have zero business making electronics if they have zero idea how electricity transfers and flows. Six traces, six shunt resistors, for 600 W, so each pin is nominally balanced to 100 W, or 8.3 A. Zero power regulation on the card. That means the actual split comes down to a combination of the pin contact resistance, the natural difference in resistance of each individual wire, and the behavior of that particular card. Some cards will throw all the wattage down a single line. Some will throw it down a couple. Some will spread it across all of them equally. It was never user error! NVIDIA lied. NVIDIA has thus been caught, with provable evidence, dealing in legal malpractice and corporate espionage. "Closed source" the legal system. "Closed source" the market in a free and open economy. So, a monopoly...
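Tone aside, the underlying point about passive load sharing is real and easy to illustrate: with no active regulation, each pin's share of the current is set purely by its path resistance. A minimal sketch using an ideal current-divider model, with illustrative (not measured) resistance values:

```python
# Sketch: current sharing across parallel pins with no active balancing.
# Each pin's share is set purely by its contact + wire resistance.
# Resistance values below are illustrative, not measured.
def pin_currents(total_current, resistances):
    """Split a total current across parallel paths (ideal current divider)."""
    conductances = [1 / r for r in resistances]
    g_total = sum(conductances)
    return [total_current * g / g_total for g in conductances]

# Six healthy pins at 5 mohm each share 50 A evenly:
print(pin_currents(50, [0.005] * 6))            # ~8.33 A per pin

# One degraded pin at 50 mohm pushes its load onto the others:
print(pin_currents(50, [0.05] + [0.005] * 5))   # ~0.98 A vs ~9.8 A on the rest
```

A single bad contact doesn't just run cool itself - it silently overloads its neighbours, which is exactly the failure mode reported with melted connectors.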
 
If placed on the rear side of the card (which is barely seen), it would allow for good-looking builds that route all these cables the short way through the backplate.
The PCB itself is half the card's length if not less. To put connectors in the back, you need extra wiring from there to the PCB. Extra assembly costs, extra heatsink machining costs, degraded airflow and cooling performance, more things that can potentially go wrong between the power connectors and PCB.
 
Extra cooling components on 16-pin adapters demonstrate that the standard is fragile compared to 8-pin and 6-pin connectors.

There are a few phrases that fit this:
"putting a band-aid on the situation"
"diamond studding a piece of sh*t"

But seriously, there are better connectors to use. I might go as far as desoldering the connector and hard-wiring it, since I use modular power supplies; eliminating one of the two plugs the current goes through removes one source of high contact impedance.

Their bad choice of plug has a poor impedance profile, which is why the pins heat up in the first place. And it's the lowest-priced connector they could pick.
 
Amp rating for 16 AWG is 13 A. Period. Max. That's the safety standard. For decades. It's the literal electrical code for 16 AWG. Laws are already in place. NEC is wrong.
Not quite right - it is a code for installation in walls, where the wires might not be able to dissipate heat efficiently.

The maximum current a given wire can handle is an engineering problem and depends on design details. You figure it out by computing heat dissipation in the wire and finding the cooling rate. Then you compute the wire temperature in operation and compare it to the maximum temperature for the insulation and the wire itself. Safety factors are needed in case a wire gets nicked, ambient temperature increases, or for other reasons. High-temperature insulation (like PTFE) allows more current in the same gauge of wire.

Take a look at the following chart:

https://www.powerstream.com/Wire_Size.htm

You see the difference in amps for chassis wiring versus power transmission? This is because a wire in air inside a case is cooled more efficiently than a wire inside a wall.
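The chassis-versus-transmission gap in that chart can be sanity-checked with a quick I²R estimate. The resistance figure below is a nominal 20 °C value for 16 AWG copper; real wire varies with temperature and alloy:

```python
# Rough I^2*R check for a single 16 AWG copper wire (~13.2 mohm/m at 20 C).
R_PER_M = 0.0132  # ohm per meter, nominal 16 AWG copper

def watts_per_meter(current_a):
    """Heat dissipated per meter of wire at a given current."""
    return current_a ** 2 * R_PER_M

for amps in (9.5, 13, 20):
    print(f"{amps:>5} A -> {watts_per_meter(amps):.2f} W/m")
# 9.5 A dissipates ~1.2 W/m; 20 A already ~5.3 W/m in the same wire.
```

Whether a given W/m figure is safe depends entirely on how fast the surroundings can carry that heat away - which is exactly why the chart's two columns differ.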
 
Not quite right - it is a code for installation in walls, where the wires might not be able to dissipate heat efficiently.

The maximum current a given wire can handle is an engineering problem and depends on design details. You figure it out by computing heat dissipation in the wire and finding the cooling rate. Then you compute the wire temperature in operation and compare it to the maximum temperature for the insulation and the wire itself. Safety factors are needed in case a wire gets nicked, ambient temperature increases, or for other reasons. High-temperature insulation (like PTFE) allows more current in the same gauge of wire.

Take a look at the following chart:

https://www.powerstream.com/Wire_Size.htm

You see the difference in amps for chassis wiring versus power transmission? This is because a wire in air inside a case is cooled more efficiently than a wire inside a wall.
Which means that, given typical cable management, quite a portion of said wire inside the case will be crammed into an area with adjacent wires and no airflow.
 
Which means that, given typical cable management, quite a portion of said wire inside the case will be crammed into an area with adjacent wires and no airflow.
How many instances are there of a wire failing before something at the connector ends failed? Probably none. Unless you repeatedly bend a wire at the same place until metal-fatigue failure, wiring hardly ever fails on its own, even with a continuous 30% overload.

Nothing in a PC case compares to being surrounded by R5 insulation. Even the deadest of dead spots inside a case still has a breeze from fans churning air around, plus convection. If you jam wires between the side panel and the motherboard plate, the panel and plate act as heatsinks and radiators for conducted heat, not insulation.
 
When a cable is melting, just use a thicker one!!! NVIDIA & co. don't know this elementary thing????
Thicker cables wouldn't help much when the root issue is connector pins being defective, coming loose, or breaking - the only way you can have 1 A on some 12 V wires and 20+ A on others is if something along the 1 A wires is broken, e.g. vibration and cable strain fracturing the connection somewhere between the pin and the crimp over time.

Thicker wires are heavier and harder to bend, which would further increase the likelihood of connector damage.
 
There is a discussion on what users want in a new connector here:


Short summary: 48 V, one wire for positive, one for ground, two sense wires to tell whether the power wires are overloading (so you don't have to reset fuses), and two data wires to talk to the PSU. 48 V could also be used to provide the motherboard with power ready for USB-C connectors.
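The case for 48 V in one line: for the same delivered power, current drops 4x and resistive cable loss drops 16x. A quick sketch, with an assumed (purely illustrative) round-trip cable-plus-connector resistance:

```python
# Why 48 V helps: for the same delivered power and cable resistance,
# conduction loss scales with I^2, so quadrupling voltage cuts cable
# loss by 16x. The resistance value is an assumption for illustration.
R_CABLE = 0.01  # ohm, round-trip cable + connector resistance (assumed)

def cable_loss(watts, volts, r=R_CABLE):
    """Power wasted as heat in the cable for a given load."""
    i = watts / volts
    return i ** 2 * r

print(cable_loss(600, 12))  # 25.0 W dissipated in the cable at 12 V
print(cable_loss(600, 48))  # ~1.56 W at 48 V, same load, same cable
```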
How about using a connector designed for high current from the start? For example, the XT90 has been demonstrated to work well in drones, so surely it would work with GPUs too.
 
Actually no, this is the first time anyone mentioned it. I looked online and could not find the specs for PCIe CEM 48VHPWR - they appear to be behind a paywall. Do you have a link? I am curious to compare.


It is not just the higher voltage. There is a way to see how much voltage drop you have from the power supply to the card, so that the output can be disconnected when the drop is too big due to poor contact or excessive current.
Haha, oops, didn't notice you were the one posting over at TPU.

I have access to the spec from PCI SIG through my employer, don't have a free link to share unfortunately. But it is largely similar to what you said, minus the data communication protocol between the GPU and PSU.
- 2 power-carrying wires, with what looks like some form of blade connectors
- 48V
- 4 aux/signal wires. However, their use is different from what you suggested; I think they're the same as the 12V-2x6 aux pins. Two are used as sense wires, letting the PSU set the power limit for the connector. The other two are optional, and used to provide some rudimentary feedback from the GPU to the PSU (a 'cable connection detected' signal and a 'power ok' signal).
 
Amp rating for 16 AWG is 13 A. Period. Max. That's the safety standard. For decades. It's the literal electrical code for 16 AWG. Laws are already in place. NEC is wrong.
"NEC" stands for "National Electrical Code". The NEC is the "literal electrical code" (in the US).

Edit: Also, the NEC itself isn't law. Although many states base their laws upon it.
 
It is not just the higher voltage. There is a way to see how much voltage drop you have from the power supply to the card, so that the output can be disconnected when the drop is too big due to poor contact or excessive current.
A smarter way to do it is to monitor voltage drop from the PSU's bus bar to the load's DC-in power plane and throttle the load to keep voltage drop within specs. That way, instead of getting a sudden shut-off, the load gets progressively power-limited by supply droop allowance being maxed out.

When using cable droop for throttling, the remote voltage sense could also be manipulated by the PSU to indirectly communicate how much power headroom is available instead of blindly running into OCP, though this method doesn't account for variability in cable and connector voltage drop and requires a calibration method.

You could communicate actual PSU load% through a 20kHz 1-99% PWM signal but that will never be as fast as the immediate feedback from getting Vsense+ from the other side of a shunt resistor.

The two in combination would enable loads (their drivers or firmware) to calculate exactly how much power is available on tap, measure total resistance between DC-in+ and wherever Vsense+ is coming from, likewise for ground-to-ground if you want to monitor that. All that is left to do after this is program throttling parameters to keep Vdroop+ (and Vdroop- if desired) below threshold.
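The throttling loop described above could be sketched roughly like this. All thresholds, step sizes, and voltage readings are hypothetical placeholders, not from any spec:

```python
# Sketch of droop-based throttling as described above: the load compares
# the remote sense voltage (PSU side) against its own DC-in plane and
# backs off its power limit before the PSU's protection would trip.
# MAX_DROOP_V and STEP_W are assumed values for illustration only.
MAX_DROOP_V = 0.24   # assumed allowance for cable + connector drop, volts
STEP_W = 10          # power-limit adjustment per control cycle, watts

def throttle_step(power_limit, v_sense, v_dcin, max_limit=600):
    """One control iteration: shrink or grow the power limit based on droop."""
    droop = v_sense - v_dcin
    if droop > MAX_DROOP_V:
        return max(0, power_limit - STEP_W)       # droop allowance maxed out: back off
    return min(max_limit, power_limit + STEP_W)   # headroom left: recover toward max

print(throttle_step(600, 12.0, 11.70))  # droop 0.30 V > allowance -> 590
print(throttle_step(400, 12.0, 11.85))  # droop 0.15 V, headroom   -> 410
```

The point of the gradual step is exactly what the post argues: the load degrades smoothly as droop allowance runs out, instead of the PSU hard-cutting the output.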
 
A smarter way to do it is to monitor voltage drop from the PSU's bus bar to the load's DC-in power plane and throttle the load to keep voltage drop within specs. That way, instead of getting a sudden shut-off, the load gets progressively power-limited by supply droop allowance being maxed out.

When using cable droop for throttling, the remote voltage sense could also be manipulated by the PSU to indirectly communicate how much power headroom is available instead of blindly running into OCP, though this method doesn't account for variability in cable and connector voltage drop and requires a calibration method.

You could communicate actual PSU load% through a 20kHz 1-99% PWM signal but that will never be as fast as the immediate feedback from getting Vsense+ from the other side of a shunt resistor.

The two in combination would enable loads (their drivers or firmware) to calculate exactly how much power is available on tap, measure total resistance between DC-in+ and wherever Vsense+ is coming from, likewise for ground-to-ground if you want to monitor that. All that is left to do after this is program throttling parameters to keep Vdroop+ (and Vdroop- if desired) below threshold.
Ah~ best logic ever: instead of Nvidia designing a product that draws massive power around Nvidia's own proposed shiny new 12VHPWR/12V-2x6 connector with built-in load balancing and safety features on Nvidia's side, let's push the PSU vendors to implement that for Nvidia, and maybe rate the cables for 2 or 3 mating cycles in an "I told you so" manner.

👍
 
Ah~ best logic ever: instead of Nvidia designing a product that draws massive power around Nvidia's own proposed shiny new 12VHPWR/12V-2x6 connector with built-in load balancing and safety features on Nvidia's side, let's push the PSU vendors to implement that for Nvidia, and maybe rate the cables for 2 or 3 mating cycles in an "I told you so" manner.
Since 12VHPWR doesn't have provisions for remote-Vsense pins, implementing that would require a new standard.
 
Haha, oops, didn't notice you were the one posting over at TPU.

I have access to the spec from PCI SIG through my employer, don't have a free link to share unfortunately. But it is largely similar to what you said, minus the data communication protocol between the GPU and PSU.
- 2 power-carrying wires, with what looks like some form of blade connectors
- 48V
- 4 aux/signal wires. However, their use is different from what you suggested; I think they're the same as the 12V-2x6 aux pins. Two are used as sense wires, letting the PSU set the power limit for the connector. The other two are optional, and used to provide some rudimentary feedback from the GPU to the PSU (a 'cable connection detected' signal and a 'power ok' signal).

I wonder if there is any way to get in contact with the people at the SIG who design this. The sense wires would allow the power supply to measure voltage directly on the ground and DC planes of the card, and let the power supply protect itself, the cable, and the load.

As for the data communication protocol, I think it would be very useful for the load to tell the PSU what power it wants, and for the user to monitor system health from the PSU's viewpoint.

A smarter way to do it is to monitor voltage drop from the PSU's bus bar to the load's DC-in power plane and throttle the load to keep voltage drop within specs. That way, instead of getting a sudden shut-off, the load gets progressively power-limited by supply droop allowance being maxed out.

When using cable droop for throttling, the remote voltage sense could also be manipulated by the PSU to indirectly communicate how much power headroom is available instead of blindly running into OCP, though this method doesn't account for variability in cable and connector voltage drop and requires a calibration method.

You could communicate actual PSU load% through a 20kHz 1-99% PWM signal but that will never be as fast as the immediate feedback from getting Vsense+ from the other side of a shunt resistor.

The two in combination would enable loads (their drivers or firmware) to calculate exactly how much power is available on tap, measure total resistance between DC-in+ and wherever Vsense+ is coming from, likewise for ground-to-ground if you want to monitor that. All that is left to do after this is program throttling parameters to keep Vdroop+ (and Vdroop- if desired) below threshold.
All good suggestions; some of these we discussed at the link below. I personally would be in favour of the PSU monitoring the voltage-plane sense wires (thanks for suggesting the name!) and cutting the load if anything is out of order. I wonder if there is a way to get people in the industry to listen to these suggestions.

 
I wonder if there is any way to get in contact with the people at the SIG who design this. The sense wires would allow the power supply to measure voltage directly on the ground and DC planes of the card, and let the power supply protect itself, the cable, and the load.
Unfortunately, that's not how the standard was designed. The two "sense" pins are just fixed, binary values (either open or grounded). They are a way for the PSU to indicate how much power the connector can provide, from 150 to 600 W in increments of 150 W. As InvalidError said, adding actual remote voltage sensing would need a new standard and new PSUs.

PCI SIG has a contact page, but I really doubt they're interested in design input directly from members of the public.

https://pcisig.com/membership/contact-us
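For reference, those sense pins behave like static straps rather than analog sense lines. A small sketch of the commonly published ATX 3.0 mapping (worth double-checking against the actual spec before relying on it):

```python
# 12VHPWR sense-pin decoding as commonly published for ATX 3.0:
# each pin is either tied to ground ("Gnd") or left unconnected ("Open"),
# and the combination tells the GPU the connector's power limit.
POWER_LIMITS_W = {
    ("Gnd", "Gnd"):   600,
    ("Open", "Gnd"):  450,
    ("Gnd", "Open"):  300,
    ("Open", "Open"): 150,
}

def connector_limit(sense0, sense1):
    """Power limit (W) the PSU advertises via the SENSE0/SENSE1 straps."""
    return POWER_LIMITS_W[(sense0, sense1)]

print(connector_limit("Gnd", "Gnd"))    # 600
print(connector_limit("Open", "Open"))  # 150
```

Note there is no feedback path here at all: the PSU states a fixed budget and has no visibility into per-pin current or voltage drop on the card side.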
 