News Programmer Creates 12VHPWR Safety Mechanism for RTX 4090

Nvidia designed a new connector for nothing: the old one, created back in 2003, never had any issues. The older connector is bigger, with a larger contact area for greater current flow, so it could have kept serving for another 10-50 years, especially since power draw should decrease over time thanks to better efficiency.
Nvidia instead created a connector that is too small, with contact surfaces that are too small, and it causes burning issues due to a bad design. It's strange that people knew how to connect their power plug from 2003 to 2023, but this year they suddenly no longer know how to do it.

Why do graphics card makers keep using this brand-new bad connector instead of going back to the older one?
This world is truly in decline.
 
A Reddit user created a custom program to monitor the connector temperature of 12VHPWR-equipped graphics cards like the RTX 4090 (one of the Best GPUs), which have earned a reputation for melting. The program uses a 10 kΩ thermistor that measures the temperature of the connector to warn the user or protect the graphics card from potential damage.
Correction: What they actually wrote was just a little Python script (that I think they didn't even share!), which used the liquidctl library to read the thermal probe from their Corsair iCUE Commander box.

The latter point is important, because liquidctl supports only a limited number of devices.

So, I'm not sure how general a solution this actually is. I'm also genuinely curious just how many motherboards have a connector for such temperature sensors, but the real limiting factor might be having software you can use to read it.
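For what it's worth, here's a rough sketch of what such a polling script might look like. This is not the Reddit user's code (they apparently didn't share it); the device matching, the "Temperature 1" label, and the 60 °C threshold are all my assumptions:

```python
# Minimal sketch, not the actual script from the Reddit post: poll a
# thermistor attached to a Corsair Commander-style controller via
# liquidctl and warn when the 12VHPWR probe gets too hot. The sensor
# label ("Temperature 1") and the 60 °C threshold are assumptions.
import time

from liquidctl import find_liquidctl_devices

ALERT_C = 60.0      # assumed warning threshold, in °C
POLL_SECONDS = 5


def probe_temp(dev, label="Temperature 1"):
    """Return the reading for the probe plugged into the assumed header."""
    for key, value, unit in dev.get_status():
        if key == label and unit == "°C":
            return value
    return None


commander = next(
    (d for d in find_liquidctl_devices() if "Commander" in d.description),
    None,
)
if commander is None:
    raise SystemExit("no Corsair Commander-style device found")

with commander.connect():
    commander.initialize()
    while True:
        temp = probe_temp(commander)
        if temp is not None and temp >= ALERT_C:
            print(f"WARNING: 12VHPWR probe at {temp:.1f} °C")
        time.sleep(POLL_SECONDS)
```

Since liquidctl's get_status() just returns (label, value, unit) tuples, the same loop should work for any supported controller that exposes a temperature probe, which is exactly why device support is the limiting factor.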

What might be of more interest is the temperature data shared in the Reddit post.

Take careful note of the various caveats.
 
power draw should decrease over time thanks to better efficiency.
GPUs do get more efficient over time. However, rather than delivering the same performance at lower power, they exploit those efficiencies by delivering better performance at the same (or even greater) power!

What's worse is that they run these devices well above their peak-efficiency point, chasing ever diminishing returns for the ability to claim the highest performance.

Basically, if a GPU maker can deliver X performance by making a bigger die or by increasing clock speeds, they'll go with the latter option, because that costs them much less. The downside is that it burns more power.

We're seeing a similar story with CPUs. In both cases, if customers only cared about efficiency, then perf/$ would suffer.
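To put a toy number on the "diminishing returns" point: dynamic power scales roughly with frequency times voltage squared, and near the top of the V/f curve the voltage has to keep climbing with the clock. The figures and the scaling assumption below are made up purely for illustration; it's the shape of the curve that matters:

```python
# Toy illustration of diminishing returns: assume power ~ f * V^2 and that
# voltage scales roughly with frequency near the top of the V/f curve.
# All numbers are invented; only the trend is meaningful.
base_f, base_v, base_p = 2.0, 0.9, 200.0   # GHz, volts, watts (made up)

for f in (2.0, 2.2, 2.4, 2.6, 2.8):
    v = base_v * (f / base_f)              # crude voltage/frequency assumption
    p = base_p * (f / base_f) * (v / base_v) ** 2
    perf = f / base_f                      # performance assumed ~ clock speed
    print(f"{f:.1f} GHz: ~{p:3.0f} W, perf x{perf:.2f}, perf/W x{perf / (p / base_p):.2f}")
```

In this toy model, the top of the range costs roughly 2.7x the power for 1.4x the performance, which is the sense in which the efficiency sweet spot sits well below the maximum clock.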
 
Something similar to this should also be used with devices that people charge overnight, like smartphones or electric bicycles, which sometimes explode while charging due to battery failure and burn down the house. A heat detector should be placed on the battery area and raise an alarm if the battery's temperature starts to rise, before it catches fire; that would give the residents some time to react.
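The alarm logic being described is simple enough to sketch. Here's a toy version in Python with made-up thresholds and a stand-in sensor read; the real thing would have to live in the charger or a dedicated monitor:

```python
# Toy sketch of a battery temperature alarm: trip on an absolute limit or
# on an unusually fast rise. Thresholds are guesses and read_temp is a
# stand-in for whatever sensor the hardware actually exposes.
import random
import time

MAX_TEMP_C = 60.0         # assumed absolute alarm threshold
MAX_RISE_C_PER_MIN = 2.0  # assumed rate-of-rise threshold


def monitor(read_temp, poll_seconds=30):
    """Warn before a failing cell gets anywhere near ignition."""
    last = read_temp()
    while True:
        time.sleep(poll_seconds)
        temp = read_temp()
        rise_per_min = (temp - last) * 60.0 / poll_seconds
        if temp >= MAX_TEMP_C:
            print(f"ALARM: battery at {temp:.1f} °C")
        elif rise_per_min >= MAX_RISE_C_PER_MIN:
            print(f"ALARM: battery heating at {rise_per_min:.1f} °C/min")
        last = temp


if __name__ == "__main__":
    # Simulated sensor so the sketch runs anywhere; swap in real hardware.
    monitor(lambda: 25.0 + random.random(), poll_seconds=5)
```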
 
Something similar to this should also be used with devices that people charge overnight, like smartphones or electric bicycles, which sometimes explode while charging due to battery failure and burn down the house. A heat detector should be placed on the battery area and raise an alarm if the battery's temperature starts to rise, before it catches fire; that would give the residents some time to react.
I'm no expert on the matter, but I think these Li-ion batteries tend to have a built-in controller and I'd bet they do typically also have temperature sensors. I have the sense that it's usually the cheap ones which explode, so maybe you're right that those don't.

As e-bikes, scooters, cars, etc. get more popular, you're right that we're going to have to solve the whole "catch fire and die" problem. I know I've heard a couple times about promising next-gen battery technologies that won't do that, but then I keep wondering why we're still using Li-ion.
 
GPUs do get more efficient over time. However, rather than delivering the same performance at lower power, they exploit those efficiencies by delivering better performance at the same (or even greater) power!

What's worse is that they run these devices well above their peak-efficiency point, chasing ever diminishing returns for the ability to claim the highest performance.

Basically, if a GPU maker can deliver X performance by making a bigger die or by increasing clock speeds, they'll go with the latter option, because that costs them much less. The downside is that it burns more power.

We're seeing a similar story with CPUs. In both cases, if customers only cared about efficiency, then perf/$ would suffer.

Yes, but most of the time it's better performance for lower power, like the 3060 Ti vs. the 4060 Ti.
NVIDIA recommends a 550W PSU for the RTX 4060 Ti and a 600W PSU for the 3060 Ti.

The RTX 3080 (10GB) has a graphics card power of 320W, while the 12GB variant takes the graphics card power up to 350W.
The RTX 4080 can consume less power than its predecessor, with the 16GB variation needing 320W, while the 12GB variation takes that down to 285W.


It may not be the case every time, but most of the time the newer card consumes less power or about the same. GPU manufacturers now limit the power you can supply to the card and even drop the frequency if it runs too hot. For some years now it has probably been better to spend effort improving cooling and the power/frequency ratio than drawing as much current as possible. Frequencies drop as temperature rises, so to get better performance it's better to cool your card and keep maximum performance over time.
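If you want to see those limits on your own card, here's a small sketch using the nvidia-ml-py (pynvml) bindings; it assumes an Nvidia GPU and a recent driver, and nvidia-smi reports the same numbers:

```python
# Quick check of the power/thermal limits mentioned above, via the
# nvidia-ml-py (pynvml) bindings. Assumes an Nvidia GPU with a recent
# driver; install with "pip install nvidia-ml-py".
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)
    draw_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0           # reported in mW
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0  # board power cap
    temp_c = pynvml.nvmlDeviceGetTemperature(gpu, pynvml.NVML_TEMPERATURE_GPU)
    clock_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    print(f"drawing {draw_w:.0f} W of a {limit_w:.0f} W limit "
          f"at {temp_c} °C, core clock {clock_mhz} MHz")
finally:
    pynvml.nvmlShutdown()
```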

This connector is useless and manufacturers shouldn't use it. Return all the cards to Nvidia and ask for a full refund due to the major design issues.
 
The RTX 3080 (10GB) has a graphics card power of 320W, while the 12GB variant takes the graphics card power up to 350W.
The RTX 4080 can consume less power than its predecessor, with the 16GB variation needing 320W, while the 12GB variation takes that down to 285W.
Wow, that's old info because the "12 GB variation" of the RTX 4080 got launched as the RTX 4070 Ti.

It doesn't really mean anything, because product names and specs are rather fluid. If you want to find deeper truths in power usage, you should look at trends spanning a lot more data points and probably at least 3 generations. In case you're interested in pursuing the matter, Wikipedia is a good resource.

TechPowerUp's GPU Database is another.
 