News Another RTX 5090 connector melts down, reportedly taking a Redditor's PSU with it

🎵 Tale as old as time... 🎶

No 12VHPWR/12V-2X6 connector is safe. It sucks that it took out his PSU and his card.

The SF1000 is supposed to be ATX 3.1 certified. I thought that meant it used the new 12V-2X6 instead of the 12VHPWR, but it still says 12VHPWR on the website. Maybe the site just needs updating...?
 
"As of now, the only foolproof method to secure your RTX 50 series GPU is to ensure optimal current draw through each pin. You might want to consider Asus' ROG Astral GPUs as they can provide per-pin current readings, a feature that's absent in reference RTX 5090 models."

Alternatively, don't buy an Nvidia GPU until they (Nvidia and its board partners) properly resolve this design issue. Both iterations of the 12V connector have serious design flaws in that the supporting hardware on the cards does not balance the load. Anything drawing close to 600 watts doesn't strike me as being very energy efficient.
 
When the connector was first trialed, on the 3090Ti/3090/3080Ti/3080 FE cards, NVIDIA put three shunt resistors (one per 12V pair) on the card PCB itself. We had no issues because the card just wouldn't power on (or would power itself off) if the load was significantly unbalanced.
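To make that concrete, here's a minimal sketch of what that per-pair protection logic amounts to. The function names and the 25% threshold are hypothetical, not NVIDIA's actual firmware:

```python
# Hypothetical sketch of per-pair load-balance protection, in the spirit of
# the RTX 30x0 FE design: one shunt per pair of 12V pins, and the card
# refuses to run if any pair is carrying far more than its share.

PAIRS = 3              # six 12V pins, monitored in pairs
MAX_IMBALANCE = 0.25   # trip if a pair deviates >25% from the mean (made-up threshold)

def read_shunt_currents() -> list[float]:
    """Placeholder for reading amps across each of the three shunt resistors."""
    raise NotImplementedError("hardware-specific")

def load_is_balanced(currents: list[float]) -> bool:
    mean = sum(currents) / len(currents)
    if mean == 0:
        return True  # no load, nothing to balance
    return all(abs(c - mean) / mean <= MAX_IMBALANCE for c in currents)

def power_on_check() -> bool:
    # 30x0-style behavior: refuse to power on (or power off) rather than melt.
    return load_is_balanced(read_shunt_currents())
```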

NVIDIA was proud of their creation! They took their wonderful idea to PCI-SIG (the body that creates the specs and runs compliance testing). NVIDIA is one of PCI-SIG's biggest members, and my guess is that NVIDIA threw a ton of money at getting the connector certified, showing PCI-SIG their test data from the RTX 30x0 series cards.

What no one knew (or failed to report if they did know) was that the extra shunt resistors on the RTX 30x0 cards were masking a critical flaw in the 12VHPWR design. When the 40x0 series cards came out, 12VHPWR was the new, fully certified spec, but NVIDIA (on the FE cards) wired the shunt resistors in a way that nullified any load-balancing protection: the 12V pins are merged into a single rail before the sensing point, so the card can only see total current, not a per-pin imbalance. Melting 4090s ensued, and melting 40x0 and 50x0 cards continue to this day.

As to why it's not fixed...? I seriously think it's hubris on NVIDIA's side at this point. NVIDIA is the biggest company in the world. They can't be wrong, right?

NVIDIA could go back to the 30x0 series design and start putting a shunt resistor on the PCB for each pair of 12V pins on all their FE cards. They could put out an addendum to their card design guidelines stating that shunt resistors are required for all xx90 and xx80 class cards. Why don't they...? Too much fallout is my guess. By doing this they would admit that the design is at fault. They are just too proud and/or too much of a bully to admit that they made a mistake and were wrong.
 
No 12VHPWR/12V-2X6 connector is safe. It sucks that it took out his PSU and his card.

To be fair, all 12VHPWR connectors are (to my knowledge) entirely safe on the 30x0 series of cards.

Nvidia went beyond the spec and added shunt resistors to the cards' power subsystems, which prevented this sort of ridiculousness.

EDIT: Just now seeing your second post. Already stated.
They took their wonderful idea to PCI-SIG
PCI-SIG needs another update to mandate those shunts as an official part of the spec.
 
To be fair, all 12VHPWR connectors are (to my knowledge) entirely safe on the 30x0 series of cards.

Nvidia went beyond the spec and added shunt resistors to the cards' power subsystems, which prevented this sort of ridiculousness.
See my post above yours. The 12VHPWR spec is not safe.
Any spec that requires extra circuitry outside of the spec itself to be safe is not itself safe.
 
PCI-SIG needs another update to mandate those shunts as an official part of the spec.
And on the PSU side as well? Good luck.

The only proper way forward is to completely bury this standard. Even going with the EPS connector would be a fast and sane mitigation of the issue.
 
They can't. The connector is a spec. They can't say that to use this connector you need something outside of the connector spec scope. Maybe they could retract the spec as a whole...? Not sure.
The PCIe card spec is the PCIe CEM (Card Electromechanical) specification <-- not just a connector spec. It has all kinds of electrical requirements for PCIe cards, and the CEM spec is exactly where this circuit requirement would be documented as a recommended implementation.
 
Uh, so, I'm still rocking my 1060 6GB, unsure why some people buy a new GPU every year or two. One can play most games, such as RDR2 and Witcher 3, even with a 1050 Ti: mid-to-high settings, 30-45 fps, easily. Tweaking graphics settings is still an art. This isn't an iPhone. There are sliders, and one should claw their way through them to discover/experience speed, temps, and quality if one wishes to get their bang for their buck. But hey, who am I to judge.
 
A Redditor's $2,900 RTX 5090 suffered a melted connector, which also damaged their 1000W power supply.

Another RTX 5090 connector melts down, reportedly taking a Redditor's PSU with it : Read more

I've been using an MSI GeForce RTX 5090 Gaming Trio OC for more than a month now, along with the MSI power cable that came with it. Haven't encountered any issues thus far.

P.S. I know it's off topic, but, for anyone reading this: using a 1000W PSU with this specific GPU is a BAD idea. My computer would shut down and restart every time I tried to run Heart of Chornobyl or certain 3DMark benchmarks. Bought a 1200W PSU a few weeks back and everything's fine now.
 
The PCIe card spec is the PCIe CEM (Card Electromechanical) specification <-- not just a connector spec. It has all kinds of electrical requirements for PCIe cards, and the CEM spec is exactly where this circuit requirement would be documented as a recommended implementation.
Right. I was talking specifically about the 12VHPWR spec. I didn't think it could include things outside the scope of the 12VHPWR itself, which shunt resistors on the GPU PCB would be. Maybe I'm wrong...?

Whether they can modify the spec itself, retract the spec entirely, or NVIDIA can add an addendum to all their cards' power requirements (fat chance of that one), the situation definitely needs to be corrected.
 
Sadly, they will not change the connector's failsafes until it causes a person's house to burn down.

All the GPUs need is the ability to tell whether each pin is within spec and not carrying an out-of-spec amount of current, which is the entire issue.
 
Uh, so, I'm still rocking my 1060 6GB, unsure why some people buy a new GPU every year or two. One can play most games, such as RDR2 and Witcher 3, even with a 1050 Ti: mid-to-high settings, 30-45 fps, easily. Tweaking graphics settings is still an art. This isn't an iPhone. There are sliders, and one should claw their way through them to discover/experience speed, temps, and quality if one wishes to get their bang for their buck. But hey, who am I to judge.
You don't understand because you are used to the low frame rate and low graphics quality of your old games. It's not easy to understand when you don't know any better.

But at some point in their life, some people start making a decent amount of money and can afford to finally enjoy playing games at a decent resolution and frame rate.

I don't need to worry about the art of tweaking settings anymore. I set everything to max (sometimes with some help from DLSS) and can enjoy any game at 4K 100+ fps. And yes, you can absolutely notice the difference between 60 and 100+ fps, despite what some people like to pretend. I am now unable to play at 60 fps; it feels awful. So imagine your 30-45 fps. No thank you.

I would certainly not spend $2,000 for a 15-20% performance uplift (which is why I'm skipping the 5000 series), but if someone has the money and wants those extra fps, then it's their right to do so.
 
I don't need to worry about the art of tweaking settings anymore. I set everything to max (sometimes with some help from DLSS) and can enjoy any game at 4K 100+ fps. And yes, you can absolutely notice the difference between 60 and 100+ fps, despite what some people like to pretend. I am now unable to play at 60 fps; it feels awful. So imagine your 30-45 fps. No thank you.

I would certainly not spend $2,000 for a 15-20% performance uplift (which is why I'm skipping the 5000 series), but if someone has the money and wants those extra fps, then it's their right to do so.

Just out of sheer curiosity; which GPU are you currently using?
 
I think it also depends on what folks are playing/running. I recall seeing a post on TPU suggesting a workaround: power-limiting the card so it never draws over 400-450W.
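For anyone wanting to try that workaround: `nvidia-smi -pl 450` (run as admin/root) sets the limit from the command line, and the same thing can be scripted via NVML. A minimal sketch, assuming the nvidia-ml-py (pynvml) bindings; the 450 W figure is just the number from that TPU post, not an official recommendation:

```python
# Sketch: cap the GPU's power limit via NVML, the library behind nvidia-smi.
# Requires the nvidia-ml-py package and administrator/root privileges;
# the limit resets on reboot unless reapplied.
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system

# NVML works in milliwatts; clamp the target to what the board allows.
target_mw = 450_000
lo_mw, hi_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)
pynvml.nvmlDeviceSetPowerManagementLimit(handle, max(lo_mw, min(target_mw, hi_mw)))

pynvml.nvmlShutdown()
```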

Just taking a guess here, but the "Well, I'm not having any issues" folks are probably not pushing their cards hard enough. There are so many different combos of hardware and software out there... everyone can't be expected to play the same titles at the same settings.
For those that manage to push it hard enough... POOF.
 
Just taking a guess here, but the "Well, I'm not having any issues" folks are probably not pushing their cards hard enough. There are so many different combos of hardware and software out there... everyone can't be expected to play the same titles at the same settings.

Not applicable here.

I've tried everything: from Avatar's Unobtainium settings, Star Wars Outlaws settings (which, by the way, can sometimes consume more than 21 GB of VRAM) and Alan Wake at 4K Ultra PT, to running Assassin's Creed Mirage, the Resident Evil 4 remake and Far Cry 6 at 200% image quality (or resolution scale, if you like).

Far Cry 6, specifically, requires 25 GB of VRAM at max settings.

How much more pushing than that would you need? 🤣
 
Not applicable here.

I've tried everything: from Avatar's Unobtainium settings, Star Wars Outlaws settings (which, by the way, can sometimes consume more than 21 GB of VRAM) and Alan Wake at 4K Ultra PT, to running Assassin's Creed Mirage, the Resident Evil 4 remake and Far Cry 6 at 200% image quality (or resolution scale, if you like).

Far Cry 6, specifically, requires 25 GB of VRAM at max settings.

How much more pushing than that would you need? 🤣
That doesn't say anything about watts and current being drawn through these connectors and cables, which is what's burning them up.
 
"As of now, the only foolproof method to secure your RTX 50 series GPU is to ensure optimal current draw through each pin. You might want to consider Asus' ROG Astral GPUs as they can provide per-pin current readings, a feature that's absent in reference RTX 5090 models."

Alternatively, don't buy an Nvidia GPU until they (Nvidia and its board partners) properly resolve this design issue. Both iterations of the 12V connector have serious design flaws in that the supporting hardware on the cards does not balance the load. Anything drawing close to 600 watts doesn't strike me as being very energy efficient.
I can imagine your paranoid Astral user staring at the per-pin current readings every second their card is in use (good grief).

Just an FYI, an Astral 5090 was one of the few cards that had a confirmed case of a melted connector. Dunno if he/she was monitoring the per-pin currents at the time.

The best way to mitigate this right now is to use a WireView Pro. It monitors the connector and will alarm when it gets too hot. Simply replacing the cable would likely correct the issue.
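The alarm side of that is simple enough to sketch. A minimal example, where `read_connector_temp()` is a hypothetical stand-in for the sensor the WireView Pro provides in hardware, and the threshold is made up:

```python
# Sketch of an over-temperature alarm in the spirit of what a WireView Pro
# does in hardware. read_connector_temp() is a hypothetical sensor read.
import time

ALARM_TEMP_C = 70.0  # made-up threshold; pick per your comfort level

def read_connector_temp() -> float:
    """Placeholder for a thermocouple/sensor read at the connector."""
    raise NotImplementedError("hardware-specific")

def monitor(poll_seconds: float = 1.0) -> None:
    # Poll the connector and complain loudly when it runs hot.
    while True:
        if read_connector_temp() >= ALARM_TEMP_C:
            print("ALARM: 12VHPWR connector too hot, stop the load!")
        time.sleep(poll_seconds)
```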
 
I wonder what causes most pins of the 12VHPWR to lose connection, leaving only a few pins to deliver the power? That didn't happen with the older power connectors. Maybe they should make the 12VHPWR connector bigger.