News Another RTX 5090 connector melts down, reportedly taking a Redditor's PSU with it

That doesn't say anything about watts and current being drawn through these connectors and cables, which is what's burning them up.

Your whole argument was about titles and game settings that would push a 5090 to its limits... so I mentioned certain ultra settings for some of the most demanding games we currently have.

Sure, I can't mention specific power draws, because I'm only monitoring FPS while I'm gaming.

But if strenuous real world conditions don't say anything about the possibility of a power connector burning, then i don't know what does.

My apologies if I misunderstood your argument, but you seem to imply that, under certain circumstances, every 5090 connector would burn... which is why it would be nice if you gave us your notion of pushing a 5090 hard enough.
 
And on the PSU side as well? Good luck.
I am not aware of any melted RTX 30x0 cards. I'm sure there's one, maybe three. ??

If the shunts prevent the melting on the GPU side, there shouldn't be any reason it would cause melting on the PSU side? Were PSUs melting with RTX 30x0 cards, and did I miss the dozens and dozens of stories about it? I could have missed it, so please elaborate.
 
The SF1000 is supposed to be ATX 3.1 certified. I thought that meant it used the new 12V-2x6 instead of the 12VHPWR, but it still says 12VHPWR on the website. Maybe the site just needs updating...?
The cable didn't change with the new spec, just the header. The Corsair PSU uses 8-pin connectors on the PSU end for its 12-pin cable, so 12VHPWR vs. 12V-2x6 doesn't make any difference for this PSU.
 
Right. I was talking specifically about the 12VHPWR spec. I didn't think it could include things outside the scope of the 12VHPWR itself, which shunt resistors on the GPU PCB would be. Maybe I'm wrong...?

Whether they modify the spec itself, retract it entirely, or NVIDIA adds an addendum to all their cards' power requirements (fat chance of that one), the situation definitely needs to be corrected.
There is no separate 12VHPWR spec, it's just one of many sections within the PCIe CEM spec. I see no reason why PCI SIG couldn't mandate a current monitoring scheme in the spec if they were so inclined.
 
Uh, so I'm still rocking my 1060 6GB, unsure why some people buy a new GPU every year or two. One can play most games, such as RDR2 or The Witcher 3, even with a 1050 Ti: Mid to High settings, 30-45fps, easily. Tweaking graphics settings is still an art. This isn't an iPhone. There ain't sliders, and one should crawl their way out to discover/experience speed, temps, and quality if one wishes to get their bang for their buck. But hey, who am I to judge.
Because some people WANT to experience more than 720p resolution. And some WANT to play at more than 30fps.

I personally will NOT play a game at 30fps! I physically can't stand it. I get what I'd describe as motion sickness, especially when playing on a large screen.
 
Am I wrong, or have a number, possibly most, of the 5090 melting reports been from SFF builds? I'm wondering if high case temps are a factor, pushing something marginal into failure.
I suspected this from the start. It's likely that the spec, even in a best-case scenario, doesn't run cool to begin with (IIRC, in the early reviews or investigations of melting-gate 2.0, temps as high as 70°C+ were reported on the connector side, with amp distribution being normal).

With that, an SFF case will absolutely raise the temp further, possibly 20°C+ during lengthy gaming/AI/video encoding etc. So if anything goes marginally wrong, SFF builds will be way more likely to melt. And maybe it's the good old "time will tell" thing... just like smoking will absolutely cause cancer, but not everybody will get cancer, less so in the first few years.
 
Your whole argument was about titles and game settings that would push a 5090 to its limits... so I mentioned certain ultra settings for some of the most demanding games we currently have.
Aye, but that doesn't automatically mean the card draws the same power in each of those scenarios. It just can't, because the games aren't coded the same...

My apologies if I misunderstood your argument, but you seem to imply that, under certain circumstances, every 5090 connector would burn... which is why it would be nice if you gave us your notion of pushing a 5090 hard enough.
Maybe start by finding out what some of these folks were playing when things burned up, then look into the card's power-draw curve and test those titles at lower power limits.
 
I wouldn't pay premium price for poor quality hardware. With missing ROPs and bad drivers too. That's why I hesitated on 5090.
 
Am I wrong, or have a number, possibly most, of the 5090 melting reports been from SFF builds? I'm wondering if high case temps are a factor, pushing something marginal into failure.
I don't know if SFF have been more susceptible or not; if they have been, case temps could be a factor, but it could also be that the restrictive space ends up putting more pressure on the cable/connector.

When the connector was first trialed, on the 3090Ti/3090/3080Ti/3080 FE cards, NVIDIA put three shunt resistors (one per 12V pair) on the card PCB itself. We had no issues because the card just wouldn't power on (or would power itself off) if the load was significantly unbalanced.... NVIDIA could go back to the 30x0 series design and start putting a shunt resistor on the PCB per pair of 12V pins, on all their FE cards. They could put out an addendum for their card designs stating that shunt resistors are required for all xx90 and xx80 class cards. Why don't they...?
My pet theory is that, at the higher power draws of these latest cards combined with the low safety margin of 12VHPWR, there'd be so many occurrences of 4090/5090 cards shutting off due to load imbalance that the connector design couldn't be described as anything other than broken. For every melted card there must be plenty more which are imbalanced enough that the shunt design would shut them off.

Given a choice between (i) having high-end cards continually quitting left, right and centre or (ii) leaving high-end cards secretly running some pins very hot and riding out the % of those running hot enough to cause catastrophic failure, it feels like NVIDIA might have opted for (ii).
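The 30-series safety scheme being discussed (one shunt per 12V pair, power off on significant imbalance) can be sketched as a toy model. The pair count matches the six 12V pins of 12VHPWR grouped in threes, but the trip threshold here is an illustrative assumption, not NVIDIA's actual value:

```python
# Toy model of per-pair current monitoring via shunt resistors.
# 12VHPWR carries 12 V over six power pins; the 30-series FE design
# grouped them into three pairs, each sensed by its own shunt.
NOMINAL_PAIR_CURRENT_A = 50.0 / 3   # 600 W / 12 V = 50 A, split over 3 pairs
IMBALANCE_LIMIT = 0.30              # assumed: trip if a pair runs >30% over nominal

def should_shut_down(pair_currents_a):
    """Return True if any sensed pair carries a dangerously unbalanced share."""
    for amps in pair_currents_a:
        if amps > NOMINAL_PAIR_CURRENT_A * (1 + IMBALANCE_LIMIT):
            return True
    return False

# Balanced load: ~16.7 A per pair, no trip.
print(should_shut_down([16.7, 16.6, 16.7]))   # False
# One bad contact forces current onto the other pairs: trip.
print(should_shut_down([25.0, 24.0, 1.0]))    # True
```

That second case is exactly the "pet theory" scenario: the card would keep running on a shunt-less design, just with two pairs quietly overloaded.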
 
I don't know if SFF have been more susceptible or not; if they have been, case temps could be a factor, but it could also be that the restrictive space ends up putting more pressure on the cable/connector.


My pet theory is that, at the higher power draws of these latest cards combined with the low safety margin of 12VHPWR, there'd be so many occurrences of 4090/5090 cards shutting off due to load imbalance that the connector design couldn't be described as anything other than broken. For every melted card there must be plenty more which are imbalanced enough that the shunt design would shut them off.

Given a choice between (i) having high-end cards continually quitting left, right and centre or (ii) leaving high-end cards secretly running some pins very hot and riding out the % of those running hot enough to cause catastrophic failure, it feels like NVIDIA might have opted for (ii).
Feels like the case, especially since they dominate the market and face basically no consequences even if they give you a big "F you loser" every gen... they can just handle RMAs (or even decline them) without issue, compared to a card with a safety feature shutting down left, right, and center.

But after all these years it feels to me like it's more about ego than engineering. I'm pretty sure if they had just rated the connector for 300W instead of 600W (turning what would be 4x 8-pin into 2x 12VHPWR), it would be good and safe to go. They just went too far and don't dare admit fault after the initial issues broke loose.
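For a rough sense of the numbers behind that 300W-vs-600W argument, here is some back-of-the-envelope per-pin math. The ~9.5 A per-pin rating is a commonly cited figure for these terminals and is an assumption here, not a quoted spec value:

```python
# Back-of-the-envelope per-pin current for 12VHPWR (illustrative figures).
VOLTAGE = 12.0
PINS = 6            # six 12V current-carrying pins
PIN_RATING_A = 9.5  # assumed per-pin terminal rating (commonly cited figure)

for watts in (600, 300):
    per_pin = watts / VOLTAGE / PINS          # amps per pin, perfectly balanced
    margin = PIN_RATING_A / per_pin           # headroom vs. the pin rating
    print(f"{watts} W -> {per_pin:.2f} A/pin, {margin:.2f}x pin rating")
# 600 W -> 8.33 A/pin, 1.14x pin rating
# 300 W -> 4.17 A/pin, 2.28x pin rating
```

Under these assumptions, a 600W load leaves only ~14% headroom per pin even with perfect current sharing, while halving the rating to 300W roughly doubles the margin, which is the safety argument being made above.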