Just run some 10 gauge from the PSU to the GPU. Done.
> I think the proposed solution overcomplicates the matter while not addressing the root cause. My takeaway from Roman's (aka der8auer) assessment is that the key problem is that a single power connector is not an ideal way to deliver such a high power load. Specifically, there seems to be no way to balance how much current goes through any single strand of the cable, and the simple fix is to use two connectors instead of one. That may not directly solve the balancing within a single cable, but it would surely reduce the strain per cable while leaving a good margin for error.

Sure, but since Nvidia shows no hint of admitting fault and fixing the root cause on their side, all other parties can do is apply these sorts of band-aid fixes to protect consumers.
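Rough numbers on the per-wire strain point, if anyone wants them. The ~9.5 A per-terminal figure is the commonly quoted ballpark for these Micro-Fit-style pins, and the 600 W draw is a worst-case assumption, so treat this as purely illustrative:

```python
# Back-of-the-envelope per-wire current for a 12V-only GPU connector (illustrative assumptions).
POWER_W = 600             # assumed worst-case draw over the connector
VOLTAGE = 12.0
WIRES_PER_CONNECTOR = 6   # six +12V conductors in a 12VHPWR / 12V-2x6 cable
PIN_RATING_A = 9.5        # commonly cited per-terminal rating (assumption, not spec text)

total_current = POWER_W / VOLTAGE                              # = 50 A
per_wire_one_conn = total_current / WIRES_PER_CONNECTOR        # ≈ 8.3 A per wire
per_wire_two_conn = total_current / (2 * WIRES_PER_CONNECTOR)  # ≈ 4.2 A per wire

print(f"total: {total_current:.1f} A")
print(f"one connector:  {per_wire_one_conn:.1f} A/wire "
      f"({per_wire_one_conn / PIN_RATING_A:.0%} of the assumed pin rating)")
print(f"two connectors: {per_wire_two_conn:.1f} A/wire "
      f"({per_wire_two_conn / PIN_RATING_A:.0%} of the assumed pin rating)")
# If the current doesn't split evenly, one wire can carry a multiple of the average;
# with two connectors the same imbalance starts from a much lower baseline.
```

The point being: even a perfectly seated single connector already runs close to the assumed pin rating, so any imbalance eats the remaining margin fast.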
I don't disagree. However, Nvidia can get away with anything because their supporters will back them regardless of poor performance or hazards. The decision to make the power delivery look "clean and elegant" comes at the cost of a potential fire hazard. In two and a half decades of fiddling with computer hardware, I never ran into a melting power connector, not with the original Molex plugs and not with the 8-pin connectors, until now. I understand that power requirements have increased significantly over the years, but is it really that difficult to make it less risky to use?
Completely agree with your view. Back then you really had to abuse an 8-pin to get it charred or melted. I've seen multiple cases of people seating the 6+2-pin with the wrong alignment, leaving the 2-pin half plugged, and the card just became unstable rather than melting, and the mistake was visually obvious. Now the 12VHPWR standard and 12V-2x6 are literally a dumpster fire, with blind fans defending them for no reason.
> That is A solution, but it is not the CORRECT solution.

I am sorry to inform you that none of us here decides what the correct solution is.
> Why is this overcomplicated solution even being proposed? Putting on thicker-gauge wire and fixing the solder joints at the terminals would fix 99% of the issues.

Sadly, because the plug spec cannot fit any thicker wires, and the Molex plug can't be soldered onto… so here you go.
Actually, IMO this part is still the ATX 3.x standard's fault: the connector only allows the cable to carry so much within spec, so there is practically no headroom and no balancing requirement built into the standard itself.
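To put a rough number on the headroom point. The per-pin ratings below are the commonly quoted ballpark figures for Mini-Fit Jr and Micro-Fit style terminals, not official spec values, so this is a sketch rather than gospel:

```python
# Rough headroom comparison: approximate electrical capacity vs. rated connector power.
def headroom(spec_watts, pins_12v, amps_per_pin, volts=12.0):
    """Return how many times the assumed pin capacity exceeds the rated power."""
    capacity_watts = pins_12v * amps_per_pin * volts
    return capacity_watts / spec_watts

# 8-pin PCIe: 150 W rating, 3 x 12V pins, ~8 A per pin assumed  -> roughly 1.9x headroom
print(f"8-pin PCIe:        ~{headroom(150, 3, 8.0):.1f}x")
# 12VHPWR / 12V-2x6: 600 W rating, 6 x 12V pins, ~9.5 A assumed -> roughly 1.1x headroom
print(f"12VHPWR / 12V-2x6: ~{headroom(600, 6, 9.5):.1f}x")
```

Whatever the exact per-pin figures are, the gap in margin between the old and new connectors is the part that matters.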
As for why Nvidia removed the per-rail safety splitting, one conspiracy theory circulating is that Nvidia realized that if such a feature were implemented, more RMA reports would surface showing the high failure rate of the new connector at 4090-level power draw, so they decided to screw us and blame user error instead of blaming themselves and having to push an ATX 4.0 right after the 3.0 standard.
> A proper power management system on the card. You can read, go read the comments.

I've already agreed with that and commented on it above. You can read too, lol. What you wrote was just ambiguous; were we supposed to read your mind? That said, it is the best solution, but that doesn't mean we should stop there.
> The best solution IMO is to avoid these cards and just go with AMD instead. I've decided that my next build will be AMD CPU/GPU. Intel has dropped the ball in both design and performance, and Nvidia are certified lunatics with their overpriced and faulty designs.

But it now seems the 9070 will not really be priced midrange... the Radeon division is still the good old never miss an opportunity to...
My only concern is whether this is a wake-up call for all high-power GPUs with multi-wire power connectors. I've yet to see a GPU that can monitor or load-balance the current distribution across the ground wires.
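For what it's worth, the monitoring side isn't conceptually hard if the board had a shunt per pin. Here's a minimal sketch of what the logic might look like; every name, threshold, and the per-pin sensing itself is hypothetical, and nothing below reflects how any actual GPU's power management works:

```python
# Hypothetical per-wire current check: warn or throttle when one conductor
# carries far more than its share. The current readings are assumed to come
# from per-pin shunt sensing that today's cards generally don't have.
PIN_LIMIT_A = 9.5        # assumed safe per-terminal current
IMBALANCE_RATIO = 1.5    # flag if one wire carries 1.5x the average

def check_wires(currents_a: list[float]) -> str:
    avg = sum(currents_a) / len(currents_a)
    worst = max(currents_a)
    if worst > PIN_LIMIT_A:
        return "THROTTLE: single wire over absolute limit"
    if avg > 0 and worst > IMBALANCE_RATIO * avg:
        return "WARN: load imbalance, check connector seating"
    return "OK"

# Evenly shared load vs. a badly seated connector dumping current onto a few pins.
print(check_wires([8.3, 8.3, 8.3, 8.3, 8.3, 8.3]))      # OK
print(check_wires([11.8, 11.8, 11.8, 11.8, 2.0, 0.1]))  # THROTTLE
```

The hard part isn't the firmware logic, it's that the board would need a sense element per conductor instead of lumping all the pins onto one plane.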
It will be difficult. After all of this, what consequence has Nvidia faced? Their stock rises through the sky.
> How about Nvidia recall their dangerously faulty cards and implement proper load balancing across each wire? Even their 3000-series load balancing implementation was flawed, as it only balanced the +12V wires and not the ground wires. I bet if you cut five of the six ground wires on a 3000-series card, the remaining wire would still dangerously overheat.

But that would require them to admit fault.
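Rough numbers on that cut-wire thought experiment, assuming roughly a 350 W draw through the connector and the same ~9.5 A per-terminal ballpark as above (both assumptions, not measurements):

```python
# If 5 of 6 ground return paths were severed, all return current would flow through one wire.
POWER_W = 350        # assumed 3000-series-class board power through the connector
VOLTAGE = 12.0
PIN_RATING_A = 9.5   # assumed per-terminal rating

current = POWER_W / VOLTAGE   # ≈ 29 A through a single ground conductor
print(f"~{current:.0f} A through one wire, "
      f"~{current / PIN_RATING_A:.1f}x the assumed per-pin rating")
```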
They need to do it properly; this flaw affects all multi-wire GPUs, even the ones with the old PCIe power connectors.
> Here is another brilliant solution: downclock it.

You're right, get it under 75 W and the PCIe slot will provide enough power! Problem solved!