News New GPU Power Connector Eliminates Cables, Delivers More Than 600W

Status
Not open for further replies.
Maybe I'm just old-fashioned in that I like copper lines that deliver a lot of power to have decent cross-sectional area, but I don't like it. Will we have to plug an extra 4x 8-pins into the mobo and then have all of that EM pollution all up in there next to our RAM traces?

I'm probably just old fashioned in my thinking this will just make more problems. Just give me fat wires and big connectors.
 
So... it'll get the extra juice from the 24-pin? The EPS 8-pin? Or does it still require plugging the PCIe 8-pins or the new 12-pin into the motherboard?

I would rather Intel get a new version of ATX and work with PCI-sig to get this on their documents so it's a proper standard instead.

Personally, I don't mind the cables, but I can see why aesthetic enthusiasts would want this and all in all, I don't think it's a bad idea... Problem I'd have is the way they go about it.

Regards.
 
The specifications for a new HPCE GPU power connector have been unveiled, which allows a high-power graphics card to skip the cables and pull all the power from the motherboard.
[...]
In theory, this means this new consumer-focused HPCE connector won't succumb to similar reliability issues as the new 12VHPWR connector, since the connector has already been field-tested in the server space.
According to the images in the linked tweets, this design still involves the 12V-2X6 connector* (the updated 12VHPWR connector, tweaked so that the signal pins are set a bit further back); it just gets plugged into the motherboard rather than the card.

So it doesn't eliminate any cables. Nor does it do anything to avoid the "reliability issues" of the 12VHPWR connector, other than what is already being accomplished by the transition to the (backwards-compatible) 12V-2X6 connector. It would arguably hurt reliability by introducing another point of failure, except perhaps if the motherboard header happened to be situated in such a way as to allow a better, less stressed connection compared to the graphics card power header, depending on your particular case layout.

If the HPCE connector really is more robust, I guess there could also be an advantage in using it where you're more likely to have more mating cycles (I'm assuming graphics cards get replaced/reconnected more often than motherboards), but other than that it really does seem like its largest benefit is cable management/aesthetics.

* Or 12V8Pin, presumably for cases where less power is needed.
 
Personally, I don't mind the cables, but I can see why aesthetic enthusiasts would want this and all in all, I don't think it's a bad idea... Problem I'd have is the way they go about it.
You really don't want the "Aesthetic Enthusiasts" determining ANYTHING about PC Hardware.

We're already inundated with RGB Rainbow-Puke on everything that serves no real functional purpose other than creating a gaudy Rainbow light show.
 
You really don't want the "Aesthetic Enthusiasts" determining ANYTHING about PC Hardware.

We're already inundated with RGB Rainbow-Puke on everything that serves no real functional purpose other than creating a gaudy Rainbow light show.
RGB can be turned off, and having fewer cables is not a bad thing, overall. As I said, it'll depend on how they go about it.

Regards.
 
If it works for servers, why not?
Motherboards might need extra or thicker layers, so it might add cost here and there.
Motherboards, and likely GPUs too.
 
I would rather Intel get a new version of ATX and work with PCI-sig to get this on their documents so it's a proper standard instead.
Intel kind of missed the mark on that by giving its 12VO spec insufficient power on the main connector to ditch the EPS12 cable while continuing with 12V. I think it needed at least two extra 12V pins.

I get a feeling there will be power-efficiency pressure for 24VO PCs 10 years from now, maybe even 48VO if GaN or something beyond that manages to close the cost-efficiency gap with MOS. We'll definitely need new standards designed around native DC power distribution voltage for efficiency reasons once that happens.
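The efficiency argument for higher distribution voltages comes down to I²R losses: for the same wattage, doubling the voltage halves the current and quarters the resistive loss in the wiring. A minimal sketch of that arithmetic (the 10 mΩ round-trip cable resistance is an assumed example value, not from any spec):

```python
# Resistive loss in a power cable is I^2 * R, so for a fixed load,
# loss falls with the square of the distribution voltage.
# resistance_ohms below is an illustrative assumption.

def cable_loss_watts(load_w: float, volts: float, resistance_ohms: float) -> float:
    """I^2 * R loss for a given load power and distribution voltage."""
    current = load_w / volts
    return current ** 2 * resistance_ohms

for v in (12, 24, 48):
    loss = cable_loss_watts(600, v, 0.01)
    print(f"{v:>2} V: {600 / v:5.1f} A, {loss:7.4f} W lost in cabling")
```

For a 600 W load, moving from 12 V to 48 V cuts the current from 50 A to 12.5 A, and the cable loss in this example from 25 W to under 2 W.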
 
It's something that should have been done ages ago, especially when multi-GPU consumer setups bit the dust, but the problem is cost and reliability. In server setups it's not an issue as they're built to a standard; with consumer boards though, especially lower-end ones and ones from certain OEMs, would you really trust, say, an ASRock A730 Mini-ITX to be built to handle a 600W GPU? I wouldn't.
 
I'm hoping this new power connector on the motherboard will force motherboard manufacturers to move NVMe slots out from under the video card. Mine has one located directly underneath my video card, limiting my NVMe heatsink options...
 
I can't imagine an additional 600W of power being routed through a MB...

50 amps needs to be routed so that it doesn't interfere with existing components and is isolated. MB manufacturers will most likely botch it in favor of cost cutting.

I'm all for simplicity, but I don't know how they are going to pull off putting in three additional 4-pin headers, because we know they won't add 12VHPWR.
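For reference, the 50 A figure follows directly from Ohm's law at the 12 V rail, and it also shows what each pin of the connector has to carry. A quick sanity check (the six-power-pin count matches the 12V-2x6 layout of six 12 V pins plus six grounds):

```python
# 600 W delivered on a 12 V rail, split across the six 12 V power pins
# of a 12V-2x6-style connector.

load_w = 600
rail_v = 12
power_pins = 6

total_current = load_w / rail_v        # total current on the 12 V rail
per_pin = total_current / power_pins   # average current per power pin

print(f"total: {total_current:.1f} A, per pin: {per_pin:.2f} A")
```

That works out to 50 A total, or roughly 8.3 A per power pin, which is why the contact quality of each individual pin matters so much at these power levels.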
 
Awesome, but have they seen a Mini-ITX motherboard?
Self-punishment. Almost like bonsai for enthusiast PCs. Let's fit an RTX 4090 and a 16-core CPU into a shoebox so we can get bad thermals and it costs twice as much. It's the equivalent of shrinking 4U 8-GPU servers down to 1U so you pay more and lose future upgrades, all so it can be small. If you want a small desktop, just get a Mac mini. I have an extended ATX tower, and I wish Corsair made 6-ft-tall ATX cases.
 
I can't imagine an additional 600W of power being routed through a MB...

50 amps needs to be routed so that it doesn't interfere with existing components and is isolated. MB manufacturers will most likely botch it in favor of cost cutting.

I'm all for simplicity, but I don't know how they are going to pull off putting in three additional 4-pin headers, because we know they won't add 12VHPWR.
Agree. While this might be fine for server backplanes, I don't trust consumer motherboards to do this properly, especially not with existing ATX. If they want to make this work, they should redesign the ATX power connector; it's about time, to be honest. Multi-rail 12V is all you need. You can then have surface-mount DC-DC converters for 5V and 3.3V on the motherboard; those rails are not used much these days, as optical drives are gone and hard drives are gone, only NVMe now. The CPU can have its own multiphase power-delivery section that takes from 12V instead of 3.3V. 5V is mostly for USB power, which can have its own dedicated surface-mount converter fed from 12V.
 
I'm hoping this new power connector on the motherboard will force motherboard manufacturers to move NVMe slots out from under the video card. Mine has one located directly underneath my video card, limiting my NVMe heatsink options...
Agree, all GPUs should have a minimum of two NVMe slots. I wish they would include Thunderbolt 4 ports on all GPUs.
 
I can't imagine an additional 600W of power being routed through a MB...

50 amps needs to be routed so that it doesn't interfere with existing components and is isolated. MB manufacturers will most likely botch it in favor of cost cutting.
The same heavy copper power and ground planes that are necessary to feed 150-300A continuous to Vcore on modern CPUs can be used to carry 50A extra on 12V and ground everywhere else around the board instead of being wasted doing nothing or going grossly under-used.
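To put rough numbers on that, loss across a copper plane can be estimated from its sheet resistance (resistivity divided by thickness) multiplied by the number of "squares" the current crosses along its path. The plane dimensions below are illustrative assumptions, not taken from any real board:

```python
# Rough estimate of I^2 * R loss across a copper power plane at 50 A.
# Copper resistivity is a physical constant; thickness and the plane's
# length/width are assumed example values.

RHO_CU = 1.68e-8          # copper resistivity, ohm*m
thickness = 70e-6         # 2 oz copper layer, ~70 um thick

sheet_r = RHO_CU / thickness        # ohms per square (~0.24 mOhm/sq)
squares = 150e-3 / 50e-3            # 150 mm current path across a 50 mm wide plane
plane_r = sheet_r * squares         # total plane resistance along the path
loss = 50 ** 2 * plane_r            # I^2 * R at 50 A

print(f"sheet R: {sheet_r * 1e3:.2f} mOhm/sq, loss at 50 A: {loss:.2f} W")
```

Under these assumptions the plane dissipates only a couple of watts at 50 A, which supports the point that wide, heavy copper pours can handle GPU-class current if the layout is done properly.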
 
RGB can be turned off, and having fewer cables is not a bad thing, overall. As I said, it'll depend on how they go about it.

Regards.

I made the mistake of buying a bunch of RGB components in my last several builds. The thing that really annoys me is the Trident software used to control the lighting. To turn off the RGB on the RAM sticks, you have to have the Trident software running continuously. I noticed it was using 2-3% of my CPU (per Task Manager). I uninstalled it and installed the latest version; it takes 0.8-1.5%. Total waste of power/resources. It is small, but all the craplets add up.
It would be nice just to plug in a video card without the new connectors going to it, but as others have pointed out, you will have to connect several more power connectors to your MB. One of the builds I am working on now uses a 24-pin, two 8-pin, and a 4-pin power connector. Adding several more to the MB might get a bit cramped.
 
Agree. While this might be fine for server backplanes, I don't trust consumer motherboards to do this properly, especially not with existing ATX. If they want to make this work, they should redesign the ATX power connector; it's about time, to be honest. Multi-rail 12V is all you need. You can then have surface-mount DC-DC converters for 5V and 3.3V on the motherboard; those rails are not used much these days, as optical drives are gone and hard drives are gone, only NVMe now. The CPU can have its own multiphase power-delivery section that takes from 12V instead of 3.3V. 5V is mostly for USB power, which can have its own dedicated surface-mount converter fed from 12V.

Agreed, a perfect opportunity to release a new form-factor standard with new guidelines, redesigned from the ground up. You can't keep band-aiding old designs indefinitely.

The same heavy copper power and ground planes that are necessary to feed 150-300A continuous to Vcore on modern CPUs can be used to carry 50A extra on 12V and ground everywhere else around the board instead of being wasted doing nothing or going grossly under-used.

They could, but again, that entirely depends on manufacturers doing it right, and all boards aren't created equal. I see this as a justification for price hikes, forcing a new standard on everyone rather than offering a choice, not just for those who would benefit from this slot.

As mentioned above, a new form factor would be a better direction IMO.
 
Reminds me of the systems in the early (and sometimes not-so-early) PCIe days that sometimes had a Molex port on the board for multi-GPU.
[Images: EVGA and Gigabyte X99 SLI boards with onboard auxiliary power connectors]

But yeah, time to retire ATX; it has been around since the mid-to-late '90s. And the last time we had power-hungry hardware, BTX was created, which was a superior layout, but advances in efficiency made it obsolete before it could get any real traction.
 
Huh. More expensive motherboards may be on the way?
Curious what the addition of this connector will do to ITX boards.

RGB can be turned off, and having fewer cables is not a bad thing, overall. As I said, it'll depend on how they go about it.
The cost of the LED hardware and software support (if present) is still included in the product, whether one likes it or not.
The cost is probably insignificant, but not everyone is going to see it that way.
Gonna be one of those subjects folks will never see eye to eye on.
 
To be honest, I don't see the point of this connector...
You still need to plug cable(s) into your motherboard; the board won't generate power by itself. So at the end of the day you are not eliminating cables, just moving one (or more) to another place. And it may make cable routing harder due to the potentially thicker cable.

Furthermore, I really don't think transferring such high power through the motherboard is a good idea.
 