News PCIe 5.0 Power Connector Delivers Up To 600W For Next-Gen AMD, Nvidia GPUs

I disagree kinda... I buy high end and I generally upgrade the gpu every year or 2 because I play at 4k and want a good frame rate with all the bells and whistles enabled in game.

That being said, while I don't want my electric bill going up any more than it already is, and I don't want tons of extra heat dumped into the room, I also don't want to have to buy a new PSU to replace my two-month-old one a year from now because of some new pin layout that I can't get an adapter for quickly and easily.
Cutting edge and high performance cost $$.
 
You aren't wrong about liquid cooling being required for any high end gpu if they plan on using much more power than they are right now. Otherwise the coolers will be so big you won't be able to use any of the other pcie slots, and the fans will probably sound like something Alienware built right before they were bought by Dell... which means insanely loud at all times.
The 295X2 was a 500W part and used a two-slot reference HSF, so 600W isn't a huge step beyond that as far as HSF size is concerned, and the 3000FE-style through-flow design will likely fare much better there. As I wrote previously, the only real challenge here is that there is 1x450-500W die instead of 2x200-225W ones, so you've got to remove heat from the core twice as fast. It may need some form of forced convection inside the heat pipes to keep up with the power density. Full liquid cooling is only necessary if you want to directly dump the heat elsewhere outside the case.
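For a rough sense of the scale involved, here is a back-of-the-envelope heat flux comparison (the die areas below are illustrative assumptions, not actual product specs):

```python
# Back-of-the-envelope heat flux comparison: one big die vs. two smaller ones.
# Die areas below are illustrative assumptions, not actual product specs.

def heat_flux(power_w, die_area_mm2):
    """Average heat flux in W/mm2 across the die."""
    return power_w / die_area_mm2

# Hypothetical dual-GPU card: two ~220 W dies of ~350 mm2 each
dual_flux = heat_flux(220, 350)

# Hypothetical single-die 500 W flagship of ~600 mm2
single_flux = heat_flux(500, 600)

print(f"dual-GPU die:   {dual_flux:.2f} W/mm2")
print(f"single big die: {single_flux:.2f} W/mm2")
# The single die concentrates roughly 30% more heat per unit area,
# and all of it has to pass through one cold plate / heat pipe bundle.
```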

I disagree kinda... I buy high end and I generally upgrade the gpu every year or 2 because I play at 4k and want a good frame rate with all the bells and whistles enabled in game.
HWUB did a graphics quality comparison between mid, high and ultra last week. In most games they looked at, the quality differences are practically indistinguishable during active game play. In some of them, they even had a hard time spotting the differences looking at side-by-side recorded videos frame by frame. Their conclusion is basically that Ultra is pointless for playing, only useful for making screenshots and stress-testing.
 
Well, it is PCIe Gen5 ... expect dual-GPU cards to saturate that kind of bandwidth ... hence the need for a higher power standard. But I really want to see the onboard power for the x16 slot increase from 75 watts to 300 watts, even if it means a longer slot.
 
Well, it is PCIe Gen5 ... expect dual-GPU cards to saturate that kind of bandwidth ... hence the need for a higher power standard. But I really want to see the onboard power for the x16 slot increase from 75 watts to 300 watts, even if it means a longer slot.
I doubt the PCIe slot power budget will get increased until PCIe gets a 12VO refresh with a new connector to better accommodate gen5+, and when it does, I doubt they will push it much beyond 150W to keep the number of extra 12V cables between the PSU and motherboard manageable. Once you pass 150W for a GPU, there is no avoiding a bunch of extra cables one way or the other, so you may as well connect directly to the PSU and avoid the extra connector and copper plane losses.
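To put a rough number on those connector and copper plane losses, here is a quick I²R estimate; the milliohm figures are assumptions picked purely for illustration:

```python
# Rough I^2*R loss estimate for routing GPU power through the motherboard.
# Resistance values are illustrative assumptions, not measured figures.

power_w = 300                 # hypothetical slot-delivered GPU power
voltage = 12.0                # main rail
current = power_w / voltage   # 25 A

extra_connector_mohm = 2.0    # assumed extra PSU-to-board connector resistance
plane_mohm = 3.0              # assumed copper plane path to the slot

loss_w = current**2 * (extra_connector_mohm + plane_mohm) / 1000
drop_v = current * (extra_connector_mohm + plane_mohm) / 1000

print(f"current: {current:.0f} A, extra loss: {loss_w:.1f} W, drop: {drop_v:.2f} V")
# ~3 W wasted and ~0.12 V lost before the GPU's VRM even sees the power,
# which is part of why cabling straight from the PSU is attractive past ~150 W.
```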

I'm surprised they have managed to push PCIe from 2.5Gbps to 32Gbps using the same 20-year-old electro-mechanical slot design.
 
That's going to be a no from me. I'll stick to external connectors for power; I don't want a fried main board and other components, I'd rather have a fried PSU.

On that same note, that's the main reason I bought an AMD card this time around, which is ironic considering that until the 6000 series, AMD was the more power hungry option.
 
Here we are, still blaming 3rd world countries for not having efficient energy consumption and pointing fingers at how burning coal is the biggest reason for global warming.

I am not a rocket scientist, but doesn't consuming all this energy ADD to global warming too? Anyone who's ever built a PC can tell you how the cooling system works: all the heat generated by the CPU/GPU simply goes into the atmosphere, it doesn't magically "disappear".

All these fancy smart gadgets, phones, tablets, watches, every home appliance getting "smart", heck even whole houses getting smart, electric cars (did you REALLY think re-charging those batteries is harmless?), etc.

Who really is the actual cause of "global warming" (if it is a thing to begin with)?
 
If the graphics cards of the future receive their entire power draw from the motherboard, won't that shorten the motherboard's life?

No, they will still be a separate plug mounted to the graphics card, same as they are now ...... A PCIe connector would have to add a dozen or more pins to the standard to carry that kind of power through a PCIe slot .... Those are some pretty small traces and connections, and they would wither under that kind of current through the existing power pins.
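A rough sanity check of the pin-count problem, assuming edge-connector contacts rated on the order of 1 A each (the exact rating varies by connector spec, so treat these as illustrative numbers):

```python
# Rough estimate of how many extra 12 V edge-connector pins a high-power
# slot would need. The per-pin rating is an assumption for illustration.

def pins_needed(power_w, voltage=12.0, amps_per_pin=1.1):
    current = power_w / voltage
    return current / amps_per_pin

for target in (150, 300, 600):
    print(f"{target} W -> ~{pins_needed(target):.0f} supply pins (plus matching grounds)")
# 150 W -> ~11 pins, 300 W -> ~23 pins, 600 W -> ~45 pins,
# roughly doubled again once return (ground) pins are counted.
```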

I can see there are no actual EEs on this forum ......
 
People who can afford to buy $1000+ "mid-range" GPUs without worrying about the up-front cost probably don't care much about power draw.

For more sane people looking to spend $200-300 on a GPU, power draw would be much lower if AMD and Nvidia could be bothered to make anything for that price range. Right now though, their focus is on pushing higher-end SKU prices to the moon while they sell out of everything they can get out of the fabs.

As long as the demand is high they are going to sell their most profitable GPUs and why wouldn't they?

There is a huge demand for minimum-wage food service and retail jobs right now, so would you quit your job that pays 3 to 4 times more just so everyone else can get cheap fast food? Of course you would not, and neither would Nvidia or AMD ... Of course they are going to sell their most profitable devices until the demand for them goes down, and then they will start introducing more mid-range and budget devices. You would too if you were in their shoes.
 
The four little pins should include at least a +-Vsense pair to monitor cable and connector voltage drop and shut down the PSU if it becomes too large, indicating a cable or connector fault. The second pair could be any of a number of things from dumb detect pins to I2C or similar for the PSU and GPU to negotiate power limits, no idea how fancy they are going this time around.
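A minimal sketch of the kind of supervision a ±Vsense pair enables, assuming the PSU compares its own output against the remote sense voltage at the GPU end; the function name and threshold are hypothetical:

```python
# Hypothetical PSU-side check using a remote +/- Vsense pair on the GPU
# connector. Threshold and example values are made up for illustration.

MAX_DROP_V = 0.3   # assumed allowable cable + connector drop at full load

def check_cable_drop(psu_output_v: float, remote_sense_v: float) -> bool:
    """Return True if the cable/connector voltage drop is within limits."""
    drop = psu_output_v - remote_sense_v
    return drop <= MAX_DROP_V

# Example: PSU outputs 12.10 V, but the GPU end of the cable only sees 11.65 V.
if not check_cable_drop(12.10, 11.65):
    print("Excessive cable/connector drop detected -> shut down the 12 V rail")
```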

Yeah they are definitely not power connectors but some sort of sensing or feedback .... Probably to keep games like New World from blowing up the VRMs ....
 
Ok. I hope this doesn't encourage the release of gpus with that kind of power draw as the norm.
Some people already complain about specific 30 series cards heating up their rooms.
Like seriously, it's a 400-ish watt gpu with transient spikes into 500-ish watts or more... what were you expecting? And no, liquid cooling does not make that energy just go away, which I think some people actually seem to believe...
Well, liquid cooling makes the heat go away from your PC to the ambient environment (your room); this is the whole point of convection (I guess people forget that thermal energy doesn't just disappear). But thermodynamics aside, those power hungry GPUs are a direct result of Samsung's poor "8nm" process in combination with Nvidia's Ampere architecture. Look at the A-series cards (formerly known as the Quadro lineup): not the most tame in terms of power draw, but far more reasonable thanks to using TSMC 7nm. For the next lineup, Nvidia needs to go back to power-efficient TSMC or we will truly have 500-watt GPUs; significantly improving the architecture would also help.
 
Well, liquid cooling makes the heat go away from your PC to the ambient environment (your room); this is the whole point of convection (I guess people forget that thermal energy doesn't just disappear).
Please don't say they make heat go away.
They relocate the point where the energy used is exchanged for heat, which ultimately makes its way back to the PC (given enough time) if the user doesn't have some means of guiding the heat out of their room, or brute forcing things with A/C.


those power hungry GPUs are a direct result of Samsung's poor "8nm" process in combination with Nvidia's Ampere architecture.
That, and Nvidia's software-based gpu scheduler (which they've been using since the GTX 600 series) as opposed to AMD still using a hardware-based one.
That software scheduler has been shown to make cpus work a little harder and thus increase the power they use, as well as increasing the probability of cpu core/thread bound scenarios; many games are still bound by a single one.
Users and reviewers would be deceiving themselves if they only looked at gpu power consumption.
 
Yeah they are definitely not power connectors but some sort of sensing or feedback .... Probably to keep games like New World from blowing up the VRMs ....
The PSU cable sense lines are there to keep the PSU's connectors and wiring from becoming a major fire hazard when upwards of 55A are passing through them, I doubt the sense pins add anything to VRM protection. The GPU's power management and monitoring circuitry, firmware and drivers are the parts that are responsible for keeping the GPU and its VRM safe.

IMO, they should have used an XT90 connector modified with a pair of sense pins for future-proofing up to 1kW and called it a day.
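For scale, the currents involved at 12V work out as follows (the ~90A continuous figure commonly quoted for XT90 connectors is vendor dependent, so take it as a rough reference):

```python
# Current through the connector at 12 V for various power targets.
# XT90 connectors are commonly rated around 90 A continuous (vendor dependent).

XT90_RATING_A = 90

for power_w in (450, 600, 1000):
    current = power_w / 12.0
    headroom = XT90_RATING_A - current
    print(f"{power_w} W -> {current:.1f} A (headroom vs. ~90 A rating: {headroom:.1f} A)")
# 600 W works out to 50 A, and even 1 kW stays around 83 A,
# which is why a single high-current connector pair looks tempting.
```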
 
I doubt the PCIe slot power budget will get increased until PCIe gets a 12VO refresh with a new connector to better accommodate gen5+, and when it does, I doubt they will push it much beyond 150W to keep the number of extra 12V cables between the PSU and motherboard manageable. Once you pass 150W for a GPU, there is no avoiding a bunch of extra cables one way or the other, so you may as well connect directly to the PSU and avoid the extra connector and copper plane losses.

I'm surprised they have managed to push PCIe from 2.5Gbps to 32Gbps using the same 20-year-old electro-mechanical slot design.

Apple already made a slot for up to 475+75 = 550 watts of power for their GPU in their latest Mac Pro workstations, while using Gen3 PCIe. Sadly, Apple always wins ahead of the PC in design ... 300 watts total from the slot is not that big compared to that.

(Attached image: mpx_bottom.PNG)
 
Apple already made a slot for up to 475+75 = 550 watts of power for their GPU in their latest Mac Pro workstations, while using Gen3 PCIe.
It is a proprietary extension to fit in a proprietary form factor all custom-built for Apple's exclusive use where the lack of interoperability with normal hardware is a feature rather than a bug. Apple's proprietary 500W slot is more than twice as long in total as a standard PCIe slot and wouldn't even fit on a standard ATX board, you need EATX to fit one of those.

If you want to have a nearly universal internal expansion connector, it has to make sense across the entire price range that may use it. I doubt it would help many people to have the GPU market split between models that only fit EATX motherboards with 600W slots (the dangling 500W extension may mechanically interfere with other board components on non-600W motherboards) and models designed for normal motherboards using direct power cables. A bump to 150W on the other hand could be done by adding a few pins toward the IO bracket, though this still has the issue of possible mechanical interference on motherboards that cram stuff there, so you'd still need boards in 75W tab and AUX variants... or a completely new connector where backward compatibility is a non-concern.

As I have written before, all of that extra power has to be brought from the PSU to the motherboard too. So, for the sake of eliminating the 12-16 pin connector(s) on the GPU, you end up adding extra copper layers to the motherboard, a 100+ pin PCIe slot extension, a 16+ pin power connector to the motherboard clutter, and a matching cable from the PSU to the motherboard too.

More parts between the PSU and the load are just more things that can potentially go wrong. With power going straight from the PSU to the GPU, a fatal fault on the GPU may fry the GPU's PCB and possibly PSU connectors but leave everything else intact. If the GPU pulls all of its fault current through the motherboard's 12V connectors, the motherboard's power planes and the PCIe slot, a fatal fault on the GPU may now also fry the motherboard if there are any weak points along the way.

I'd pick the relative safety of giving each major load its own power cables for anything significantly in excess of 100W over the added footprint, costs and risks from needlessly routing high power through the motherboard.
 
It is a proprietary extension to fit in a proprietary form factor all custom-built for Apple's exclusive use where the lack of interoperability with normal hardware is a feature rather than a bug. Apple's proprietary 500W slot is more than twice as long in total as a standard PCIe slot and wouldn't even fit on a standard ATX board, you need EATX to fit one of those.

If you want to have a nearly universal internal expansion connector, it has to make sense across the entire price range that may use it. I doubt it would help many people to have the GPU market split between models that only fit EATX motherboards with 600W slots (the dangling 500W extension may mechanically interfere with other board components on non-600W motherboards) and models designed for normal motherboards using direct power cables. A bump to 150W on the other hand could be done by adding a few pins toward the IO bracket, though this still has the issue of possible mechanical interference on motherboards that cram stuff there, so you'd still need boards in 75W tab and AUX variants... or a completely new connector where backward compatibility is a non-concern.

As I have written before, all of that extra power has to be brought from the PSU to the motherboard too. So, for the sake of eliminating the 12-16 pin connector(s) on the GPU, you end up adding extra copper layers to the motherboard, a 100+ pin PCIe slot extension, a 16+ pin power connector to the motherboard clutter, and a matching cable from the PSU to the motherboard too.

More parts between the PSU and the load are just more things that can potentially go wrong. With power going straight from the PSU to the GPU, a fatal fault on the GPU may fry the GPU's PCB and possibly PSU connectors but leave everything else intact. If the GPU pulls all of its fault current through the motherboard's 12V connectors, the motherboard's power planes and the PCIe slot, a fatal fault on the GPU may now also fry the motherboard if there are any weak points along the way.

I'd pick the relative safety of giving each major load its own power cables for anything significantly in excess of 100W over the added footprint, costs and risks from needlessly routing high power through the motherboard.

Apple's 550 watts was just an example to show it can be done ... I still think 300 watts total can be done for the PC (225 watts above the 75 watts), and the majority of GPUs won't need more than that. Anyways, all GPUs that need 250+ watts are already longer than the slot, so this would not need more space or a larger form factor than ATX.
Moreover, 300 watts is already here on motherboards with two sockets, not a big deal at all.
 
Moreover, 300 watts is already here on motherboards with two sockets, not a big deal at all.
If by "socket" you mean CPU power connectors, those are dedicated to the CPU's VRM, they don't power anything else on the board and are mostly found on expensive boards marketed at extreme overclockers. This isn't the sort of cost burden that normal everyday-use/gaming CPUs should be saddled with. I like my motherboards to cost $100-150, not $500.

Also, as I previously mentioned, GPU manufacturers would need to create separate SKUs for slot-powered and cable-powered GPUs since the extended power tab will interfere with motherboard components on most motherboards that don't have the extended slot power.
 
It is a proprietary extension to fit in a proprietary form factor all custom-built for Apple's exclusive use where the lack of interoperability with normal hardware is a feature rather than a bug. Apple's proprietary 500W slot is more than twice as long in total as a standard PCIe slot and wouldn't even fit on a standard ATX board, you need EATX to fit one of those.

If you want to have a nearly universal internal expansion connector, it has to make sense across the entire price range that may use it. I doubt it would help many people to have the GPU market split between models that only fit EATX motherboards with 600W slots (the dangling 500W extension may mechanically interfere with other board components on non-600W motherboards) and models designed for normal motherboards using direct power cables. A bump to 150W on the other hand could be done by adding a few pins toward the IO bracket, though this still has the issue of possible mechanical interference on motherboards that cram stuff there, so you'd still need boards in 75W tab and AUX variants... or a completely new connector where backward compatibility is a non-concern.

As I have written before, all of that extra power has to be brought from the PSU to the motherboard too. So, for the sake of eliminating the 12-16 pin connector(s) on the GPU, you end up adding extra copper layers to the motherboard, a 100+ pin PCIe slot extension, a 16+ pin power connector to the motherboard clutter, and a matching cable from the PSU to the motherboard too.

More parts between the PSU and the load are just more things that can potentially go wrong. With power going straight from the PSU to the GPU, a fatal fault on the GPU may fry the GPU's PCB and possibly PSU connectors but leave everything else intact. If the GPU pulls all of its fault current through the motherboard's 12V connectors, the motherboard's power planes and the PCIe slot, a fatal fault on the GPU may now also fry the motherboard if there are any weak points along the way.

I'd pick the relative safety of giving each major load its own power cables for anything significantly in excess of 100W over the added footprint, costs and risks from needlessly routing high power through the motherboard.

Power is no big deal. It's nothing more than trace width and thickness. Your PSUs are all running traces instead of physical wires.

Btw, server boards are all running traces all along, esp those using hotswappable PSUs. You don't see cables running from PSU to CPU. Boards even provide passthrough connectors for GPUs.

An example from Tyan.

https://www.tyan.com/Barebones_GC79B8252_B8252G79V4E4HR-2T
 
If by "socket" you mean CPU power connectors, those are dedicated to the CPU's VRM, they don't power anything else on the board and are mostly found on expensive boards marketed at extreme overclockers. This isn't the sort of cost burden that normal everyday-use/gaming CPUs should be saddled with. I like my motherboards to cost $100-150, not $500.

Also, as I previously mentioned, GPU manufacturers would need to create separate SKUs for slot-powered and cable-powered GPUs since the extended power tab will interfere with motherboard components on most motherboards that don't have the extended slot power.

Adding a small slot for more GPU power will not increase the motherboard price from $100 to $500 ...
 
Btw, server boards are all running traces all along, esp those using hotswappable PSUs. You don't see cables running from PSU to CPU. Boards even provide passthrough connectors for GPUs.
Proprietary case, proprietary PSU, proprietary motherboard for the low-low starting price of about $4000. Not exactly a mainstream-friendly price point for the minor convenience of using the motherboard as a power backplane.

Adding a small slot for more GPU power will not increase the motherboard price from $100 to $500 ...
$100 motherboards often struggle with CPUs over 90W since they have the minimum amount of copper and everything else to make something that works.
You have to step up to motherboards around $200 for something that can handle 150W continuously.
If you want motherboards equipped for overclocking beyond 200W, those usually are $300-700.
If you want a motherboard capable of handling an extra 600W for GPU pass-through on top of that, you would be looking for motherboards built beyond extreme OC spec. Expect those to cost another $100 or more extra. That brings the likely average price of hypothetical boards supporting a 600W slot spec somewhere in the neighborhood of $500.

Completely pointless waste of motherboard copper in the consumer space when you can avoid it altogether by simply sticking with the inexpensive good ole AUX power cables and connectors and their new HIPWR variant.

Since the PCI-SIG now has the 600W AUX connector spec to go along with PCIe5, it probably has no plans for adding an official higher-power slot spec.
 
Proprietary case, proprietary PSU, proprietary motherboard for the low-low starting price of about $4000. Not exactly a mainstream-friendly price point for the minor convenience of using the motherboard as a power backplane.


$100 motherboards often struggle with CPUs over 90W since they have the minimum amount of copper and everything else to make something that works.
You have to step up to motherboards around $200 for something that can handle 150W continuously.
If you want motherboards equipped for overclocking beyond 200W, those usually are $300-700.
If you want a motherboard capable of handling an extra 600W for GPU pass-through on top of that, you would be looking for motherboards built beyond extreme OC spec. Expect those to cost another $100 or more extra. That brings the likely average price of hypothetical boards supporting a 600W slot spec somewhere in the neighborhood of $500.

Completely pointless waste of motherboard copper in the consumer space when you can avoid it altogether by simply sticking with the inexpensive good ole AUX power cables and connectors and their new HIPWR variant.

Since the PCI-SIG now has the 600W AUX connector spec to go along with PCIe5, it probably has no plans for adding an official higher-power slot spec.

I don't know where you get the prices from? Even MSI released HEDT motherboards with 180 watts CPU + 75 watts GPU onboard for $200 ... I still think it is doable and much cleaner for the PC setup. Keep in mind that I am not asking for 600 watts .. just 225 watts more above the 75 watts standard.
 
Keep in mind that I am not asking for 600 watts .. just 225 watts more above the 75 watts standard.
And what modern GPU are you going to power with only 300W? Only GPUs at the RTX 3070 / RX 6800 level and below; everything else will still require an AUX power cable. Most rumors about next-gen GPUs indicate they will come with substantial power draw increases, so don't be surprised if 300W becomes only sufficient for 60(0)-tier cards.

While I wouldn't be opposed to reduced wiring, the power range for GPUs is just too wide to satisfactorily cover in a cost-effective manner.
 
Please don't say they make heat go away.
They relocate the point where the energy used is exchanged for heat, which ultimately makes its way back to the PC (given enough time) if the user doesn't have some means of guiding the heat out of their room, or brute forcing things with A/C.



That, and Nvidia's software-based gpu scheduler (which they've been using since the GTX 600 series) as opposed to AMD still using a hardware-based one.
That software scheduler has been shown to make cpus work a little harder and thus increase the power they use, as well as increasing the probability of cpu core/thread bound scenarios; many games are still bound by a single one.
Users and reviewers would be deceiving themselves if they only looked at gpu power consumption.
Heat convection is used to move the heat from the heat source (in our case the chips) to the radiator, and the coolant is the medium the heat moves through; in the radiator, the heat from the fluid is exchanged with the surrounding air via radiation and conduction. The air in the room will absorb the heat and temperatures will rise until equilibrium has been reached. The equilibrium temperature depends on multiple factors, but the user does not need to use "brute force" to remove the heat unless he wants the room to equalize at a lower temperature.
So the phrase "go away" is correct, since the heat does move from your components to your surroundings and then to the outside world.
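A quick worked example of that equilibrium argument, using round numbers; the room size and the no-ventilation assumption are deliberate simplifications:

```python
# Simplified heat balance: how fast a sealed room's air would warm up
# with a 500 W PC running and no heat leaving the room.
# Room size and the "no ventilation, no wall losses" assumption are
# deliberately idealized to show why the heat has to go somewhere.

room_volume_m3 = 4 * 4 * 2.5          # 40 m^3 room
air_density = 1.2                      # kg/m^3
air_cp = 1005                          # J/(kg*K)
pc_power_w = 500

air_mass = room_volume_m3 * air_density   # ~48 kg of air
heat_capacity = air_mass * air_cp         # ~48 kJ/K

delta_t_per_hour = pc_power_w * 3600 / heat_capacity
print(f"~{delta_t_per_hour:.0f} K of air temperature rise per hour with zero heat loss")
# Real rooms leak heat through walls, doors and ventilation, so they settle
# at some higher equilibrium temperature instead of climbing forever,
# but the energy never just disappears.
```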
 
Heat convection is used to move the heat from the heat source (in our case the chips) to the radiator, and the coolant is the medium the heat moves through; in the radiator, the heat from the fluid is exchanged with the surrounding air via radiation and conduction. The air in the room will absorb the heat and temperatures will rise until equilibrium has been reached. The equilibrium temperature depends on multiple factors, but the user does not need to use "brute force" to remove the heat unless he wants the room to equalize at a lower temperature.
So the phrase "go away" is correct, since the heat does move from your components to your surroundings and then to the outside world.
"Well liquid cooling makes the heat go away from your PC to the ambient environment "

As does an air cooler.

The CPU produces the same wattage and heat. Liquid cooling just moves the transfer to a slightly different place, 6" away from the CPU. And may be slightly more efficient in doing so.
 