Can my PSU handle a GPU upgrade?

Biggie-shackleton

Feb 19, 2014
Hi guys
My current PSU is a Corsair CX500M (500w)
My hardware is:
i7 4770 CPU
Corsair H60i liquid cooler
8gb RAM
2gb AMD 7850 GPU
240gb SSD
2tb HDD

I'd like to swap the 7850 for something better; an Nvidia GTX 770 is what I had in mind. Will my PSU be okay, or do I need to upgrade that too?

Thanks
 
The AMD Radeon 7850 is a relatively light video card, rated at 130 watts stock; factory-overclocked editions can scale up to 150 watts.

You wouldn't want to go much higher than 175-200 watts for a replacement video card.
The GTX 760 is rated at 170 W, while the GTX 770 is rated at 230 W. That's starting to push it for a measly 500 W PSU.

Keep in mind the GTX 680 (the GTX 770 is essentially a rebranded 680 with a higher memory clock) is rated at only 195 W. You should be able to get away with that without upgrading your PSU. However, if you want the latest and greatest, wait for the next-gen GTX GPUs built on the Maxwell architecture. I hear they're supposed to be much more power efficient than the 600 and 700 series. Be patient and time will tell.
 
Seeing as you're running a locked processor, your PSU will have absolutely no problem coping with the load of any GPU except maybe a highly overclocked R9-290X. I'd estimate your total system gaming load with a stock 770 to be around 250W or less, so you'll also be right in the most efficient load range for your PSU.
 

The CX500M has a 38A/456W rating on the 12V rail and is a "tier-3" PSU on the Xpert list so it should be able to provide 350W on the 12V rail without breaking much of a sweat, enough to run a 250W GPU and 100W worth of other stuff off 12V.
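The 12V-rail budget described above is simple arithmetic; here's a minimal sketch of it in Python. All the numbers are the ones quoted in this thread (38 A rail rating, a hypothetical 250 W GPU, ~100 W of other 12V loads), not official measurements.

```python
# Rough 12V-rail budget check using the CX500M figures quoted above.
RAIL_12V_AMPS = 38                     # CX500M label rating on the 12V rail
RAIL_12V_WATTS = RAIL_12V_AMPS * 12    # 456 W

gpu_watts = 250        # assumed worst-case GPU draw from this thread
other_12v_watts = 100  # CPU, drives, fans, etc. on the 12V rail

total_12v = gpu_watts + other_12v_watts
headroom = RAIL_12V_WATTS - total_12v

print(f"12V rail rating:    {RAIL_12V_WATTS} W")
print(f"Estimated 12V load: {total_12v} W ({total_12v / RAIL_12V_WATTS:.0%} of rail)")
print(f"Headroom:           {headroom} W")
```

Even with the pessimistic 250 W GPU figure, the estimated load sits around 77% of the rail rating.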
 
The CX500M has a 38A/456W rating on the 12V rail and is a "tier-3" PSU on the Xpert list so it should be able to provide 350W on the 12V rail without breaking much of a sweat, enough to run a 250W GPU and 100W worth of other stuff off 12V.
In theory, you are correct. In practice, it all depends on everything else running in the system. Add a 250W GPU to a system with a 500W PSU, plus 2-3 HDDs (which run on the 12V rail), several case fans, and various other subsystems such as voltage regulators, and you will draw another 50-100W of power.

Check out this Power Supply article on Tom's for a more in-depth explanation.

MSI GeForce GTX 770 Lightning 2GB Overclocked Video Card Review
What this review found was that a bare system with a single SSD and "minimal cooling fans" plus the GTX 770 peaked at 437 watts. Sure, that's under the 500 watts your PSU can handle, but you would essentially be running the PSU at 87% of its rated power most of the time while gaming. The PSU will handle it, but for how long? Longevity will certainly suffer.

If you're willing to risk burning out your PSU, which can sometimes damage other components (motherboard, CPU, GPU, HDDs, etc.), then by all means, have at it. You've been warned.
 

For someone attempting to teach me a lesson about PSUs, you made one huge fundamental mistake: that 437W figure is MEASURED AT THE WALL, and with the PSU in that test setup being ~90% efficient, the actual output (which is what any remotely respectable PSU's power rating refers to) is less than 400W. Also, your "minimal" setup from that review uses an i7-3960X overclocked to 4.7GHz, which, together with the support components that make such an overclock possible, knocks system power up by 80-100W. The OP appears to have a non-K i7-4770.
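The wall-vs-output distinction above is just a multiplication by efficiency; a quick sketch, assuming the ~90% efficiency figure stated in this post:

```python
# Convert an at-the-wall measurement to the PSU's DC output.
# 437 W and ~90% efficiency are the figures quoted in this thread.
wall_watts = 437
efficiency = 0.90          # assumed efficiency of the review's test PSU

dc_output = wall_watts * efficiency
print(f"Estimated DC output: {dc_output:.0f} W")  # under 400 W
```

So the relevant number for judging a 500 W PSU is roughly 393 W of DC output, not the 437 W wall reading.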

OP's system with a GTX 770 should barely break 300W peak output, and even that probably won't happen often without running FurMark and Prime95 concurrently. I would expect any PSU worth its salt to operate for several years at ~60% load.

As for PSUs failing and taking other components out, that is extremely rare with any decent PSU. It is not even that common with crappy PSUs unless the user ignores odd behavior (random shutdowns or crashes) and keeps powering the system back up until catastrophic failure occurs or the system outright refuses to boot. The majority of PSU failures (probably over 90%) are output-cap failures, which is almost always a progressive issue and will rarely kill components until those capacitors have degraded much further beyond the first random reboot/shutdown event they may have caused.

My LG 204T (20" 1050p LCD) remained usable for several months after the first time it mysteriously failed to turn on on the first try, so I made the mistake of delaying the tear-down/repair until it stopped turning on altogether. As I suspected, some of the output caps had gone bad, and as expected, replacing them fixed the won't-turn-on problem, but the DVI port was fried, so I'm stuck using it with VGA input. That display is now eight years old and its backlight is starting to burn out, so I will have gotten about five extra years out of that $300 display with a $1.50 repair.

When my 24" 1200p LG LCD started showing similar symptoms, I dismantled it at my earliest convenience, again found exactly what I was expecting, replaced the caps (~$1.50 worth of 'em again), and that has given me four more years of flawless operation so far and counting. My oldest crappy PSU is a 300W T&C unit originally bundled in a $35 ATX case 12 years ago; I re-capped it when the PC started randomly rebooting, and it has been powering my living room PC 24/7 ever since.

A timely repair on crappy PSUs can drastically extend their useful life and by extension, the useful life of whatever device they may be integrated in.

But it isn't crappy PSUs we are talking about here, it is a CX500M which is generally considered a pretty good model for its price.
 
It's pretty funny reading how you think you know what you're talking about, when all you can do is make up statistics and ramble off nonsense. Working in IT, I've seen PSUs go bad, and once catastrophically fail. Unfortunately, while in theory you are right, in reality that's not always the case. If you want to give poor advice and someone follows it, that's fine, because it's not my money or my problem. I would feel sorry for the guy who listens to your crappy advice though. And for your sake, you're lucky that poor advice on here does not hold you legally liable. In the professional world, you are held liable: if I were to take your poor advice and apply it at work, I would be held responsible if one of my coworkers' PCs failed due to my negligence. It only becomes an issue if valuable data is lost or corrupted, which does happen.

But you're truly an idiot if you think you're giving good advice, InvalidError. I was never trying to teach you about PSUs. The fact that you can plug a PSU model into a website and come up with a supposed answer shows you have absolutely no capacity for qualitative and quantitative thinking (that's critical thinking, for you). That alone tells me something about you...

When you MEASURE POWER AT THE WALL, guess what: that's the power your PSU is USING! Wow, that's a hard concept for someone of your intelligence, I know...

the majority of PSU failures (probably over 90%) are output caps failures
PLEASE show me the statistical source. Oh wait, you already did. It's coming from your ass!

Did you know that 90% of statistics are pulled out of thin air??? Well, you do now, because that one is.

make it possible knock system power up by 80-100W.
And you are basing this on... what, exactly? Have you ever overclocked a system before? Short of OC'ing to the extreme, you won't increase your total system power consumption by that much. When I OC'd my system, both CPU and GPU, the total power increased by only 20-30W. Granted, it was a relatively mild 500MHz CPU OC.

Actually, if a PSU fails catastrophically, it's not the capacitors slowly fizzling out; it's the capacitors blowing out, catastrophically... That's why the word "catastrophically" is in there. If you're consistently overburdening a PSU, that greatly increases the chance that this will be the outcome. There's a reason all the GTX 770 partner manufacturers (and Nvidia itself) recommend a 600W MINIMUM PSU. Another sound bit of advice: capacitors degrade over time, and if you're running your PSU close to max all the time, you leave no headroom for that degradation. This further increases the chance of a catastrophic failure.

A timely repair on crappy PSUs can drastically extend their useful life and by extension, the useful life of whatever device they may be integrated in.
Again, if you're talking about someone who turns to a forum for answers, I'm pretty sure they won't have the knowledge to catch a failing PSU before it catastrophically fails, let alone fix it themselves. No offense to those using this forum for advice.

Please give sound advice here, not advice that's marginal at best. People come here to those with greater knowledge seeking simple, or sometimes complex, answers. Coming on here and trying to justify your answer with something you found while Googling a response doesn't help the beginners understand anything.
 
One more thing I neglected to ask: Do you know what a capacitor does inside a computer and PSU?

Answer: a capacitor stores energy in the form of an electrostatic field.

So when a capacitor fails, it has one of two outcomes.
1. It fails to store energy and fizzles out (it bloats up, and the cap top bubbles and ruptures).

2. It fails while holding a full or partial electrical charge and sends an electrical spike through your computer. That's an internal power spike, and that's the worst kind to have. It's bad enough to have an external power spike go into your PSU and burn it out; it's a whole different ordeal when your internal components are the ones causing those spikes...

Think quantitatively about that for a bit.
 
Uber, I can see you aren't clueless, but you are still in the wrong here because you are trying to give very general advice instead of trying to help the OP and his case specifically. Let me help you by providing evidence from the very site you referenced.

In the GPU load results you linked, you mentioned a 437W load result, but you failed to note that you were pointing at the MSI Lightning Edition card, which is specially overbuilt with high-end power delivery components to allow heavy overclocking. Note that the reference 770 produces a lower 419W result.

It doesn't seem like much, but when you move on to the CPU results, you again fail to note that the total-system load results you linked for the GPU were taken with a 3960X processor, which is shown to cause anywhere from 370 to 540W total system consumption depending on the overclock. Compare that to the stock 4770K's much lower 264W. With both CPUs at stock speeds, you're still talking about a 100W+ difference in consumption, which is very large. Even overclocked to 4.6GHz, the 4770K still uses less power than a stock 3960X, and the OP doesn't even have one of these; he has a locked processor, so it's not even a consideration.

So realistically, OP's setup will see around 140-160W less power consumption than your linked 437W result, due to a lower-core-count, non-overclocked processor and a lower-clocked graphics card. Also remember that he probably won't be running super-strenuous benchmarks 24/7 but rather less intensive game loads; all of a sudden, your argument is looking less and less relevant for the OP.
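The subtraction above is trivial, but it's worth writing out; a quick sketch using this post's own figures (the 140-160 W range is the estimate made here, not a measurement):

```python
# Adjust the 437 W review result for the OP's configuration,
# per the 140-160 W reduction estimated in this post.
review_total = 437                        # W, OC'd 3960X + MSI 770 Lightning
reduction_low, reduction_high = 140, 160  # W, estimated savings

estimate_high = review_total - reduction_low   # 297 W
estimate_low = review_total - reduction_high   # 277 W

print(f"Estimated OP system peak: {estimate_low}-{estimate_high} W")
```

That lands right around the "barely break 300W" figure quoted earlier in the thread.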
 

Google is your friend. If you do a couple of searches about power supply failures, you will see that capacitors leaking/bulging is a far more common problem than capacitors exploding, and it usually takes weeks if not months from the first signs of failure to total failure in that mode. The primary cause of premature venting/bulging (assuming correct polarity and voltage rating) is under-rated AC ripple current, which heats the capacitor through I²R losses.

If you think a capacitor explodes due to the amount of electrostatic energy it stores, think again. The math does not add up: the capacitor has a few grams of aluminum with a specific heat of 0.91 J/g/K, while a 3300µF cap charged to 12V stores only ½CV² ≈ 238mJ, so even if the capacitor weighed only one gram of aluminum, ignoring all other components, that short (assuming no losses in the leads or the spark itself) would warm it up by only about 0.26°C, which is utterly insignificant. It needs an external energy source, such as an internal short (sinking a large chunk of the PSU's output) or extreme ripple current (voltage regulation going to hell due to excessive output-cap deterioration), to make it boil its electrolyte faster than the vent can split open to let the gasses out.
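The energy argument above is easy to verify numerically; a quick sketch using the post's values (3300 µF, 12 V, 1 g of aluminum as the deliberate underestimate):

```python
# Sanity-check the capacitor energy math: E = 1/2 * C * V^2,
# then the temperature rise dT = E / (m * c).
C = 3300e-6   # farads
V = 12.0      # volts
mass_g = 1.0  # grams of aluminum (intentionally low, per the post)
c_al = 0.91   # J/(g*K), specific heat of aluminum

energy_j = 0.5 * C * V**2            # ~0.238 J (238 mJ)
delta_t = energy_j / (mass_g * c_al) # ~0.26 K temperature rise

print(f"Stored energy: {energy_j * 1000:.0f} mJ")
print(f"Worst-case temperature rise: {delta_t:.2f} K")
```

A quarter of a degree of self-heating clearly can't make a capacitor explode on its own, which is the point being made.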

As for "all the partners recommending 600W PSUs", that is nothing more than a blind general recommendation since board manufacturers have no interest in making PSU recommendations on a case-by-case basis. They simply make a recommendation based on the worst yet still reasonable case they expect their products to find themselves in. People who have setups requiring uncomfortably close to that much power most likely know they need to step up while people who do not know better are unlikely to have setups requiring anywhere near that much and will most likely be fine no matter how crappy their 600+W PSU is. In other words, it is simply an ass-covering figure manufacturers are reasonably comfortable with.

BTW, it does help beginners understand something: many companies are sacrificing 5-10 years of trouble-free useful life to save well under $1 on parts cost.
 
InvalidError and doubletake, your answers are valid.

Yes, the article I linked used a more power-hungry processor than the 4770, and it's unlikely Biggie will be using an OC'd GTX 770 drawing that much extra power... but those are also assumptions (except for the 4770 part). Undoubtedly he would use less than the 437W the article's test computer drew. How much less would be conjecture, but let's play along with assumptions.

The 3960X is a 130W-rated CPU and the 4770 is rated at 84W. Since we're on board with assumptions, the best case is to scale the total system power by the ratio of the two TDPs, and the worst case is a simple arithmetic difference, all else being equal.

84/130 ≈ 0.65 (rounded for ease of calculation)
437W × 0.65 = 284.05W
______________________________________________________________
130W − 84W = 46W
437W − 46W = 391W
______________________________________________________________
284W max power consumption (best case) = 56.8% PSU usage during gaming
391W max power consumption (worst case) = 78.2% PSU usage during gaming

Either scenario presents ample headroom for this CPU / GPU combination.
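Under those same assumptions, the two scenarios are easy to reproduce; a minimal sketch using the post's numbers (including its rounded 0.65 TDP ratio):

```python
# Best-case vs worst-case scaling of the 437 W review result
# down to a 4770-based system, per the assumptions in this post.
measured_total = 437  # W, review system (OC'd 3960X + GTX 770 Lightning)
tdp_3960x = 130       # W
tdp_4770 = 84         # W
psu_rating = 500      # W, CX500M

tdp_ratio = 0.65                                       # 84/130, rounded as in the post
best_case = measured_total * tdp_ratio                 # whole system scales down
worst_case = measured_total - (tdp_3960x - tdp_4770)   # only the CPU delta is saved

print(f"Best case:  {best_case:.0f} W ({best_case / psu_rating:.1%} of PSU)")
print(f"Worst case: {worst_case:.0f} W ({worst_case / psu_rating:.1%} of PSU)")
```

Both endpoints leave the PSU below 80% load, which is the headroom argument being made.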

I was wrong...