*Note: This post has been pared down, modified, and added to as I've learned a little bit more about how GFX cards are powered.
I have a 2013 Dell Precision T5600 825W tower that needs a graphics card upgrade. It currently has an NVIDIA Quadro 2000 1GB. I am going to replace the outdated Quadro with the fastest NVIDIA GTX I can put in this jalopy without making it a complicated process. Anyone have any advice?
Here's what I've figured out so far: this computer doesn't have any spare power connectors coming off of the power supply unit; it powers the graphics card exclusively through the PCIe slot the card plugs into. After doing some research on PCIe-powered GFX cards, I've learned two things:
1) PCIe x16 slots are limited to delivering 75W of power to the card
2) There appears to be a whole class of newer PCIe-powered cards released in the last year or so which are designed to be effective upgrades for older systems like mine (systems with no spare power connectors coming off of the power supply unit), and which limit themselves to that 75W. Much of their ability to deliver greater performance at a much lower power draw comes from NVIDIA's newer, more power-efficient Pascal architecture.
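The budget check above boils down to a one-line comparison. Here's a minimal sketch of it in Python; the TDP figures are approximate NVIDIA reference numbers (not measurements of any specific board-partner card), so treat them as assumptions:

```python
# Does a card's rated board power fit within the ~75 W that a PCIe x16
# slot can deliver on its own (i.e. no 6-pin/8-pin aux connectors)?
PCIE_SLOT_BUDGET_W = 75

# Approximate reference TDPs -- assumed values, check your exact model.
candidate_tdp_w = {
    "GTX 750 Ti": 60,
    "GTX 950 (low-power variant)": 75,
    "GTX 1050 Ti": 75,
}

for card, tdp in candidate_tdp_w.items():
    verdict = "fits slot power" if tdp <= PCIE_SLOT_BUDGET_W else "needs aux power"
    print(f"{card}: {tdp} W -> {verdict}")
```

By these reference numbers all three candidates sit at or under the slot budget, which is exactly why they keep coming up in "no spare power connector" upgrade threads.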
Having learned some of these basics, I believe I've landed on the highest-performing GTX card that can plug directly into my motherboard: the ASUS GeForce GTX 1050 Ti Dual. While searching for more info about PCIe-powered cards, I found lots of posts about the GTX 750 Ti, which seems to be the reliable and venerable standard by which all PCIe-powered GFX cards are measured (the 2009 clunker Mac they have me working on at the office is running one).
A few of these posts led me to the GTX 950-2G and mini GTX 950-2G as the new hotness that takes over in this area from the old 750 Ti, and a couple of those posts led me in turn to the 1050 Ti as the *new* new Pascal hotness. There are other 1050 Ti versions from companies like MSI, as well as other configurations from ASUS, but the ASUS GeForce GTX 1050 Ti Dual seems to be the one with the highest clock and overall performance in this class of card.
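To put some rough numbers behind that 750 Ti → 950 → 1050 Ti progression, here's a back-of-envelope sketch using CUDA cores × boost clock as a crude proxy for shader throughput. The core counts and clocks are approximate reference specs (board-partner cards like the ASUS Dual clock somewhat higher), and this ignores per-clock architectural gains, so it likely understates Pascal's real-world lead:

```python
# Crude relative-throughput estimate: cores x boost clock, normalized
# to the GTX 750 Ti. Specs are approximate reference values (assumed).
specs = {
    "GTX 750 Ti":  {"cores": 640, "boost_mhz": 1085},
    "GTX 950":     {"cores": 768, "boost_mhz": 1188},
    "GTX 1050 Ti": {"cores": 768, "boost_mhz": 1392},
}

baseline = specs["GTX 750 Ti"]
base_score = baseline["cores"] * baseline["boost_mhz"]

for card, s in specs.items():
    score = s["cores"] * s["boost_mhz"]
    print(f"{card}: {score / base_score:.2f}x the 750 Ti")
```

Even on this naive measure the 1050 Ti comes out comfortably ahead of both older cards while staying inside the same slot-power envelope.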
So, before I pull the trigger on the GeForce GTX 1050 Ti Dual, can anyone tell me whether I'm missing anything important, or have I done my homework OK? Is there yet another step up within the 75W limit that I haven't learned about and mentioned here? Or is there some other aspect of this equation I'm ignorant of?
*Note regarding my choice of GTX vs Quadro:
I do After Effects/editing/some Cinema 4D for animation and other entertainment media, and I never use science and engineering software like SolidWorks. I have no need for the Quadro line's double-precision floating-point features and the like, and no need to blow piles of cash on a faster Quadro, especially in a computer this old, puny, and close to the end of its useful life. I'm grabbing the largest number of CUDA cores with as fast a clock as I can get that'll run off of my existing power. That's why I'll definitely be getting a GTX card and not a Quadro, and when I build a no-holds-barred GPU render rig later on, it will be GTX 1080 Ti-based, not Quadro-based.
Some specs:
- Windows 10 Pro
- Dual Intel Xeon E5-2620 six-core processors (2.0GHz, 15MB cache, 7.2 GT/s, Turbo)
- 32GB DDR3 RDIMM Memory, 1600MHz, ECC (4 x 8GB DIMMs)
- 1TB Samsung 840 EVO OS disk: I've yanked the faulty PERC H310 RAID Controller, disconnected a DVD drive, and directly connected the 840 EVO.
- 825W PSU: it seems to be built into the case. It's a Chicony Power Technology Co. unit.